Phonological Coding in Good and Poor Readers.
ERIC Educational Resources Information Center
Briggs, Pamela; Underwood, Geoffrey
1982-01-01
A set of four experiments investigates the relationship between phonological coding and reading ability, using a picture-word interference task and a decoding task. Results with regard to both adults and children suggest that while poor readers possess weak decoding skills, good and poor readers show equivalent evidence of direct semantic and…
Bounds on Block Error Probability for Multilevel Concatenated Codes
NASA Technical Reports Server (NTRS)
Lin, Shu; Moorthy, Hari T.; Stojanovic, Diana
1996-01-01
Maximum likelihood decoding of long block codes is not feasible due to large complexity. Some classes of codes are shown to be decomposable into multilevel concatenated codes (MLCC). For these codes, multistage decoding provides a good trade-off between performance and complexity. In this paper, we derive an upper bound on the probability of block error for MLCC. We use this bound to evaluate the difference in performance for different decompositions of some codes. The examples given show that a significant reduction in complexity can be achieved by increasing the number of decoding stages. The resulting performance degradation varies for different decompositions. A guideline is given for finding good m-level decompositions.
Protograph based LDPC codes with minimum distance linearly growing with block size
NASA Technical Reports Server (NTRS)
Divsalar, Dariush; Jones, Christopher; Dolinar, Sam; Thorpe, Jeremy
2005-01-01
We propose several LDPC code constructions that simultaneously achieve good threshold and error floor performance. Minimum distance is shown to grow linearly with block size (as with regular codes of variable-node degree at least 3) by considering ensemble average weight enumerators. Our constructions are based on projected graph, or protograph, structures that support high-speed decoder implementations. As with irregular ensembles, our constructions are sensitive to the proportion of degree-2 variable nodes. A code with too few such nodes tends to have an iterative decoding threshold that is far from the capacity threshold. A code with too many such nodes tends not to exhibit a minimum distance that grows linearly in block length. In this paper we also show that precoding can be used to lower the threshold of regular LDPC codes. The decoding thresholds of the proposed codes, which have minimum distance increasing linearly in block size, outperform those of regular LDPC codes. Furthermore, a family of low- to high-rate codes, with thresholds that adhere closely to their respective channel capacity thresholds, is presented. Simulation results for a few example codes show that the proposed codes have low error floors as well as good threshold SNR performance.
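As a rough illustration of the protograph construction described above, the sketch below lifts a small hypothetical base matrix into a quasi-cyclic parity-check matrix by replacing each base edge with a randomly shifted circulant. It is a minimal sketch under assumed parameters, not the authors' actual ensemble (real protographs may carry parallel edges and optimized shifts).

```python
# Minimal protograph "lifting": each 1 in the (hypothetical, binary) base
# matrix becomes a Z x Z circulant permutation; each 0 becomes a zero block.
import numpy as np

def lift_protograph(base, Z, rng):
    """Expand a binary base (proto) matrix into a QC-LDPC parity-check matrix."""
    m, n = base.shape
    H = np.zeros((m * Z, n * Z), dtype=np.uint8)
    I = np.eye(Z, dtype=np.uint8)
    for i in range(m):
        for j in range(n):
            if base[i, j]:
                shift = rng.integers(Z)  # random circulant shift
                H[i*Z:(i+1)*Z, j*Z:(j+1)*Z] = np.roll(I, shift, axis=1)
    return H

rng = np.random.default_rng(0)
base = np.array([[1, 1, 1, 0],   # hypothetical protograph: 2 checks,
                 [1, 1, 0, 1]])  # 4 variable nodes (two of degree 2)
H = lift_protograph(base, Z=8, rng=rng)
print(H.shape)            # (16, 32): a length-32, rate-1/2 code
print(H.sum(axis=0)[:8])  # column weights replicate the protograph degrees
```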
Design of ACM system based on non-greedy punctured LDPC codes
NASA Astrophysics Data System (ADS)
Lu, Zijun; Jiang, Zihong; Zhou, Lin; He, Yucheng
2017-08-01
In this paper, an adaptive coded modulation (ACM) scheme based on rate-compatible LDPC (RC-LDPC) codes was designed. The RC-LDPC codes were constructed by a non-greedy puncturing method which showed good performance in the high-code-rate region. Moreover, an incremental redundancy scheme for the LDPC-based ACM system over the AWGN channel was proposed. With this scheme, code rates vary from 2/3 to 5/6 and the complexity of the ACM system is lowered. Simulations show that the proposed ACM system achieves increasingly significant coding gains along with higher throughput.
Harris, Margaret; Moreno, Constanza
2006-01-01
Nine children with severe-profound prelingual hearing loss and single-word reading scores not more than 10 months behind chronological age (Good Readers) were matched with 9 children whose reading lag was at least 15 months (Poor Readers). Good Readers had significantly higher spelling and reading comprehension scores. They produced significantly more phonetic errors (indicating the use of phonological coding) and more often correctly represented the number of syllables in spelling than Poor Readers. They also scored more highly on orthographic awareness and were better at speech reading. Speech intelligibility was the same in the two groups. Cluster analysis revealed that only three Good Readers showed strong evidence of phonetic coding in spelling although seven had good representation of syllables; only four had high orthographic awareness scores. However, all 9 children were good speech readers, suggesting that a phonological code derived through speech reading may underpin reading success for deaf children.
Low-density parity-check codes for volume holographic memory systems.
Pishro-Nik, Hossein; Rahnavard, Nazanin; Ha, Jeongseok; Fekri, Faramarz; Adibi, Ali
2003-02-10
We investigate the application of low-density parity-check (LDPC) codes in volume holographic memory (VHM) systems. We show that a carefully designed irregular LDPC code has very good performance in VHM systems. We optimize high-rate LDPC codes for the nonuniform error pattern in holographic memories to reduce the bit error rate substantially. Prior knowledge of the noise distribution is used for designing as well as decoding the LDPC codes. We show that these codes have performance superior to that of Reed-Solomon (RS) codes and their regular LDPC counterparts. Our simulations show that we can increase the maximum storage capacity of holographic memories by more than 50 percent if we use irregular LDPC codes with soft-decision decoding instead of the conventionally employed RS codes with hard-decision decoding. The performance of these LDPC codes is close to the information-theoretic capacity.
Discussion on LDPC Codes and Uplink Coding
NASA Technical Reports Server (NTRS)
Andrews, Ken; Divsalar, Dariush; Dolinar, Sam; Moision, Bruce; Hamkins, Jon; Pollara, Fabrizio
2007-01-01
This slide presentation reviews the progress of the workgroup on Low-Density Parity-Check (LDPC) codes for space link coding. The workgroup is tasked with developing and recommending new error-correcting codes for near-Earth, Lunar, and deep space applications. Included in the presentation is a summary of the technical progress of the workgroup. Charts showing the LDPC decoder's sensitivity to symbol scaling errors are reviewed, as well as a chart comparing the performance of several frame synchronizer algorithms with that of some good codes, and LDPC decoder tests at ESTL. Also reviewed is a study on Coding, Modulation, and Link Protocol (CMLP) and the recommended codes. A design for the pseudo-randomizer with LDPC decoder and CRC is also reviewed. A chart summarizing the three proposed coding systems is also presented.
Comparison of two computer codes for crack growth analysis: NASCRAC Versus NASA/FLAGRO
NASA Technical Reports Server (NTRS)
Stallworth, R.; Meyers, C. A.; Stinson, H. C.
1989-01-01
Results are presented from a comparison study of two computer codes for crack growth analysis: NASCRAC and NASA/FLAGRO. The two computer codes gave comparable, conservative results when the part-through crack analysis solutions were compared with experimental test data. Results showed good correlation between the codes for the through-crack-at-a-lug solution, for which NASA/FLAGRO gave the more conservative results.
Lowe, Jeanne R; Raugi, Gregory J; Reiber, Gayle E; Whitney, Joanne D
2013-01-01
The purpose of this cohort study was to evaluate the effect of a 1-year intervention of an electronic medical record wound care template on the completeness of wound care documentation and medical coding compared to a similar time interval for the fiscal year preceding the intervention. From October 1, 2006, to September 30, 2007, a "good wound care" intervention was implemented at a rural Veterans Affairs facility to prevent amputations in veterans with diabetes and foot ulcers. The study protocol included a template with foot ulcer variables embedded in the electronic medical record to facilitate data collection, support clinical decision making, and improve ordering and medical coding. The intervention group showed significant differences in complete documentation of good wound care compared to the historic control group (χ² = 15.99, P < .001), complete documentation of coding for diagnoses and procedures (χ² = 30.23, P < .001), and complete documentation of both good wound care and coding for diagnoses and procedures (χ² = 14.96, P < .001). An electronic wound care template improved documentation of evidence-based interventions and facilitated coding for wound complexity and procedures.
Lee, Chaewoo
2014-01-01
The advancement of wideband wireless networks supports real-time services such as IPTV and live video streaming. However, because of the shared nature of the wireless medium, efficient resource allocation has been studied to achieve a high level of acceptability and proliferation of wireless multimedia. Scalable video coding (SVC) with adaptive modulation and coding (AMC) provides an excellent solution for wireless video streaming. By assigning different modulation and coding schemes (MCSs) to the video layers, SVC can provide good video quality to users in good channel conditions and basic video quality to users in bad channel conditions. For optimal resource allocation, a key issue in applying SVC to wireless multicast service is how to assign MCSs and time resources to each SVC layer under heterogeneous channel conditions. We formulate this problem with integer linear programming (ILP) and provide numerical results to show the performance in an IEEE 802.16m environment. The results show that our methodology enhances the overall system throughput compared to an existing algorithm. PMID:25276862
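The abstract does not reproduce the ILP itself, so the following is a hypothetical toy version of the layer-to-MCS assignment problem, solved by brute force rather than by an ILP solver; all layer sizes, MCS rates, and user counts are invented for illustration.

```python
# Toy layer-to-MCS assignment for multicast SVC: pick one MCS per layer so
# total transmission time fits the frame, maximizing a simple utility.
from itertools import product

layer_bits = [300, 200, 100]   # bits per frame needed by each SVC layer
mcs_rate   = [1.0, 2.0, 4.0]   # bits/symbol for MCS 0..2 (hypothetical)
users_ok   = [30, 20, 8]       # users whose channel supports each MCS
symbols    = 400               # symbols available per frame

best = None
for assign in product(range(3), repeat=3):   # MCS index chosen per layer
    time = sum(layer_bits[l] / mcs_rate[m] for l, m in enumerate(assign))
    if time > symbols:
        continue
    # a layer helps only users who can decode it AND all lower layers
    reach = [min(users_ok[m] for m in assign[:l+1]) for l in range(3)]
    util = sum(reach[l] * layer_bits[l] for l in range(3))
    if best is None or util > best[0]:
        best = (util, assign, time)

print(best)   # (utility, per-layer MCS choice, symbols used)
```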
Cardinality enhancement utilizing Sequential Algorithm (SeQ) code in OCDMA system
NASA Astrophysics Data System (ADS)
Fazlina, C. A. S.; Rashidi, C. B. M.; Rahman, A. K.; Aljunid, S. A.
2017-11-01
Optical Code Division Multiple Access (OCDMA) has become important with the increasing demand for high capacity and speed in optical communication networks, because of the high efficiency the OCDMA technique can achieve in making full use of the fibre bandwidth. In this paper we focus on the Sequential Algorithm (SeQ) code with the AND detection technique, using the Optisystem design tool. The results reveal that the SeQ code is capable of eliminating Multiple Access Interference (MAI) and improving the Bit Error Rate (BER), Phase Induced Intensity Noise (PIIN), and orthogonality between users in the system. From the results, SeQ shows good BER performance and can accommodate 190 simultaneous users, in contrast with existing codes; the SeQ code thus enhances the system by about 36% and 111% relative to the FCC and DCS codes, respectively. In addition, SeQ shows good BER performance of 10^-25 at 155 Mbps in comparison with the 622 Mbps, 1 Gbps and 2 Gbps bit rates. From the plotted graph, the 155 Mbps bit rate is a suitable speed for FTTH and LAN networks. These results demonstrate the superior performance of the SeQ code, which offers an opportunity for better quality of service in OCDMA-based optical access networks for future generations' usage.
Multi-dimensional free-electron laser simulation codes : a comparison study.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Biedron, S. G.; Chae, Y. C.; Dejus, R. J.
A self-amplified spontaneous emission (SASE) free-electron laser (FEL) is under construction at the Advanced Photon Source (APS). Five FEL simulation codes were used in the design phase: GENESIS, GINGER, MEDUSA, RON, and TDA3D. Initial comparisons between each of these independent formulations show good agreement for the parameters of the APS SASE FEL.
Multi-Dimensional Free-Electron Laser Simulation Codes: A Comparison Study
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nuhn, Heinz-Dieter
A self-amplified spontaneous emission (SASE) free-electron laser (FEL) is under construction at the Advanced Photon Source (APS). Five FEL simulation codes were used in the design phase: GENESIS, GINGER, MEDUSA, RON, and TDA3D. Initial comparisons between each of these independent formulations show good agreement for the parameters of the APS SASE FEL.
The APS SASE FEL : modeling and code comparison.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Biedron, S. G.
A self-amplified spontaneous emission (SASE) free-electron laser (FEL) is under construction at the Advanced Photon Source (APS). Five FEL simulation codes were used in the design phase: GENESIS, GINGER, MEDUSA, RON, and TDA3D. Initial comparisons between each of these independent formulations show good agreement for the parameters of the APS SASE FEL.
Simulation of Hypervelocity Impact on Aluminum-Nextel-Kevlar Orbital Debris Shields
NASA Technical Reports Server (NTRS)
Fahrenthold, Eric P.
2000-01-01
An improved hybrid particle-finite element method has been developed for hypervelocity impact simulation. The method combines the general contact-impact capabilities of particle codes with the true Lagrangian kinematics of large strain finite element formulations. Unlike some alternative schemes which couple Lagrangian finite element models with smooth particle hydrodynamics, the present formulation makes no use of slidelines or penalty forces. The method has been implemented in a parallel, three dimensional computer code. Simulations of three dimensional orbital debris impact problems using this parallel hybrid particle-finite element code show good agreement with experiment and good speedup in parallel computation. The simulations included single- and multi-plate shields as well as aluminum and composite shielding materials, at an impact velocity of eleven kilometers per second.
Error control techniques for satellite and space communications
NASA Technical Reports Server (NTRS)
Costello, Daniel J., Jr.
1992-01-01
Work performed during the reporting period is summarized. A method for constructing robustly good trellis codes for use with sequential decoding was developed; the robustly good trellis codes provide a much better trade-off between free distance and distance profile. The unequal error protection capabilities of convolutional codes were studied. The problem of finding good large-constraint-length, low-rate convolutional codes for deep space applications was investigated. A formula for computing the free distance of rate-1/n convolutional codes was discovered. Double memory (DM) codes, codes with two memory units per bit position, were studied, and a search for optimal DM codes is being conducted. An algorithm for constructing convolutional codes from a given quasi-cyclic code was developed. Papers based on the above work are included in the appendix.
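Since the closed-form free-distance formula mentioned above is not reproduced in the summary, the sketch below shows a generic alternative: computing the free distance of a rate-1/n convolutional code by a shortest-path search over its trellis, illustrated on the standard (7,5) octal, constraint-length-3 code (expected dfree = 5).

```python
# Free distance via a shortest-path (Dijkstra) search over the code trellis:
# branch cost = Hamming weight of the n output bits; we leave the zero state
# with a 1 input and stop at the first (cheapest) return to the zero state.
import heapq

def free_distance(gens_octal, K):
    gens = [int(g, 8) for g in gens_octal]
    n_states = 1 << (K - 1)
    mask = n_states - 1

    def branch(state, bit):
        reg = (state << 1) | bit                            # K-bit register
        w = sum(bin(reg & g).count("1") % 2 for g in gens)  # output weight
        return reg & mask, w

    s0, w0 = branch(0, 1)          # force a nonzero path out of state 0
    pq, seen = [(w0, s0)], {}
    while pq:
        d, s = heapq.heappop(pq)
        if s == 0:
            return d               # first re-merge with the all-zero path
        if s in seen:
            continue
        seen[s] = d
        for bit in (0, 1):
            t, w = branch(s, bit)
            if t not in seen:
                heapq.heappush(pq, (d + w, t))

print(free_distance(("7", "5"), K=3))  # prints 5 for the (7,5) code
```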
Spatially coupled low-density parity-check error correction for holographic data storage
NASA Astrophysics Data System (ADS)
Ishii, Norihiko; Katano, Yutaro; Muroi, Tetsuhiko; Kinoshita, Nobuhiro
2017-09-01
The spatially coupled low-density parity-check (SC-LDPC) code was considered for holographic data storage. The superiority of SC-LDPC was studied by simulation. The simulations show that the performance of SC-LDPC depends on the lifting number; when the lifting number is over 100, SC-LDPC shows better error correctability than irregular LDPC. SC-LDPC is applied to the 5:9 modulation code, which is one of the differential codes. In simulation, the error-free point is near 2.8 dB and an input error rate of over 10^-1 can be corrected. From these simulation results, this error correction code can be applied to actual holographic data storage test equipment. Results showed that an error rate of 8 × 10^-2 can be corrected; furthermore, the code works effectively and shows good error correctability.
NASA Technical Reports Server (NTRS)
Coakley, T. J.; Hsieh, T.
1985-01-01
Numerical simulations of steady and unsteady transonic diffuser flows using two different computer codes are discussed and compared with experimental data. The codes solve the Reynolds-averaged, compressible, Navier-Stokes equations using various turbulence models. One of the codes has been applied extensively to diffuser flows and uses the hybrid method of MacCormack; this code is relatively inefficient numerically. The second code, which was developed more recently, is fully implicit and is relatively efficient numerically. Simulations of steady flows using the implicit code are shown to be in good agreement with simulations using the hybrid code. Both simulations are in good agreement with experimental results. Simulations of unsteady flows using the two codes are in good qualitative agreement with each other, although the quantitative agreement is not as good as in the steady flow cases. The implicit code is shown to be eight times faster than the hybrid code for unsteady flow calculations and up to 32 times faster for steady flow calculations. Results of calculations using alternative turbulence models are also discussed.
Rangachari, Pavani
2008-01-01
CONTEXT/PURPOSE: With the growing momentum toward hospital quality measurement and reporting by public and private health care payers, hospitals face increasing pressures to improve their medical record documentation and administrative data coding accuracy. This study explores the relationship between the organizational knowledge-sharing structure related to quality and hospital coding accuracy for quality measurement. Simultaneously, this study seeks to identify other leadership/management characteristics associated with coding for quality measurement. Drawing upon complexity theory, the literature on "professional complex systems" has put forth various strategies for managing change and turnaround in professional organizations. In so doing, it has emphasized the importance of knowledge creation and organizational learning through interdisciplinary networks. This study integrates complexity, network structure, and "subgoals" theories to develop a framework for knowledge-sharing network effectiveness in professional complex systems. This framework is used to design an exploratory and comparative research study. The sample consists of 4 hospitals, 2 showing "good coding" accuracy for quality measurement and 2 showing "poor coding" accuracy. Interviews and surveys are conducted with administrators and staff in the quality, medical staff, and coding subgroups in each facility. Findings of this study indicate that good coding performance is systematically associated with a knowledge-sharing network structure rich in brokerage and hierarchy (with leaders connecting different professional subgroups to each other and to the external environment), rather than in density (where everyone is directly connected to everyone else). It also implies that for the hospital organization to adapt to the changing environment of quality transparency, senior leaders must undertake proactive and unceasing efforts to coordinate knowledge exchange across physician and coding subgroups and connect these subgroups with the changing external environment.
Low Density Parity Check Codes Based on Finite Geometries: A Rediscovery and More
NASA Technical Reports Server (NTRS)
Kou, Yu; Lin, Shu; Fossorier, Marc
1999-01-01
Low density parity check (LDPC) codes with iterative decoding based on belief propagation achieve astonishing error performance close to the Shannon limit. No algebraic or geometric method for constructing these codes has been reported and they are largely generated by computer search. As a result, encoding of long LDPC codes is in general very complex. This paper presents two classes of high rate LDPC codes whose constructions are based on finite Euclidean and projective geometries, respectively. These classes of codes are cyclic and have good constraint parameters and minimum distances. The cyclic structure allows the use of linear feedback shift registers for encoding. These finite geometry LDPC codes achieve very good error performance with either soft-decision iterative decoding based on belief propagation or Gallager's hard-decision bit flipping algorithm. These codes can be punctured or extended to obtain other good LDPC codes. A generalization of these codes is also presented.
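As a small illustration of why the cyclic structure matters for encoding, the sketch below performs the systematic encoding that a linear feedback shift register implements in hardware, namely polynomial division by the generator over GF(2); the (7,4) Hamming code stands in here for the much longer finite-geometry LDPC codes of the paper.

```python
# Systematic cyclic encoding: divide x^(n-k) * m(x) by the generator g(x)
# over GF(2) and append the remainder as parity. Bits are MSB-first.
def cyclic_encode(msg_bits, gen_bits, n):
    k = n - (len(gen_bits) - 1)
    assert len(msg_bits) == k
    rem = list(msg_bits) + [0] * (n - k)   # dividend: message shifted up
    for i in range(k):                     # long division over GF(2)
        if rem[i]:
            for j, g in enumerate(gen_bits):
                rem[i + j] ^= g
    return list(msg_bits) + rem[k:]        # systematic: message + parity

g = [1, 0, 1, 1]                           # g(x) = x^3 + x + 1, MSB first
cw = cyclic_encode([1, 0, 0, 1], g, n=7)
print(cw)                                  # [1, 0, 0, 1, 1, 1, 0]
```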
Modeling of the EAST ICRF antenna with ICANT Code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Qin Chengming; Zhao Yanping; Colas, L.
2007-09-28
A Resonant Double Loop (RDL) antenna for ion-cyclotron range of frequencies (ICRF) on the Experimental Advanced Superconducting Tokamak (EAST) is under construction. The new antenna is analyzed using the antenna coupling code ICANT, which self-consistently determines the surface currents on all antenna parts. In this work, the new ICRF antenna is modeled with this code to assess the near fields in front of the antenna and to analyze its coupling capabilities. Moreover, the antenna reactive radiated power computed by ICANT shows good agreement with that deduced from Transmission Line (TL) theory.
Modeling of the EAST ICRF antenna with ICANT Code
NASA Astrophysics Data System (ADS)
Qin, Chengming; Zhao, Yanping; Colas, L.; Heuraux, S.
2007-09-01
A Resonant Double Loop (RDL) antenna for ion-cyclotron range of frequencies (ICRF) on the Experimental Advanced Superconducting Tokamak (EAST) is under construction. The new antenna is analyzed using the antenna coupling code ICANT, which self-consistently determines the surface currents on all antenna parts. In this work, the new ICRF antenna is modeled with this code to assess the near fields in front of the antenna and to analyze its coupling capabilities. Moreover, the antenna reactive radiated power computed by ICANT shows good agreement with that deduced from Transmission Line (TL) theory.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Makwana, K. D., E-mail: kirit.makwana@gmx.com; Cattaneo, F.; Zhdankin, V.
Simulations of decaying magnetohydrodynamic (MHD) turbulence are performed with a fluid and a kinetic code. The initial condition is an ensemble of long-wavelength, counter-propagating, shear-Alfvén waves, which interact and rapidly generate strong MHD turbulence. The total energy is conserved and the rate of turbulent energy decay is very similar in both codes, although the fluid code has numerical dissipation, whereas the kinetic code has kinetic dissipation. The inertial range power spectrum index is similar in both codes. The fluid code shows a perpendicular wavenumber spectral slope of k_⊥^(-1.3). The kinetic code shows a spectral slope of k_⊥^(-1.5) for the smaller simulation domain, and k_⊥^(-1.3) for the larger domain. We estimate that collisionless damping mechanisms in the kinetic code can account for the dissipation of the observed nonlinear energy cascade. Current sheets are geometrically characterized. Their lengths and widths are in good agreement between the two codes. The length scales linearly with the driving scale of the turbulence. In the fluid code, their thickness is determined by the grid resolution as there is no explicit diffusivity. In the kinetic code, their thickness is very close to the skin depth, irrespective of the grid resolution. This work shows that kinetic codes can reproduce the MHD inertial range dynamics at large scales, while at the same time capturing important kinetic physics at small scales.
A good performance watermarking LDPC code used in high-speed optical fiber communication system
NASA Astrophysics Data System (ADS)
Zhang, Wenbo; Li, Chao; Zhang, Xiaoguang; Xi, Lixia; Tang, Xianfeng; He, Wenxue
2015-07-01
A watermarking LDPC code, a strategy designed to improve the performance of the traditional LDPC code, is introduced. By inserting some pre-defined watermarking bits into the original LDPC code, we can obtain a more accurate estimate of the noise level in the fiber channel. We then use this estimate to modify the probability distribution function (PDF) used in the initialization of the belief propagation (BP) decoding algorithm. This algorithm was tested in a 128 Gb/s PDM-DQPSK optical communication system, and the results showed that the watermarking LDPC code has better tolerance to polarization mode dispersion (PMD) and nonlinearity than the traditional LDPC code. Also, at the cost of about 2.4% of redundancy for watermarking bits, the decoding efficiency of the watermarking LDPC code is about twice that of the traditional one.
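A minimal sketch of the watermarking idea, under assumed BPSK/AWGN modeling rather than the paper's 128 Gb/s PDM-DQPSK system: known bits at fixed positions let the receiver estimate the noise level, which then sets the channel LLRs used to initialize BP decoding. Positions and overhead below are illustrative.

```python
# Estimate channel noise from embedded watermark bits, then form BP LLRs.
import numpy as np

rng = np.random.default_rng(1)
n, wm_step = 1000, 40                     # ~2.5% watermark overhead
wm_pos = np.arange(0, n, wm_step)
bits = rng.integers(0, 2, n)
bits[wm_pos] = 1                          # pre-defined watermark bits (all 1s)

sigma_true = 0.6
rx = (1 - 2 * bits) + sigma_true * rng.standard_normal(n)  # BPSK: 0->+1, 1->-1

# noise estimate from watermark positions only (their transmit value is known)
sigma_hat = np.sqrt(np.mean((rx[wm_pos] - (-1.0)) ** 2))
llr = 2 * rx / sigma_hat**2               # channel LLRs fed to BP decoding
print(round(sigma_true, 3), round(sigma_hat, 3))
```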
Topological color codes on Union Jack lattices: a stable implementation of the whole Clifford group
DOE Office of Scientific and Technical Information (OSTI.GOV)
Katzgraber, Helmut G.; Theoretische Physik, ETH Zurich, CH-8093 Zurich; Bombin, H.
We study the error threshold of topological color codes on Union Jack lattices that allow for the full implementation of the whole Clifford group of quantum gates. After mapping the error-correction process onto a statistical mechanical random three-body Ising model on a Union Jack lattice, we compute its phase diagram in the temperature-disorder plane using Monte Carlo simulations. Surprisingly, topological color codes on Union Jack lattices have a similar error stability to color codes on triangular lattices, as well as to the Kitaev toric code. The enhanced computational capabilities of the topological color codes on Union Jack lattices with respect to triangular lattices and the toric code, combined with the inherent robustness of this implementation, show good prospects for future stable quantum computer implementations.
NASA Astrophysics Data System (ADS)
Lahaye, S.; Huynh, T. D.; Tsilanizara, A.
2016-03-01
Uncertainty quantification of outputs of interest in the nuclear fuel cycle is an important issue for nuclear safety, from nuclear facilities to long-term deposits. Most of those outputs are functions of the isotopic vector density, which is estimated by fuel cycle codes such as DARWIN/PEPIN2, MENDEL, ORIGEN or FISPACT. The CEA code systems DARWIN/PEPIN2 and MENDEL propagate, by two different methods, the uncertainty from nuclear data inputs to isotopic concentrations and decay heat. This paper shows comparisons between those two codes on a Uranium-235 thermal fission pulse. The effect of the choice of nuclear data evaluation (ENDF/B-VII.1, JEFF-3.1.1 and JENDL-2011) is also examined. All results show good agreement between both codes and methods, ensuring the reliability of both approaches for a given evaluation.
Hu, Yu; Zylberberg, Joel; Shea-Brown, Eric
2014-01-01
Over repeat presentations of the same stimulus, sensory neurons show variable responses. This “noise” is typically correlated between pairs of cells, and a question with rich history in neuroscience is how these noise correlations impact the population's ability to encode the stimulus. Here, we consider a very general setting for population coding, investigating how information varies as a function of noise correlations, with all other aspects of the problem – neural tuning curves, etc. – held fixed. This work yields unifying insights into the role of noise correlations. These are summarized in the form of theorems, and illustrated with numerical examples involving neurons with diverse tuning curves. Our main contributions are as follows. (1) We generalize previous results to prove a sign rule (SR) — if noise correlations between pairs of neurons have opposite signs vs. their signal correlations, then coding performance will improve compared to the independent case. This holds for three different metrics of coding performance, and for arbitrary tuning curves and levels of heterogeneity. This generality is true for our other results as well. (2) As also pointed out in the literature, the SR does not provide a necessary condition for good coding. We show that a diverse set of correlation structures can improve coding. Many of these violate the SR, as do experimentally observed correlations. There is structure to this diversity: we prove that the optimal correlation structures must lie on boundaries of the possible set of noise correlations. (3) We provide a novel set of necessary and sufficient conditions, under which the coding performance (in the presence of noise) will be as good as it would be if there were no noise present at all. PMID:24586128
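The sign rule can be checked numerically in a two-neuron example: with both tuning-curve derivatives positive (positive signal correlation), the linear Fisher information I = f'ᵀ C⁻¹ f' should increase for negative noise correlation and decrease for positive. The numbers below are hypothetical, chosen only to make the effect visible.

```python
# Linear Fisher information vs. noise correlation for two neurons.
import numpy as np

fprime = np.array([1.0, 1.0])          # tuning-curve derivatives at stimulus

def info(rho, var=1.0):
    C = var * np.array([[1.0, rho], [rho, 1.0]])   # noise covariance
    return fprime @ np.linalg.solve(C, fprime)

for rho in (-0.5, 0.0, 0.5):
    print(rho, round(info(rho), 3))
# rho=-0.5 -> 4.0 (beats independence), rho=0.0 -> 2.0, rho=+0.5 -> 1.333
```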
Image compression using quad-tree coding with morphological dilation
NASA Astrophysics Data System (ADS)
Wu, Jiaji; Jiang, Weiwei; Jiao, Licheng; Wang, Lei
2007-11-01
In this paper, we propose a new algorithm that integrates a morphological dilation operation into quad-tree coding; the purpose is to let each technique compensate for the other's drawbacks. The new algorithm can not only quickly find the seed significant coefficient for dilation but also overcome the block-boundary limitation of quad-tree coding. We also make full use of both within-subband and cross-subband correlation to avoid the expensive cost of representing insignificant coefficients. Experimental results show that our algorithm outperforms SPECK and SPIHT. Without using any arithmetic coding, our algorithm achieves good performance with low computational cost, and it is more suitable for mobile devices and scenarios with strict real-time requirements.
The NASA-LeRC wind turbine sound prediction code
NASA Technical Reports Server (NTRS)
Viterna, L. A.
1981-01-01
Development of the wind turbine sound prediction code began as part of an effort to understand and reduce the noise generated by Mod-1. Tone sound levels predicted with this code are in good agreement with measured data taken in the vicinity of the Mod-1 wind turbine (less than 2 rotor diameters). Comparison in the far field indicates that propagation effects due to terrain and atmospheric conditions may amplify the actual sound levels by 6 dB. Parametric analysis using the code shows that the predominant contributors to Mod-1 rotor noise are (1) the velocity deficit in the wake of the support tower, (2) the high rotor speed, and (3) off-optimum operation.
Absorptive coding metasurface for further radar cross section reduction
NASA Astrophysics Data System (ADS)
Sui, Sai; Ma, Hua; Wang, Jiafu; Pang, Yongqiang; Feng, Mingde; Xu, Zhuo; Qu, Shaobo
2018-02-01
Lossless coding metasurfaces and metamaterial absorbers have been widely used for radar cross section (RCS) reduction and stealth applications, which merely depend on redirecting electromagnetic wave energy into various oblique angles or absorbing electromagnetic energy, respectively. Here, an absorptive coding metasurface capable of both the flexible manipulation of backward scattering and further wideband bistatic RCS reduction is proposed. The original idea is carried out by utilizing absorptive elements, such as metamaterial absorbers, to establish a coding metasurface. We establish an analytical connection between an arbitrary absorptive coding metasurface arrangement of both the amplitude and phase and its far-field pattern. Then, as an example, an absorptive coding metasurface is demonstrated as a nonperiodic metamaterial absorber, which indicates an expected better performance of RCS reduction than the traditional lossless coding metasurface and periodic metamaterial-absorber. Both theoretical analysis and full-wave simulation results show good accordance with the experiment.
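A sketch of the amplitude-and-phase-to-far-field connection stated above, under a standard array-factor approximation: each metasurface element contributes a complex reflection coefficient, and the backscattered field in a given direction is the phased sum over elements. The 1-bit random pattern and the element amplitude below are invented for illustration.

```python
# Far-field array factor of an N x N absorptive coding metasurface.
import numpy as np

N, d = 16, 0.5                    # 16 x 16 elements, half-wavelength spacing
rng = np.random.default_rng(2)
phase = np.pi * rng.integers(0, 2, (N, N))   # 1-bit coding: 0 or pi
amp = 0.3 * np.ones((N, N))                  # absorptive elements: |r| < 1

def array_factor(theta, phi):
    k = 2 * np.pi                            # wavenumber in units of 1/lambda
    m = np.arange(N)
    ux = np.sin(theta) * np.cos(phi)
    uy = np.sin(theta) * np.sin(phi)
    px = np.exp(1j * k * d * m * ux)         # per-row propagation phases
    py = np.exp(1j * k * d * m * uy)         # per-column propagation phases
    r = amp * np.exp(1j * phase)             # complex reflection coefficients
    return abs(px @ r @ py)

# backscatter toward broadside vs. a uniform metallic plate of the same size
print(array_factor(0.0, 0.0), N * N)
```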
Un-collided-flux preconditioning for the first order transport equation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rigley, M.; Koebbe, J.; Drumm, C.
2013-07-01
Two codes were tested for the first order neutron transport equation using finite element methods. The un-collided-flux solution is used as a preconditioner for each of these methods. These codes include a least squares finite element method and a discontinuous finite element method. The performance of each code is shown on problems in one and two dimensions. The un-collided-flux preconditioner shows good speedup on each of the given methods. The un-collided-flux preconditioner has been used on the second-order equation, and here we extend those results to the first order equation. (authors)
Makwana, K. D.; Zhdankin, V.; Li, H.; ...
2015-04-10
We performed simulations of decaying magnetohydrodynamic (MHD) turbulence with a fluid and a kinetic code. The initial condition is an ensemble of long-wavelength, counter-propagating, shear-Alfvén waves, which interact and rapidly generate strong MHD turbulence. The total energy is conserved and the rate of turbulent energy decay is very similar in both codes, although the fluid code has numerical dissipation, whereas the kinetic code has kinetic dissipation. The inertial range power spectrum index is similar in both codes. The fluid code shows a perpendicular wavenumber spectral slope of k_⊥^(-1.3). The kinetic code shows a spectral slope of k_⊥^(-1.5) for the smaller simulation domain, and k_⊥^(-1.3) for the larger domain. We then estimate that collisionless damping mechanisms in the kinetic code can account for the dissipation of the observed nonlinear energy cascade. Current sheets are geometrically characterized. Their lengths and widths are in good agreement between the two codes. The length scales linearly with the driving scale of the turbulence. In the fluid code, their thickness is determined by the grid resolution as there is no explicit diffusivity. In the kinetic code, their thickness is very close to the skin depth, irrespective of the grid resolution. Finally, this work shows that kinetic codes can reproduce the MHD inertial range dynamics at large scales, while at the same time capturing important kinetic physics at small scales.
NASA Technical Reports Server (NTRS)
Rajpal, Sandeep; Rhee, Do Jun; Lin, Shu
1997-01-01
The first part of this paper presents a simple and systematic technique for constructing multidimensional M-ary phase shift keying (MPSK) trellis coded modulation (TCM) codes. The construction is based on a multilevel concatenation approach in which binary convolutional codes with good free branch distances are used as the outer codes and block MPSK modulation codes are used as the inner codes (or the signal spaces). Conditions on the phase invariance of these codes are derived, and a multistage decoding scheme for these codes is proposed. The proposed technique can be used to construct good codes for both the additive white Gaussian noise (AWGN) and fading channels, as is shown in the second part of this paper.
Initial verification and validation of RAZORBACK - A research reactor transient analysis code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Talley, Darren G.
2015-09-01
This report describes the work and results of the initial verification and validation (V&V) of the beta release of the Razorback code. Razorback is a computer code designed to simulate the operation of a research reactor (such as the Annular Core Research Reactor (ACRR)) by a coupled numerical solution of the point reactor kinetics equations, the energy conservation equation for fuel element heat transfer, and the mass, momentum, and energy conservation equations for the water cooling of the fuel elements. This initial V&V effort was intended to confirm that the code work to date shows good agreement between simulation and actual ACRR operations, indicating that the subsequent V&V effort for the official release of the code will be successful.
Influence of flowfield and vehicle parameters on engineering aerothermal methods
NASA Technical Reports Server (NTRS)
Wurster, Kathryn E.; Zoby, E. Vincent; Thompson, Richard A.
1989-01-01
The reliability and flexibility of three engineering codes used in the aerospace industry (AEROHEAT, INCHES, and MINIVER) were investigated by comparing the results of these codes with Reentry F flight data and ground-test heat-transfer data for a range of cone angles, and with the predictions obtained using the detailed VSL3D code; the engineering solutions were also compared with each other. In particular, the impact of several vehicle and flow-field parameters on the heat transfer, and the capability of the engineering codes to predict these results, were determined. It was found that entropy, pressure gradient, nose bluntness, gas chemistry, and angle of attack all affect heating levels. A comparison of the results of the three engineering codes with Reentry F flight data and with the predictions of the VSL3D code showed very good agreement within the codes' regions of applicability. It is emphasized that the parameters used in this study can significantly influence the actual heating levels and the prediction capability of a code.
H.264 Layered Coded Video over Wireless Networks: Channel Coding and Modulation Constraints
NASA Astrophysics Data System (ADS)
Ghandi, M. M.; Barmada, B.; Jones, E. V.; Ghanbari, M.
2006-12-01
This paper considers the prioritised transmission of H.264 layered coded video over wireless channels. For appropriate protection of video data, methods such as prioritised forward error correction coding (FEC) or hierarchical quadrature amplitude modulation (HQAM) can be employed, but each imposes system constraints. FEC provides good protection but at the price of a high overhead and complexity. HQAM is less complex and does not introduce any overhead, but permits only fixed data ratios between the priority layers. Such constraints are analysed and practical solutions are proposed for layered transmission of data-partitioned and SNR-scalable coded video where combinations of HQAM and FEC are used to exploit the advantages of both coding methods. Simulation results show that the flexibility of SNR scalability and absence of picture drift imply that SNR scalability as modelled is superior to data partitioning in such applications.
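As an illustration of the HQAM option, the sketch below maps two high-priority bits and two low-priority bits onto a hierarchical 16-QAM symbol with coordinates ±(d1 ± d2) per axis; the ratio alpha = d1/d2 controls how much extra protection the high-priority layer receives. The parameter values are illustrative, not those of the paper.

```python
# Hierarchical 16-QAM: HP bits pick the quadrant, LP bits the point inside it.
def hqam16(hp, lp, alpha=2.0):
    d2 = 1.0
    d1 = alpha * d2                      # alpha > 1: HP layer better protected
    sx = (1 - 2 * hp[0]) * (d1 + (1 - 2 * lp[0]) * d2)
    sy = (1 - 2 * hp[1]) * (d1 + (1 - 2 * lp[1]) * d2)
    return sx + 1j * sy

# the four points of the (+,+) quadrant; they cluster tighter as alpha grows
for lp in ((0, 0), (0, 1), (1, 0), (1, 1)):
    print(lp, hqam16((0, 0), lp))        # (3+3j), (3+1j), (1+3j), (1+1j)
```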
Correlated Errors in the Surface Code
NASA Astrophysics Data System (ADS)
Lopez, Daniel; Mucciolo, E. R.; Novais, E.
2012-02-01
A milestone step in the development of quantum information technology would be the ability to design and operate a reliable quantum memory. The greatest obstacle to creating such a device has been decoherence due to the unavoidable interaction between the quantum system and its environment. Quantum Error Correction is therefore an essential ingredient of any quantum information device. A great deal of attention has been given to surface codes, since they have very good scaling properties. In this seminar, we discuss the time evolution of a qubit encoded in the logical basis of a surface code. The system is interacting with a bosonic environment at zero temperature. Our results show how detrimental spatial and temporal correlations can be to the efficiency of the code.
Validation of CFD Codes for Parawing Geometries in Subsonic to Supersonic Flows
NASA Technical Reports Server (NTRS)
Cruz-Ayoroa, Juan G.; Garcia, Joseph A.; Melton, John E.
2014-01-01
Computational fluid dynamics studies of a rigid parawing at Mach numbers from 0.8 to 4.65 were carried out using three established codes: an inviscid solver, a viscous solver, and an independent panel-method code. Pressure distributions along four chordwise sections of the wing were compared to experimental wind tunnel data gathered from NASA technical reports. Results show good prediction of the overall trends and magnitudes of the pressure distributions for the inviscid and viscous solvers. Pressure results for the panel-method code diverge from the test data at large angles of attack due to shock interaction phenomena. Trends in the flow behavior and their effect on the integrated forces and moments on this type of wing are examined in detail using the inviscid CFD code results.
Vector Adaptive/Predictive Encoding Of Speech
NASA Technical Reports Server (NTRS)
Chen, Juin-Hwey; Gersho, Allen
1989-01-01
A vector adaptive/predictive technique for digital encoding of speech signals yields decoded speech of very good quality after transmission at a coding rate of 9.6 kb/s, and of reasonably good quality at 4.8 kb/s. It requires 3 to 4 million multiplications and additions per second. The technique combines advantages of adaptive/predictive coding and of code-excited linear prediction, which yields speech of high quality but requires 600 million multiplications and additions per second at an encoding rate of 4.8 kb/s. The vector adaptive/predictive coding technique thus bridges the gaps in performance and complexity between adaptive/predictive coding and code-excited linear prediction.
Adogu, O U; Ilika, A L
2006-12-01
Road traffic accidents (RTAs) represent a major epidemic of non-communicable disease in the country and have escalated with the introduction of the new phenomenon of commercial motorcycle transportation such as is found in the two urban towns of Nnewi and Awka of Anambra State, Nigeria. Making use of a pre-tested, semi-structured, interviewer-administered questionnaire, relevant data on socio-demographic and motorcycle characteristics were collected from a sample of commercial motorcyclists selected by a systematic sampling technique. Their knowledge of and attitude towards road traffic and safety codes were elicited. The results showed that the all-male commercial motorcyclists had a mean age of 30 ± 8.9 years. One hundred and seventy-six (32.6%) possessed good knowledge of road traffic codes and safety, while 35 (6.5%) exhibited good attitudes towards them. Both knowledge of and attitude towards traffic codes and safety improved with increase in educational level (p<0.005 and p<0.001, respectively). The younger motorcyclists also possessed statistically significantly better knowledge of traffic codes than their older counterparts (p<0.025). Attitude to traffic codes and safety had no association with the age of the motorcyclists (p>0.25). The study has provided useful information on the knowledge of and attitude towards road traffic and safety codes among commercial motorcyclists in Nigeria. Pursuit of knowledge through formal and informal education should run pari passu with efforts to improve the Nigerian economy in order to ensure a sustainable positive attitudinal change towards road traffic codes and safety among commercial motorcyclists.
Design and optimization of a portable LQCD Monte Carlo code using OpenACC
NASA Astrophysics Data System (ADS)
Bonati, Claudio; Coscetti, Simone; D'Elia, Massimo; Mesiti, Michele; Negro, Francesco; Calore, Enrico; Schifano, Sebastiano Fabio; Silvi, Giorgio; Tripiccione, Raffaele
The present panorama of HPC architectures is extremely heterogeneous, ranging from traditional multi-core CPU processors, supporting a wide class of applications but delivering moderate computing performance, to many-core Graphics Processing Units (GPUs), exploiting aggressive data-parallelism and delivering higher performance for streaming computing applications. In this scenario, code portability (and performance portability) becomes necessary for easy maintainability of applications; this is very relevant in scientific computing, where code changes are very frequent, making it tedious and error-prone to keep different code versions aligned. In this work, we present the design and optimization of a state-of-the-art production-level LQCD Monte Carlo application, using the directive-based OpenACC programming model. OpenACC abstracts parallel programming to a descriptive level, relieving programmers from specifying how codes should be mapped onto the target architecture. We describe the implementation of a code fully written in OpenACC, and show that we are able to target several different architectures, including state-of-the-art traditional CPUs and GPUs, with the same code. We also measure performance, evaluating the computing efficiency of our OpenACC code on several architectures, comparing with GPU-specific implementations and showing that a good level of performance portability can be reached.
Han, Yaoqiang; Dang, Anhong; Ren, Yongxiong; Tang, Junxiong; Guo, Hong
2010-12-20
In free space optical communication (FSOC) systems, channel fading caused by atmospheric turbulence seriously degrades system performance. However, channel coding combined with diversity techniques can be exploited to mitigate channel fading. In this paper, based on an experimental study of channel fading effects, we propose to use turbo product code (TPC) as the channel coding scheme, which features good resistance to burst errors and no error floor. Since channel coding alone cannot cope with the burst errors caused by channel fading, interleaving is also used. We investigate the efficiency of interleaving for different interleaving depths, and determine the optimum interleaving depth for TPC. Finally, an experimental study of TPC with interleaving is demonstrated, and we show that TPC with interleaving can significantly mitigate channel fading in FSOC systems.
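A minimal sketch of the interleaving idea, assuming a simple row-column block interleaver (the paper's exact interleaver and depths are not given in the abstract): writing by rows and reading by columns disperses a fading-induced burst of channel errors across many codewords.

```python
# Row-column block interleaving: a contiguous burst on the channel lands
# on widely separated positions after deinterleaving.
import numpy as np

def interleave(bits, depth):
    return bits.reshape(depth, -1).T.reshape(-1)

def deinterleave(bits, depth):
    return bits.reshape(-1, depth).T.reshape(-1)

rng = np.random.default_rng(3)
data = rng.integers(0, 2, 64)
tx = interleave(data, depth=8)
tx[20:28] ^= 1                          # an 8-bit burst error on the channel
rx = deinterleave(tx, depth=8)
errs = np.flatnonzero(rx != data)
print(errs)                             # burst dispersed ~8 positions apart
```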
Metasurfaced Reverberation Chamber.
Sun, Hengyi; Li, Zhuo; Gu, Changqing; Xu, Qian; Chen, Xinlei; Sun, Yunhe; Lu, Shengchen; Martin, Ferran
2018-01-25
The concept of metasurfaced reverberation chamber (RC) is introduced in this paper. It is shown that by coating the chamber wall with a rotating 1-bit random coding metasurface, it is possible to enlarge the test zone of the RC while maintaining the field uniformity as good as that in a traditional RC with mechanical stirrers. A 1-bit random coding diffusion metasurface is designed to obtain all-direction backscattering under normal incidence. Three specific cases are studied for comparisons, including a (traditional) mechanical stirrer RC, a mechanical stirrer RC with a fixed diffusion metasurface, and a RC with a rotating diffusion metasurface. Simulation results show that the compact rotating diffusion metasurface can act as a stirrer with good stirring efficiency. By using such rotating diffusion metasurface, the test region of the RC can be greatly extended.
Determination of performance of non-ideal aluminized explosives.
Keshavarz, Mohammad Hossein; Mofrad, Reza Teimuri; Poor, Karim Esmail; Shokrollahi, Arash; Zali, Abbas; Yousefi, Mohammad Hassan
2006-09-01
Non-ideal explosives can have Chapman-Jouguet (C-J) detonation pressures significantly different from those expected from existing thermodynamic computer codes, which usually allow finding the parameters of ideal detonation of individual high explosives with good accuracy. A simple method is introduced by which the detonation pressure of non-ideal aluminized explosives with general formula C(a)H(b)N(c)O(d)Al(e) can be predicted from a, b, c, d and e alone, at any loading density, without using any assumed detonation products or experimental data. Calculated detonation pressures show good agreement with experimental values, compared with the computed results obtained from a complicated computer code. It is shown here how loading density and atomic composition can be integrated into an empirical formula for predicting the detonation pressure of the proposed aluminized explosives.
NASA Astrophysics Data System (ADS)
Yan, Hui; Wang, K. G.; Jones, Jim E.
2016-06-01
A parallel algorithm for large-scale three-dimensional phase-field simulations of phase coarsening is developed and implemented on high-performance architectures. From the large-scale simulations, a new coarsening kinetics is found in the region of ultrahigh volume fraction. The parallel implementation is capable of harnessing the greater computing power available from high-performance architectures. The parallelized code enables an increase in three-dimensional simulation system size up to a 512^3 grid cube. Through the parallelized code, practical runtimes can be achieved for three-dimensional large-scale simulations, and the statistical significance of the results from these high-resolution parallel simulations is greatly improved over that obtainable from serial simulations. A detailed performance analysis of speed-up and scalability is presented, showing good scalability that improves with increasing problem size. In addition, a model for prediction of runtime is developed, which shows good agreement with actual run times from numerical tests.
Laminar Heating Validation of the OVERFLOW Code
NASA Technical Reports Server (NTRS)
Lillard, Randolph P.; Dries, Kevin M.
2005-01-01
OVERFLOW, a structured finite difference code, was applied to the solution of hypersonic laminar flow over several configurations assuming perfect gas chemistry. By testing OVERFLOW's capabilities over several configurations encompassing a variety of flow physics, a validated laminar heating capability was produced. The configurations tested were a flat plate at 0 degrees incidence, a sphere, a compression ramp, and the X-38 re-entry vehicle. This variety of test cases shows the ability of the code to predict boundary layer flow, stagnation heating, laminar separation with re-attachment heating, and complex flow over a three-dimensional body. In addition, grid resolution studies were done to give recommendations for the correct number of off-body points to be applied to generic problems and for wall-spacing values to capture heat transfer and skin friction. Numerical results show good comparison with the test data for all configurations.
7 CFR 28.525 - Symbols and code numbers.
Code of Federal Regulations, 2013 CFR
2013-01-01
... designations in inches. (a) Symbols and Code numbers used for Color Grades of American Upland Cotton. Color ... 41; Low Middling, LM, 51; Strict Good Ordinary, SGO, 61; Good Ordinary, GO, 71; Good Middling Light Spotted, GM Lt Sp, 12; Strict Middling Light Spotted, SM Lt Sp, 22; Middling Light Spotted, Mid Lt Sp, 32; Strict Low ...
7 CFR 28.525 - Symbols and code numbers.
Code of Federal Regulations, 2010 CFR
2010-01-01
... designations in inches. (a) Symbols and Code numbers used for Color Grades of American Upland Cotton. Color ... 41; Low Middling, LM, 51; Strict Good Ordinary, SGO, 61; Good Ordinary, GO, 71; Good Middling Light Spotted, GM Lt Sp, 12; Strict Middling Light Spotted, SM Lt Sp, 22; Middling Light Spotted, Mid Lt Sp, 32; Strict Low ...
7 CFR 28.525 - Symbols and code numbers.
Code of Federal Regulations, 2014 CFR
2014-01-01
... designations in inches. (a) Symbols and Code numbers used for Color Grades of American Upland Cotton. Color ... 41; Low Middling, LM, 51; Strict Good Ordinary, SGO, 61; Good Ordinary, GO, 71; Good Middling Light Spotted, GM Lt Sp, 12; Strict Middling Light Spotted, SM Lt Sp, 22; Middling Light Spotted, Mid Lt Sp, 32; Strict Low ...
7 CFR 28.525 - Symbols and code numbers.
Code of Federal Regulations, 2011 CFR
2011-01-01
... designations in inches. (a) Symbols and Code numbers used for Color Grades of American Upland Cotton. Color ... 41; Low Middling, LM, 51; Strict Good Ordinary, SGO, 61; Good Ordinary, GO, 71; Good Middling Light Spotted, GM Lt Sp, 12; Strict Middling Light Spotted, SM Lt Sp, 22; Middling Light Spotted, Mid Lt Sp, 32; Strict Low ...
7 CFR 28.525 - Symbols and code numbers.
Code of Federal Regulations, 2012 CFR
2012-01-01
... designations in inches. (a) Symbols and Code numbers used for Color Grades of American Upland Cotton. Color ... 41; Low Middling, LM, 51; Strict Good Ordinary, SGO, 61; Good Ordinary, GO, 71; Good Middling Light Spotted, GM Lt Sp, 12; Strict Middling Light Spotted, SM Lt Sp, 22; Middling Light Spotted, Mid Lt Sp, 32; Strict Low ...
Coded Excitation Plane Wave Imaging for Shear Wave Motion Detection
Song, Pengfei; Urban, Matthew W.; Manduca, Armando; Greenleaf, James F.; Chen, Shigao
2015-01-01
Plane wave imaging has greatly advanced the field of shear wave elastography thanks to its ultrafast imaging frame rate and the large field-of-view (FOV). However, plane wave imaging also has decreased penetration due to lack of transmit focusing, which makes it challenging to use plane waves for shear wave detection in deep tissues and in obese patients. This study investigated the feasibility of implementing coded excitation in plane wave imaging for shear wave detection, with the hypothesis that coded ultrasound signals can provide superior detection penetration and shear wave signal-to-noise-ratio (SNR) compared to conventional ultrasound signals. Both phase encoding (Barker code) and frequency encoding (chirp code) methods were studied. A first phantom experiment showed an approximate penetration gain of 2-4 cm for the coded pulses. Two subsequent phantom studies showed that all coded pulses outperformed the conventional short imaging pulse by providing superior sensitivity to small motion and robustness to weak ultrasound signals. Finally, an in vivo liver case study on an obese subject (Body Mass Index = 40) demonstrated the feasibility of using the proposed method for in vivo applications, and showed that all coded pulses could provide higher SNR shear wave signals than the conventional short pulse. These findings indicate that by using coded excitation shear wave detection, one can benefit from the ultrafast imaging frame rate and large FOV provided by plane wave imaging while preserving good penetration and shear wave signal quality, which is essential for obtaining robust shear elasticity measurements of tissue. PMID:26168181
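A rough sketch of phase-encoded coded excitation, with invented carrier and sampling parameters: a Barker-13 binary phase code modulates a short carrier burst, and matched filtering (pulse compression) at the receiver concentrates the coded pulse's energy back into a sharp peak with roughly 13 times the energy of a single-chip pulse.

```python
# Barker-13 coded excitation and matched-filter pulse compression.
import numpy as np

barker13 = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1])
fs, fc, cycles_per_chip = 100e6, 5e6, 2          # hypothetical imaging pulse
t_chip = np.arange(int(fs * cycles_per_chip / fc)) / fs
chip = np.sin(2 * np.pi * fc * t_chip)

tx = np.concatenate([b * chip for b in barker13])       # coded excitation
rx = np.concatenate([np.zeros(300), tx, np.zeros(300)]) # ideal echo, no noise
compressed = np.correlate(rx, tx, mode="same")          # matched filter

print(len(tx), compressed.max() / (chip @ chip))  # peak ~13 one-chip energies
```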
Business Ethics and Your Organisation.
ERIC Educational Resources Information Center
Drummond, John
1990-01-01
Good ethics are good business. Top management should be committed to a code of ethics based on a true participative process. The organization should be willing to commit resources for training to ensure proper implementation of the code. (SK)
Unsteady Cascade Aerodynamic Response Using a Multiphysics Simulation Code
NASA Technical Reports Server (NTRS)
Lawrence, C.; Reddy, T. S. R.; Spyropoulos, E.
2000-01-01
The multiphysics code Spectrum(TM) is applied to calculate the unsteady aerodynamic pressures of an oscillating cascade of airfoils representing a blade row of a turbomachinery component. Multiphysics simulation is based on a single computational framework for the modeling of multiple interacting physical phenomena, in the present case fluids and structures. Interaction constraints are enforced in a fully coupled manner using the augmented-Lagrangian method. The arbitrary Lagrangian-Eulerian method is utilized to account for deformable fluid domains resulting from blade motions. Unsteady pressures are calculated for a cascade designated as the tenth standard configuration, undergoing plunging and pitching oscillations. The predicted unsteady pressures are compared with those obtained from an unsteady Euler code referred to in the literature. The Spectrum(TM) code predictions showed good correlation for the cases considered.
Recent advances in PDF modeling of turbulent reacting flows
NASA Technical Reports Server (NTRS)
Leonard, Andrew D.; Dai, F.
1995-01-01
This viewgraph presentation concludes that a Monte Carlo probability density function (PDF) solution successfully couples with an existing finite-volume code; that the PDF solution method applied to turbulent reacting flows shows good agreement with data; and that PDF methods must be run on parallel machines for practical use.
Channel coding for underwater acoustic single-carrier CDMA communication system
NASA Astrophysics Data System (ADS)
Liu, Lanjun; Zhang, Yonglei; Zhang, Pengcheng; Zhou, Lin; Niu, Jiong
2017-01-01
CDMA is an effective multiple access protocol for underwater acoustic networks, and channel coding can effectively reduce the bit error rate (BER) of an underwater acoustic communication system. To meet the requirements of underwater acoustic mobile networks based on CDMA, an underwater acoustic single-carrier CDMA communication system (UWA/SCCDMA) based on direct-sequence spread spectrum is proposed, and its channel coding scheme is studied based on convolutional, RA, Turbo and LDPC coding, respectively. The implementation steps of the Viterbi algorithm for convolutional coding, the BP and minimum-sum algorithms for RA coding, the Log-MAP and SOVA algorithms for Turbo coding, and the sum-product algorithm for LDPC coding are given. A UWA/SCCDMA simulation system based on Matlab is designed. Simulation results show that the UWA/SCCDMA systems based on RA, Turbo and LDPC coding all have good performance, with a communication BER below 10^-6 in an underwater acoustic channel at low signal-to-noise ratios (SNR) from -12 dB to -10 dB, which is about 2 orders of magnitude lower than that of convolutional coding.
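As an illustration of the convolutional option, the sketch below implements hard-decision Viterbi decoding for the standard constraint-length-3, rate-1/2 code with generators (7,5) octal; the paper's actual code parameters and soft-decision details are not given in the abstract, so this is a generic stand-in.

```python
# Convolutional encoding with zero-tail termination, plus hard-decision Viterbi.
K, GENS = 3, (0b111, 0b101)
NSTATES = 1 << (K - 1)

def branch(state, bit):
    reg = (state << 1) | bit
    out = [bin(reg & g).count("1") % 2 for g in GENS]
    return reg & (NSTATES - 1), out

def encode(bits):
    out, s = [], 0
    for b in bits + [0] * (K - 1):               # zero-tail termination
        s, o = branch(s, b)
        out += o
    return out

def viterbi(rx):
    INF = float("inf")
    metric = [0] + [INF] * (NSTATES - 1)
    paths = [[] for _ in range(NSTATES)]
    for i in range(0, len(rx), 2):
        new_m, new_p = [INF] * NSTATES, [None] * NSTATES
        for s in range(NSTATES):
            if metric[s] == INF:
                continue
            for bit in (0, 1):
                t, o = branch(s, bit)
                d = metric[s] + sum(a != b for a, b in zip(o, rx[i:i+2]))
                if d < new_m[t]:
                    new_m[t], new_p[t] = d, paths[s] + [bit]
        metric, paths = new_m, new_p
    return paths[0][:-(K - 1)]                   # drop the tail bits

msg = [1, 0, 1, 1, 0, 0, 1, 0]
cw = encode(msg)
cw[3] ^= 1; cw[10] ^= 1                          # two channel bit errors
print(viterbi(cw) == msg)                        # True: both errors corrected
```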
NASA Astrophysics Data System (ADS)
Cunha, Diego M.; Tomal, Alessandra; Poletti, Martin E.
2013-04-01
In this work, the Monte Carlo (MC) code PENELOPE was employed for simulation of x-ray spectra in mammography and contrast-enhanced digital mammography (CEDM). Spectra for Mo, Rh and W anodes were obtained for tube potentials between 24 and 36 kV for mammography, and between 45 and 49 kV for CEDM. The spectra obtained from the simulations were analytically filtered to correspond to the anode/filter combinations usually employed in each technique (Mo/Mo, Rh/Rh and W/Rh for mammography and Mo/Cu, Rh/Cu and W/Cu for CEDM). For the Mo/Mo combination, the simulated spectra were compared with those obtained experimentally, and the spectra for the W anode were compared with experimental data from the literature, through comparison of distribution shape, average energies, half-value layers (HVL) and transmission curves. For all combinations evaluated, the simulated spectra were also compared with those provided by different models from the literature. Results showed that the code PENELOPE provides mammographic x-ray spectra in good agreement with those experimentally measured and those from the literature. The differences in the values of HVL ranged between 2% and 7% for the anode/filter combinations and tube potentials employed in mammography, and were less than 5% for those employed in CEDM. The transmission curves for the spectra obtained also showed good agreement with those computed from reference spectra, with average relative differences of less than 12% for mammography and CEDM. These results show that the code PENELOPE can be a useful tool to generate x-ray spectra for studies in mammography and CEDM, and also for evaluation of new x-ray tube designs and new anode materials.
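A minimal sketch of how a half-value layer can be extracted from a simulated spectrum by bisection on the Beer-Lambert transmission; the spectrum and aluminium attenuation values below are placeholders, not PENELOPE output.

```python
import numpy as np

# Toy Mo-anode-like spectrum: energies (keV) and relative fluences (placeholders)
E = np.array([12.0, 15.0, 17.5, 19.6, 25.0])
w = np.array([0.10, 0.25, 0.35, 0.20, 0.10])

# Linear attenuation of aluminium at those energies (1/mm); placeholder values
mu_al = np.array([2.50, 1.35, 0.90, 0.68, 0.40])

def transmission(t_mm):
    # kerma-weighted transmission through t_mm of Al; the kerma weight is
    # approximated here by photon energy for the sake of illustration
    k0 = np.sum(w * E)
    kt = np.sum(w * E * np.exp(-mu_al * t_mm))
    return kt / k0

# Bisection for the half-value layer: transmission(HVL) = 0.5
lo, hi = 0.0, 10.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if transmission(mid) > 0.5:
        lo = mid        # still more than half transmitted: need thicker Al
    else:
        hi = mid
print(f"HVL ≈ {0.5 * (lo + hi):.3f} mm Al")
```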
Bandwidth efficient coding for satellite communications
NASA Technical Reports Server (NTRS)
Lin, Shu; Costello, Daniel J., Jr.; Miller, Warner H.; Morakis, James C.; Poland, William B., Jr.
1992-01-01
An error control coding scheme was devised to achieve large coding gain and high reliability by using coded modulation with reduced decoding complexity. To achieve a 3 to 5 dB coding gain and moderate reliability, the decoding complexity is quite modest. In fact, to achieve a 3 dB coding gain, the decoding complexity is quite simple, no matter whether trellis coded modulation or block coded modulation is used. However, to achieve coding gains exceeding 5 dB, the decoding complexity increases drastically, and the implementation of the decoder becomes very expensive and impractical. The use of coded modulation in conjunction with concatenated (or cascaded) coding is proposed. A good short bandwidth-efficient modulation code is used as the inner code and a relatively powerful Reed-Solomon code is used as the outer code. With properly chosen inner and outer codes, a concatenated coded modulation scheme not only can achieve large coding gains and high reliability with good bandwidth efficiency but also can be practically implemented. This combination of coded modulation and concatenated coding offers a way of achieving the best of three worlds: reliability and coding gain, bandwidth efficiency, and decoding complexity.
A Comparison of LBG and ADPCM Speech Compression Techniques
NASA Astrophysics Data System (ADS)
Bachu, Rajesh G.; Patel, Jignasa; Barkana, Buket D.
Speech compression is the technology of converting human speech into an efficiently encoded representation that can later be decoded to produce a close approximation of the original signal. All speech has a degree of predictability, and speech coding techniques exploit this to reduce bit rates while still maintaining a suitable level of quality. This paper is a study and implementation of the Linde-Buzo-Gray (LBG) and Adaptive Differential Pulse Code Modulation (ADPCM) algorithms to compress speech signals. We implemented both methods in MATLAB 7.0. Both methods gave good results and performance in compressing the speech, and listening tests showed that efficient and high-quality coding is achieved.
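A minimal sketch of the LBG codebook training loop (centroid splitting followed by Lloyd refinement); the paper's implementation is in MATLAB 7.0, and this Python version with random toy frames is only illustrative.

```python
import numpy as np

def lbg(vectors, codebook_size, eps=0.01, n_iter=20):
    # start from the global centroid, then repeatedly split and refine
    codebook = vectors.mean(axis=0, keepdims=True)
    while codebook.shape[0] < codebook_size:
        # split every codeword into a slightly perturbed pair
        codebook = np.vstack([codebook * (1 + eps), codebook * (1 - eps)])
        for _ in range(n_iter):  # k-means-style (Lloyd) refinement
            d = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
            nearest = d.argmin(axis=1)
            for j in range(codebook.shape[0]):
                members = vectors[nearest == j]
                if members.size:
                    codebook[j] = members.mean(axis=0)
    return codebook

# Toy usage: frames of 4 consecutive speech samples quantized to 8 codewords
rng = np.random.default_rng(0)
frames = rng.normal(size=(1000, 4))
cb = lbg(frames, 8)
print(cb.shape)  # (8, 4)
```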
Coded Cooperation for Multiway Relaying in Wireless Sensor Networks
Si, Zhongwei; Ma, Junyang; Thobaben, Ragnar
2015-01-01
Wireless sensor networks have been considered as an enabling technology for constructing smart cities. One important feature of wireless sensor networks is that the sensor nodes collaborate in some manner for communications. In this manuscript, we focus on the model of multiway relaying with full data exchange where each user wants to transmit and receive data to and from all other users in the network. We derive the capacity region for this specific model and propose a coding strategy through coset encoding. To obtain good performance with practical codes, we choose spatially-coupled LDPC (SC-LDPC) codes for the coded cooperation. In particular, for the message broadcasting from the relay, we construct multi-edge-type (MET) SC-LDPC codes by repeatedly applying coset encoding. Due to the capacity-achieving property of the SC-LDPC codes, we prove that the capacity region can theoretically be achieved by the proposed MET SC-LDPC codes. Numerical results with finite node degrees are provided, which show that the achievable rates approach the boundary of the capacity region in both binary erasure channels and additive white Gaussian channels. PMID:26131675
Error-correcting codes on scale-free networks
NASA Astrophysics Data System (ADS)
Kim, Jung-Hoon; Ko, Young-Jo
2004-06-01
We investigate the potential of scale-free networks as error-correcting codes. We find that irregular low-density parity-check codes with the highest performance known to date have degree distributions well fitted by a power-law function p(k) ∼ k^(-γ) with γ close to 2, which suggests that codes built on scale-free networks with appropriate power exponents can be good error-correcting codes, with a performance possibly approaching the Shannon limit. We demonstrate for an erasure channel that codes with a power-law degree distribution of the form p(k) = C(k+α)^(-γ), with k ≥ 2 and suitable selection of the parameters α and γ, indeed have very good error-correction capabilities.
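For illustration, a variable-node degree sequence can be drawn from the truncated power law above; the values of γ, α and the degree cutoff below are assumed for the example, not those of the paper.

```python
import numpy as np

def degree_sequence(n, gamma=2.1, alpha=0.5, kmax=100, seed=0):
    # p(k) = C (k + alpha)^(-gamma) for k = 2..kmax, with C fixed by normalization
    k = np.arange(2, kmax + 1)
    p = (k + alpha) ** (-gamma)
    p /= p.sum()
    rng = np.random.default_rng(seed)
    return rng.choice(k, size=n, p=p)

degs = degree_sequence(10000)
print(degs.mean(), degs.max())  # heavy-tailed: a few very high-degree nodes
```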
The association between patient-therapist MATRIX congruence and treatment outcome.
Mendlovic, Shlomo; Saad, Amit; Roll, Uri; Ben Yehuda, Ariel; Tuval-Mashiah, Rivka; Atzil-Slonim, Dana
2018-03-14
The present study aimed to examine the association between the patient-therapist micro-level congruence/incongruence ratio and psychotherapeutic outcome. Nine good-outcome and nine poor-outcome psychodynamic treatments (segregated by comparing pre- and post-treatment BDI-II scores) were analyzed (N = 18) moment by moment using the MATRIX (total number of MATRIX codes analyzed = 11,125). MATRIX congruence was defined as similar adjacent MATRIX codes. The congruence/incongruence ratio tended to increase as the treatment progressed only in good-outcome treatments. Progression of the MATRIX codes' congruence/incongruence ratio is thus associated with good psychotherapy outcome.
A simplified model for tritium permeation transient predictions when trapping is active
NASA Astrophysics Data System (ADS)
Longhurst, G. R.
1994-09-01
This report describes a simplified one-dimensional tritium permeation and retention model. The model makes use of the same physical mechanisms as more sophisticated time-transient codes: implantation, recombination, diffusion, trapping and thermal gradient effects. It takes advantage of a number of simplifications and approximations to solve the steady-state problem and then provides interpolating functions to make estimates of intermediate states based on the steady-state solution. Comparison calculations with the verified and validated TMAP4 transient code show good agreement.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Patra, Anirban; Tome, Carlos
This Milestone report shows good progress in interfacing VPSC with the FE codes ABAQUS and MOOSE, to perform component-level simulations of irradiation-induced deformation in Zirconium alloys. In this preliminary application, we have performed an irradiation growth simulation in the quarter geometry of a cladding tube. We have benchmarked VPSC-ABAQUS and VPSC-MOOSE predictions against VPSC-SA predictions to verify the accuracy of the VPSC-FE interface. Predictions from the FE simulations are in general agreement with VPSC-SA simulations and also with experimental trends.
O'keefe, Matthew; Parr, Terence; Edgar, B. Kevin; ...
1995-01-01
Massively parallel processors (MPPs) hold the promise of extremely high performance that, if realized, could be used to study problems of unprecedented size and complexity. One of the primary stumbling blocks to this promise has been the lack of tools to translate application codes to MPP form. In this article we show how applications codes written in a subset of Fortran 77, called Fortran-P, can be translated to achieve good performance on several massively parallel machines. This subset can express codes that are self-similar, where the algorithm applied to the global data domain is also applied to each subdomain. We have found many codes that match the Fortran-P programming style and have converted them using our tools. We believe a self-similar coding style will accomplish what a vectorizable style has accomplished for vector machines by allowing the construction of robust, user-friendly, automatic translation systems that increase programmer productivity and generate fast, efficient code for MPPs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dearing, J F; Nelson, W R; Rose, S D
Computational thermal-hydraulic models of a 19-pin, electrically heated, wire-wrap liquid-metal fast breeder reactor test bundle were developed using two well-known subchannel analysis codes, COBRA III-C and SABRE-1 (wire-wrap version). These two codes use similar subchannel control volumes for the finite difference conservation equations but vary markedly in solution strategy and modeling capability. In particular, the empirical wire-wrap-forced diversion crossflow models are different. Surprisingly, however, crossflow velocity predictions of the two codes are very similar. Both codes show generally good agreement with experimental temperature data from a test in which a large radial temperature gradient was imposed. Differences between data and code results are probably caused by experimental pin bowing, which is presently the limiting factor in validating coded empirical models.
NASA Astrophysics Data System (ADS)
Lu, Li; Sheng, Wen; Liu, Shihua; Zhang, Xianzhi
2014-10-01
The ballistic missile hyperspectral data of an imaging spectrometer on a near-space platform are generated by a numerical method. The characteristics of the ballistic missile hyperspectral data are extracted and matched using two different algorithms, called transverse counting and quantization coding, respectively. The simulation results show that both algorithms extract the characteristics of the ballistic missile adequately and accurately. The algorithm based on transverse counting has low complexity and can be implemented more easily than the algorithm based on quantization coding. The transverse counting algorithm also shows good immunity to disturbance signals and speeds up the matching and recognition of subsequent targets.
SOLAR OPACITY CALCULATIONS USING THE SUPER-TRANSITION-ARRAY METHOD
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krief, M.; Feigel, A.; Gazit, D., E-mail: menahem.krief@mail.huji.ac.il
A new opacity model has been developed based on the Super-Transition-Array (STA) method for the calculation of monochromatic opacities of plasmas in local thermodynamic equilibrium. The atomic code, named STAR (STA-Revised), is described and used to calculate spectral opacities for a solar model implementing the recent AGSS09 composition. Calculations are carried out throughout the solar radiative zone. The relative contributions of different chemical elements and atomic processes to the total Rosseland mean opacity are analyzed in detail. Monochromatic opacities and charge-state distributions are compared with the widely used Opacity Project (OP) code, for several elements near the radiation-convection interface. STAR Rosseland opacities for the solar mixture show very good agreement with OP and the OPAL opacity code throughout the radiation zone. Finally, an explicit STA calculation of the full AGSS09 photospheric mixture, including all heavy metals, was performed. It was shown that, due to their extremely low abundance, and despite being very good photon absorbers, the heavy elements do not affect the Rosseland opacity.
Yang, Jia Ji; Cheng, Yong Zhi; Ge, Chen Chen; Gong, Rong Zhou
2018-04-19
A class of linear polarization conversion coding metasurfaces (MSs) based on a metal cut-wire structure is proposed, which can be applied to the reduction of radar cross section (RCS). We first present a hypothesis based on the principle of planar array theory, and then verify the RCS reduction characteristics of linear polarization conversion coding MSs by simulations and experiments. The simulated results show that in the frequency range of 6-14 GHz, the linear polarization conversion ratio reaches a maximum value of 90%, which is in good agreement with the theoretical predictions. For normally incident x- and y-polarized waves, the RCS reduction of the designed coding MSs 01/01 and 01/10 is essentially more than 10 dB in the above-mentioned frequency range. We prepared and measured the 01/10 coding MS sample, and found that the experimental results in terms of reflectance and RCS reduction are in good agreement with the simulated ones under normal incidence. In addition, under oblique incidence, the RCS reduction is suppressed as the angle of incidence increases, but the MS still exhibits RCS reduction effects in a certain frequency range. The designed MS is expected to have valuable potential in applications for stealth technology.
Startsev, N; Dimov, P; Grosche, B; Tretyakov, F; Schüz, J; Akleyev, A
2015-01-01
To follow up populations exposed to several radiation accidents in the Southern Urals, a cause-of-death registry was established at the Urals Center, capturing deaths in the Chelyabinsk, Kurgan and Sverdlovsk regions since 1950. When registering deaths over such a long time period, quality assurance measures need to be in place to maintain consistency and reduce the impact of individual coders as well as of quality changes in death certificates. To ensure the uniformity of coding, a method for semi-automatic coding was developed, which is described here. Briefly, the method is based on a dynamic thesaurus, database-supported coding and parallel coding by two different individuals. A comparison of the proposed method for organizing the coding process with the common coding procedure showed good agreement, with 70-90% agreement for the three-digit ICD-9 rubrics at the end of the coding process. The semi-automatic method ensures a sufficiently high quality of coding while at the same time providing an opportunity to reduce the labor intensity inherent in the creation of large-volume cause-of-death registries.
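A minimal sketch of the agreement measure implied by parallel coding: compare the three-digit rubrics assigned by two independent coders; the codes below are invented for the example, not taken from the registry.

```python
# Percent agreement between two independent coders on three-digit ICD-9 rubrics
# (illustrative codes, not registry data)
coder_a = ["410", "162", "431", "185", "410", "486"]
coder_b = ["410", "162", "433", "185", "410", "485"]

matches = sum(a == b for a, b in zip(coder_a, coder_b))
print(f"agreement: {100 * matches / len(coder_a):.0f}%")  # 67%
```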
Fuel burnup analysis for IRIS reactor using MCNPX and WIMS-D5 codes
NASA Astrophysics Data System (ADS)
Amin, E. A.; Bashter, I. I.; Hassan, Nabil M.; Mustafa, S. S.
2017-02-01
The International Reactor Innovative and Secure (IRIS) reactor is a compact power reactor designed with special features. It contains Integral Fuel Burnable Absorber (IFBA). The core is heterogeneous both axially and radially. This work provides a full-core burnup analysis for the IRIS reactor using the MCNPX and WIMS-D5 codes. Criticality calculations, radial and axial power distributions and the nuclear peaking factor at different stages of burnup were studied. Effective multiplication factor values for the core were estimated by coupling the MCNPX code with the WIMS-D5 code and compared with SAS2H/KENO-V code values at different stages of burnup. The two calculation codes show good agreement and correlation. The values of radial and axial powers for the full core were also compared with published results given by the SAS2H/KENO-V code (at the beginning and end of reactor operation). The behavior of both radial and axial power distributions is quite similar to the data published for the SAS2H/KENO-V code. The peaking factor values estimated in the present work are close to the values calculated by the SAS2H/KENO-V code.
Bar code, good for industry and trade--how does it benefit the dentist?
Oehlmann, H
2001-10-01
Every dentist who attentively follows the change in product labelling can easily see that the HIBC bar code is on the increase. In fact, according to information from FIDE/VDDI and ADE/BVD, the dental industry and trade are firmly resolved to apply the HIBC bar code to all products used internationally in dental practices. Why? Indeed, at first it looks like extra expense to additionally print a bar code on the packages. Good reasons can only lie in the advantages which manufacturers and the trade expect from the HIBC bar code. Indications in dental technician circles are that the HIBC bar code is coming. If there are advantages, what are they, and can the dentist also profit from them? What does HIBC bar code mean and what items of interest does it include? What does bar code cost and does only one code exist? This is explained briefly, concentrating on the benefits bar code can bring for different users.
Validation of OpenFoam for heavy gas dispersion applications.
Mack, A; Spruijt, M P N
2013-11-15
In the present paper heavy gas dispersion calculations were performed with OpenFoam. For a wind tunnel test case, numerical data were validated against experiments. For a full-scale numerical experiment, a code-to-code comparison was performed with numerical results obtained from Fluent. The validation was performed in a gravity-driven environment (slope), where the heavy gas induced the turbulence. For the code-to-code comparison, a hypothetical heavy gas release into a strongly turbulent atmospheric boundary layer including terrain effects was selected. The investigations were performed for SF6 and CO2 as heavy gases applying the standard k-ɛ turbulence model. A strong interaction of the heavy gas with the turbulence is present, which results in strong damping of the turbulence and therefore reduced heavy gas mixing. Especially this interaction, based on the buoyancy effects, was studied in order to ensure that the turbulence-buoyancy coupling is the main driver for the reduced mixing and not the global behaviour of the turbulence modelling. For both test cases, comparisons were performed between OpenFoam and Fluent solutions, which were mainly in good agreement with each other. Besides steady-state solutions, the time accuracy was investigated. In the low-turbulence environment (wind tunnel test), the laminar solutions of both codes were in good agreement with each other and with the experimental data, and the turbulent solutions of OpenFoam were in much better agreement with the experimental results than the Fluent solutions. Within the strong-turbulence environment, both codes showed excellent comparability. Copyright © 2013 Elsevier B.V. All rights reserved.
Shielding evaluation for solar particle events using MCNPX, PHITS and OLTARIS codes
NASA Astrophysics Data System (ADS)
Aghara, S. K.; Sriprisan, S. I.; Singleterry, R. C.; Sato, T.
2015-01-01
Detailed analyses of Solar Particle Events (SPE) were performed to calculate primary and secondary particle spectra behind aluminum, at various thicknesses in water. The simulations were based on the Monte Carlo (MC) radiation transport codes MCNPX 2.7.0 and PHITS 2.64, and the space radiation analysis website called OLTARIS (On-Line Tool for the Assessment of Radiation in Space) version 3.4 (which uses the deterministic code HZETRN for transport). The study investigates the impact of SPE spectra transporting through a 10 or 20 g/cm2 Al shield followed by a 30 g/cm2 water slab. Four historical SPE events were selected and used as input source spectra; particle differential spectra for protons, neutrons, and photons are presented. The total particle fluence as a function of depth is presented. In addition to particle flux, the dose and dose equivalent values are calculated and compared between the codes and with other published results. Overall, the particle fluence spectra from all three codes show good agreement, with the MC codes showing closer agreement to each other than to the OLTARIS results. The neutron particle fluence from OLTARIS is lower than the results from the MC codes at lower energies (E < 100 MeV). Based on mean square difference analysis, the results from MCNPX and PHITS agree better for fluence, dose and dose equivalent when compared to OLTARIS results.
Modeling of boron species in the Falcon 17 and ISP-34 integral tests
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lazaridis, M.; Capitao, J.A.; Drossinos, Y.
1996-09-01
The RAFT computer code for aerosol formation and transport was modified to include boron species in its chemical database. The modification was necessary to calculate fission product transport and deposition in the FAL-17 and ISP-34 Falcon tests, where boric acid was injected. The experimental results suggest that the transport of cesium is modified in the presence of boron. The results obtained with the modified RAFT code are presented; they show good agreement with experimental results for cesium and partial agreement for boron deposition in the Falcon silica tube. The new version of the RAFT code predicts the same behavior for iodine deposition as the previous version, where boron species were not included.
Thanh, Tran Thien; Vuong, Le Quang; Ho, Phan Long; Chuong, Huynh Dinh; Nguyen, Vo Hoang; Tao, Chau Van
2018-04-01
In this work, an advanced analytical procedure was applied to calculate radioactivity in spiked water samples in a close-geometry gamma spectroscopy setup. It included the MCNP-CP code in order to calculate the coincidence summing correction factor (CSF). The CSF results were validated by a deterministic method using the ETNA code for both p-type HPGe detectors, and showed good agreement between the two codes. Finally, the validity of the developed procedure was confirmed by a proficiency test to calculate the activities of various radionuclides. The results of the radioactivity measurement with both detectors using the advanced analytical procedure received 'Accepted' status in the proficiency test. Copyright © 2018 Elsevier Ltd. All rights reserved.
Design of orbital debris shields for oblique hypervelocity impact
NASA Technical Reports Server (NTRS)
Fahrenthold, Eric P.
1994-01-01
A new impact debris propagation code was written to link CTH simulations of space debris shield perforation to the Lagrangian finite element code DYNA3D, for space structure wall impact simulations. This software (DC3D) simulates debris cloud evolution using a nonlinear elastic-plastic deformable particle dynamics model, and renders computationally tractable the supercomputer simulation of oblique impacts on Whipple shield protected structures. Comparison of three dimensional, oblique impact simulations with experimental data shows good agreement over a range of velocities of interest in the design of orbital debris shielding. Source code developed during this research is provided on the enclosed floppy disk. An abstract based on the work described was submitted to the 1994 Hypervelocity Impact Symposium.
NASA Astrophysics Data System (ADS)
Esfandiari, M.; Shirmardi, S. P.; Medhat, M. E.
2014-06-01
In this study, element analysis and the mass attenuation coefficients for matrixes of gold, bronze and water with various impurities and concentrations of heavy metals (Cu, Mn, Pb and Zn) are evaluated and calculated by the MCNP simulation code for photons emitted from Barium-133 and Americium-241 sources with energies between 1 and 100 keV. The MCNP data are compared with experimental data and with WinXCom simulation results reported by Medhat. The results showed that for the bronze and gold matrixes the obtained results are in good agreement with the other methods for energies above 40 and 60 keV, respectively. For water matrixes with various impurities, there is good agreement between the three methods (MCNP, WinXCom and experiment) at both low and high energies.
Axisymmetric Plume Simulations with NASA's DSMC Analysis Code
NASA Technical Reports Server (NTRS)
Stewart, B. D.; Lumpkin, F. E., III
2012-01-01
A comparison of axisymmetric Direct Simulation Monte Carlo (DSMC) Analysis Code (DAC) results to analytic and Computational Fluid Dynamics (CFD) solutions in the near continuum regime and to 3D DAC solutions in the rarefied regime for expansion plumes into a vacuum is performed to investigate the validity of the newest DAC axisymmetric implementation. This new implementation, based on the standard DSMC axisymmetric approach where the representative molecules are allowed to move in all three dimensions but are rotated back to the plane of symmetry by the end of the move step, has been fully integrated into the 3D-based DAC code and therefore retains all of DAC's features, such as being able to compute flow over complex geometries and to model chemistry. Axisymmetric DAC results for a spherically symmetric isentropic expansion are in very good agreement with a source flow analytic solution in the continuum regime and show departure from equilibrium downstream of the estimated breakdown location. Axisymmetric density contours also compare favorably against CFD results for the R1E thruster, while temperature contours depart from equilibrium very rapidly away from the estimated breakdown surface. Finally, axisymmetric and 3D DAC results are in very good agreement over the entire plume region and, as expected, this new axisymmetric implementation shows a significant reduction in the computer resources required to achieve accurate simulations for this problem over the 3D simulations.
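A sketch of the standard DSMC axisymmetric move step described above (advance the particle in three dimensions, then rotate position and velocity back to the symmetry plane); this is a generic textbook version, not DAC source code.

```python
import numpy as np

def axisym_move(r, z, v, dt):
    """Standard DSMC axisymmetric move: advance the particle in 3D from the
    y = 0 plane, then rotate position and velocity back into that plane."""
    x = r + v[0] * dt            # v = (v_r, v_theta, v_z) in the symmetry plane
    y = v[1] * dt
    z = z + v[2] * dt
    r_new = np.hypot(x, y)       # new radial position after the 3D move
    if r_new > 0.0:
        c, s = x / r_new, y / r_new
        # rotate velocity so the particle lies in the y = 0 plane again
        v = np.array([c * v[0] + s * v[1], -s * v[0] + c * v[1], v[2]])
    return r_new, z, v

print(axisym_move(1.0, 0.0, np.array([0.0, 100.0, 50.0]), 1e-3))
```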
Kwag, Jeehyun; Jang, Hyun Jae; Kim, Mincheol; Lee, Sujeong
2014-01-01
Rate and phase codes are believed to be important in neural information processing. Hippocampal place cells provide a good example where both coding schemes coexist during spatial information processing. Spike rate increases in the place field, whereas spike phase precesses relative to the ongoing theta oscillation. However, the intrinsic mechanism that allows a single neuron to generate spike output patterns containing both neural codes is unknown. Using dynamic clamp, we applied the in vivo-like subthreshold dynamics of place cells to in vitro CA1 pyramidal neurons to establish an in vitro model of spike phase precession. Using this in vitro model, we show that membrane potential oscillation (MPO) dynamics is important in the emergence of spike phase codes: blocking the slowly activating, non-inactivating K+ current (IM), which is known to control subthreshold MPO, disrupts MPO and abolishes spike phase precession. We verify the importance of adaptive IM in the generation of phase codes using both an adaptive integrate-and-fire and a Hodgkin-Huxley (HH) neuron model. In particular, using the HH model, we further show that it is the perisomatically located IM with slow activation kinetics that is crucial for the generation of phase codes. These results suggest an important functional role of IM in single neuron computation, where IM serves as an intrinsic mechanism allowing for dual rate and phase coding in single neurons. PMID:25100320
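A minimal adaptive integrate-and-fire sketch in the spirit of the model mentioned above, with an adaptation current w standing in for the slow, non-inactivating IM; all parameters are illustrative, not fitted to CA1 neurons.

```python
import numpy as np

# Adaptive leaky integrate-and-fire neuron: w mimics a slow, non-inactivating
# K+ adaptation current such as IM (all parameters illustrative)
dt, T = 0.1, 500.0                  # ms
C, gL, EL = 200.0, 10.0, -70.0      # pF, nS, mV
Vt, Vr = -50.0, -65.0               # spike threshold and reset (mV)
a, b, tau_w = 2.0, 60.0, 150.0      # nS, pA, ms (adaptation parameters)

V, w, spikes = EL, 0.0, []
for i in range(int(T / dt)):
    t = i * dt
    I = 500.0 if 100.0 <= t < 400.0 else 0.0       # pA current step
    dV = (-gL * (V - EL) - w + I) / C               # membrane equation
    dw = (a * (V - EL) - w) / tau_w                 # slow adaptation variable
    V += dt * dV
    w += dt * dw
    if V >= Vt:
        V = Vr
        w += b                      # spike-triggered adaptation increment
        spikes.append(t)

print(np.diff(spikes))              # inter-spike intervals lengthen (adaptation)
```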
A Comparison of Three PML Treatments for CAA (and CFD)
NASA Technical Reports Server (NTRS)
Goodrich, John W.
2008-01-01
In this paper we compare three Perfectly Matched Layer (PML) treatments by means of a series of numerical experiments, using common numerical algorithms, computational grids, and code implementations. These comparisons are with the Linearized Euler Equations, for uniform base flow. We see that there are two very good PML candidates, both of which can control the introduced error. Furthermore, we also show that corners can be handled with essentially no increase in the introduced error, and that with a good PML, the outer boundary is the most significant source of error.
GAME: GAlaxy Machine learning for Emission lines
NASA Astrophysics Data System (ADS)
Ucci, G.; Ferrara, A.; Pallottini, A.; Gallerani, S.
2018-06-01
We present an updated, optimized version of GAME (GAlaxy Machine learning for Emission lines), a code designed to infer key interstellar medium physical properties from emission line intensities of ultraviolet/optical/far-infrared galaxy spectra. The improvements concern (a) an enlarged spectral library including Pop III stars, (b) the inclusion of spectral noise in the training procedure, and (c) an accurate evaluation of uncertainties. We extensively validate the optimized code and compare its performance against empirical methods and other available emission line codes (PYQZ and HII-CHI-MISTRY) on a sample of 62 SDSS stacked galaxy spectra and 75 observed HII regions. Very good agreement is found for metallicity. However, ionization parameters derived by GAME tend to be higher. We show that this is due to the use of too limited libraries in the other codes. The main advantages of GAME are the simultaneous use of all the measured spectral lines and the extremely short computational times. We finally discuss the code potential and limitations.
Oh, Chang Seok; Lee, Soong Deok; Kim, Yi-Suk; Shin, Dong Hoon
2015-01-01
A previous study showed that East Asian mtDNA haplogroups, especially those of Koreans, could be successfully assigned by the coupled use of analyses of coding region SNP markers and control region mutation motifs. In this study, we examined whether the same triple multiplex analysis for coding region SNPs could also be applied to ancient samples from East Asia as a complement to sequence analysis of the mtDNA control region. From the study of Joseon skeleton samples, we know that the mtDNA haplogroup determined by coding region SNP markers falls within the same haplogroup that sequence analysis of the control region assigns. Considering that ancient samples in previous studies yielded quite a few errors in control region mtDNA sequencing, coding region SNP analysis can be used as a good complement to conventional haplogroup determination, especially for archaeological human bone samples buried underground over long periods. PMID:26345190
NESSY: NLTE spectral synthesis code for solar and stellar atmospheres
NASA Astrophysics Data System (ADS)
Tagirov, R. V.; Shapiro, A. I.; Schmutz, W.
2017-07-01
Context. Physics-based models of solar and stellar magnetically-driven variability are based on the calculation of synthetic spectra for various surface magnetic features as well as quiet regions, which are a function of their position on the solar or stellar disc. Such calculations are performed with radiative transfer codes tailored for modeling broad spectral intervals. Aims: We aim to present the NLTE Spectral SYnthesis code (NESSY), which can be used for modeling of the entire (UV-visible-IR and radio) spectra of solar and stellar magnetic features and quiet regions. Methods: NESSY is a further development of the COde for Solar Irradiance (COSI), in which we have implemented an accelerated Λ-iteration (ALI) scheme for co-moving frame (CMF) line radiation transfer based on a new estimate of the local approximate Λ-operator. Results: We show that the new version of the code performs substantially faster than the previous one and yields a reliable calculation of the entire solar spectrum, in good agreement with the available observations.
NASA Astrophysics Data System (ADS)
Castiglioni, Giacomo
Flows over airfoils and blades in rotating machinery, for unmanned and micro-aerial vehicles, wind turbines, and propellers consist of a laminar boundary layer near the leading edge that is often followed by a laminar separation bubble and transition to turbulence further downstream. Typical Reynolds averaged Navier-Stokes turbulence models are inadequate for such flows. Direct numerical simulation is the most reliable, but is also the most computationally expensive alternative. This work assesses the capability of immersed boundary methods and large eddy simulations to reduce the computational requirements for such flows and still provide high quality results. Two-dimensional and three-dimensional simulations of a laminar separation bubble on a NACA-0012 airfoil at Re_c = 5×10^4 and at 5° of incidence have been performed with an immersed boundary code and a commercial code using body fitted grids. Several sub-grid scale models have been implemented in both codes and their performance evaluated. For the two-dimensional simulations with the immersed boundary method the results show good agreement with the direct numerical simulation benchmark data for the pressure coefficient Cp and the friction coefficient Cf, but only when using dissipative numerical schemes. There is evidence that this behavior can be attributed to the ability of dissipative schemes to damp numerical noise coming from the immersed boundary. For the three-dimensional simulations the results show a good prediction of the separation point, but an inaccurate prediction of the reattachment point unless full direct numerical simulation resolution is used. The commercial code shows good agreement with the direct numerical simulation benchmark data in both two and three-dimensional simulations, but the presence of significant, unquantified numerical dissipation prevents a conclusive assessment of the actual prediction capabilities of very coarse large eddy simulations with low order schemes in general cases. Additionally, a two-dimensional sweep of angles of attack from 0° to 5° is performed showing a qualitative prediction of the jump in lift and drag coefficients due to the appearance of the laminar separation bubble. The numerical dissipation inhibits the predictive capabilities of large eddy simulations whenever it is of the same order of magnitude or larger than the sub-grid scale dissipation. The need to estimate the numerical dissipation is most pressing for low-order methods employed by commercial computational fluid dynamics codes. Following the recent work of Schranner et al., the equations and procedure for estimating the numerical dissipation rate and the numerical viscosity in a commercial code are presented. The method allows for the computation of the numerical dissipation rate and numerical viscosity in the physical space for arbitrary sub-domains in a self-consistent way, using only information provided by the code in question. The method is first tested for a three-dimensional Taylor-Green vortex flow in a simple cubic domain and compared with benchmark results obtained using an accurate, incompressible spectral solver. Afterwards the same procedure is applied for the first time to a realistic flow configuration, specifically to the above discussed laminar separation bubble flow over a NACA 0012 airfoil.
The method appears to be quite robust and its application reveals that for the code and the flow in question the numerical dissipation can be significantly larger than the viscous dissipation or the dissipation of the classical Smagorinsky sub-grid scale model, confirming the previously qualitative finding.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Clovas, A.; Zanthos, S.; Antonopoulos-Domis, M.
2000-03-01
The dose rate conversion factors Ḋ_CF (absorbed dose rate in air per unit activity per unit of soil mass, nGy h^-1 per Bq kg^-1) are calculated 1 m above ground for photon emitters of natural radionuclides uniformly distributed in the soil. Three Monte Carlo codes are used: (1) the MCNP code of Los Alamos; (2) the GEANT code of CERN; and (3) a Monte Carlo code developed in the Nuclear Technology Laboratory of the Aristotle University of Thessaloniki. The accuracy of the Monte Carlo results is tested by comparison of the unscattered flux obtained by the three Monte Carlo codes with an independent straightforward calculation. All codes, and particularly the MCNP code, accurately calculate the absorbed dose rate in air due to the unscattered radiation. For the total radiation (unscattered plus scattered), the Ḋ_CF values calculated by the three codes are in very good agreement with each other. The comparison between these results and results deduced previously by other authors indicates good agreement (less than 15% difference) for photon energies above 1,500 keV. In contrast, the agreement is not as good (differences of 20-30%) for low-energy photons.
Computing Challenges in Coded Mask Imaging
NASA Technical Reports Server (NTRS)
Skinner, Gerald
2009-01-01
This slide presentation reviews the complications and challenges in developing computer systems for coded mask imaging telescopes. The coded mask technique is used when there is no other way to build the telescope, i.e., for wide fields of view, energies too high for focusing optics or too low for the Compton/tracker techniques, and very good angular resolution. The coded mask telescope is described, and the mask is reviewed. The coded masks for the INTErnational Gamma-Ray Astrophysics Laboratory (INTEGRAL) instruments are shown, and a chart showing the types of position sensitive detectors used for coded mask telescopes is also reviewed. Slides describe the mechanism of recovering an image from the masked pattern. The correlation with the mask pattern is described. The matrix approach is reviewed, and other approaches to image reconstruction are described. Included in the presentation is a review of the Energetic X-ray Imaging Survey Telescope (EXIST) / High Energy Telescope (HET), with information about the mission, the operation of the telescope, comparison of the EXIST/HET with the SWIFT/BAT, and details of the design of the EXIST/HET.
The MARS15-based FermiCORD code system for calculation of the accelerator-induced residual dose
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grebe, A.; Leveling, A.; Lu, T.
The FermiCORD code system, a set of codes based on MARS15 that calculates the accelerator-induced residual doses at experimental facilities of arbitrary configurations, has been developed. FermiCORD is written in C++ as an add-on to Fortran-based MARS15. The FermiCORD algorithm consists of two stages: 1) simulation of residual doses on contact with the surfaces surrounding the studied locations and of radionuclide inventories in the structures surrounding those locations using MARS15, and 2) simulation of the emission of the nuclear decay gamma-quanta by the residuals in the activated structures and scoring the prompt doses of these gamma-quanta at arbitrary distances from those structures. The FermiCORD code system has been benchmarked against similar algorithms based on other code systems and showed good agreement. The code system has been applied for calculation of the residual dose of the target station for the Mu2e experiment, and the results have been compared to approximate dosimetric approaches.
NASA Technical Reports Server (NTRS)
Schmidt, James F.
1995-01-01
An off-design axial-flow compressor code is presented and is available from COSMIC for predicting the aerodynamic performance maps of fans and compressors. Steady axisymmetric flow is assumed and the aerodynamic solution reduces to solving the two-dimensional flow field in the meridional plane. A streamline curvature method is used for calculating this flow field outside the blade rows. This code allows for bleed flows, and the first five stators can be reset for each rotational speed, capabilities which are necessary for large multistage compressors. The accuracy of the off-design performance predictions depends upon the validity of the flow loss and deviation correlation models. These empirical correlations for flow loss and deviation are used to model real flow effects, and the off-design code will compute through small reverse flow regions. The input to this off-design code is fully described, and a user's example case for a two-stage fan is included with complete input and output data sets. Also included is a comparison of the off-design code predictions with experimental data, which generally shows good agreement.
Heat simulation via Scilab programming
NASA Astrophysics Data System (ADS)
Hasan, Mohammad Khatim; Sulaiman, Jumat; Karim, Samsul Arifin Abdul
2014-07-01
This paper discusses the use of an open-source software package called Scilab to develop a heat simulator. Here, the heat equation was used to simulate heat behavior in an object. The simulator was developed using the finite difference method. Numerical experiments show that Scilab can produce a good simulation of heat behavior, with clear visual output, from only a simple computer code.
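The paper's simulator is written in Scilab; a minimal equivalent sketch of the explicit (FTCS) finite-difference scheme for the 1-D heat equation, with assumed material and grid parameters, is:

```python
import numpy as np

# Explicit (FTCS) finite differences for u_t = alpha * u_xx on a rod with
# fixed-temperature ends; all parameters are illustrative
alpha, L, nx = 1e-4, 1.0, 51
dx = L / (nx - 1)
dt = 0.4 * dx ** 2 / alpha          # respects the stability limit dt <= dx^2/(2*alpha)

u = np.zeros(nx)
u[0], u[-1] = 100.0, 0.0            # hot left end, cold right end

for _ in range(2000):
    # second-difference update of all interior nodes
    u[1:-1] += alpha * dt / dx ** 2 * (u[2:] - 2 * u[1:-1] + u[:-2])

print(u[::10])                      # profile relaxing toward the linear steady state
```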
37 CFR 1.103 - Suspension of action by the Office.
Code of Federal Regulations, 2010 CFR
2010-07-01
... Code. A request for deferral of examination under this paragraph must include the publication fee set... include: (1) A showing of good and sufficient cause for suspension of action; and (2) The fee set forth in... the period of suspension, and include the processing fee set forth in § 1.17(i). (c) Limited...
Verification of MCNP simulation of neutron flux parameters at TRIGA MK II reactor of Malaysia.
Yavar, A R; Khalafi, H; Kasesaz, Y; Sarmani, S; Yahaya, R; Wood, A K; Khoo, K S
2012-10-01
A 3-D model of the 1 MW TRIGA Mark II research reactor was simulated. Neutron flux parameters were calculated using the MCNP-4C code and were compared with experimental results obtained by k₀-INAA and the absolute method. The average values of φ_th, φ_epi, and φ_fast from the MCNP code were (2.19±0.03)×10^12 cm^-2 s^-1, (1.26±0.02)×10^11 cm^-2 s^-1 and (3.33±0.02)×10^10 cm^-2 s^-1, respectively. These average values were consistent with the experimental results obtained by k₀-INAA. The findings show good agreement between MCNP code results and experimental results. Copyright © 2012 Elsevier Ltd. All rights reserved.
A novel quantum LSB-based steganography method using the Gray code for colored quantum images
NASA Astrophysics Data System (ADS)
Heidari, Shahrokh; Farzadnia, Ehsan
2017-10-01
As one of the prevalent data-hiding techniques, steganography is defined as the act of concealing secret information imperceptibly in a cover multimedia encompassing text, image, video and audio, in order to perform interaction between the sender and the receiver in which nobody except the receiver can figure out the secret data. Here, a quantum LSB-based steganography method utilizing the Gray code for quantum RGB images is investigated. This method uses the Gray code to accommodate two secret qubits in the 3 LSBs of each pixel simultaneously, according to reference tables. Experimental results, analyzed in the MATLAB environment, show that the present scheme performs well and is more secure and applicable than the previous one found in the literature.
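For reference, the classical Gray code conversion at the heart of the scheme, with a simple hypothetical embedding of two secret bits into the 3 LSBs of a pixel value; the paper's actual reference tables and quantum circuits are not reproduced here.

```python
def to_gray(n: int) -> int:
    # binary-reflected Gray code: adjacent values differ in exactly one bit
    return n ^ (n >> 1)

def from_gray(g: int) -> int:
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

def embed(pixel: int, secret2: int) -> int:
    # hypothetical mapping: write the Gray code of the two secret bits
    # into the 3 LSBs of the pixel (not the paper's reference tables)
    return (pixel & ~0b111) | to_gray(secret2)

p = embed(201, 0b10)
print(p, from_gray(p & 0b111))      # recovers the two secret bits (2)
```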
Vaerenberg, Bart; Péan, Vincent; Lesbros, Guillaume; De Ceulaer, Geert; Schauwers, Karen; Daemers, Kristin; Gnansia, Dan; Govaerts, Paul J
2013-06-01
To assess the auditory performance of Digisonic(®) cochlear implant users with electric stimulation (ES) and electro-acoustic stimulation (EAS), with special attention to the processing of low-frequency temporal fine structure. Six patients implanted with a Digisonic(®) SP implant and showing low-frequency residual hearing were fitted with the Zebra(®) speech processor providing both electric and acoustic stimulation. Assessment consisted of monosyllabic speech identification tests in quiet and in noise at different presentation levels, and a pitch discrimination task using harmonic and disharmonic intonating complex sounds (Vaerenberg et al., 2011). These tests investigate place and time coding through pitch discrimination. All tasks were performed with ES only and with EAS. Speech results in noise showed significant improvement with EAS when compared to ES. Whereas EAS did not yield better results in the harmonic intonation test, the improvements in the disharmonic intonation test were remarkable, suggesting better coding of pitch cues requiring phase locking. These results suggest that patients with residual hearing in the low-frequency range still have good phase-locking capacities, allowing them to process fine temporal information. ES relies mainly on place coding but provides poor low-frequency temporal coding, whereas EAS also provides temporal coding in the low-frequency range. Patients with residual phase-locking capacities can make use of these cues.
Cornelius, Iwan; Guatelli, Susanna; Fournier, Pauline; Crosbie, Jeffrey C; Sanchez Del Rio, Manuel; Bräuer-Krisch, Elke; Rosenfeld, Anatoly; Lerch, Michael
2014-05-01
Microbeam radiation therapy (MRT) is a synchrotron-based radiotherapy modality that uses high-intensity beams of spatially fractionated radiation to treat tumours. The rapid evolution of MRT towards clinical trials demands accurate treatment planning systems (TPS), as well as independent tools for the verification of TPS calculated dose distributions in order to ensure patient safety and treatment efficacy. Monte Carlo computer simulation represents the most accurate method of dose calculation in patient geometries and is best suited for the purpose of TPS verification. A Monte Carlo model of the ID17 biomedical beamline at the European Synchrotron Radiation Facility has been developed, including recent modifications, using the Geant4 Monte Carlo toolkit interfaced with the SHADOW X-ray optics and ray-tracing libraries. The code was benchmarked by simulating dose profiles in water-equivalent phantoms subject to irradiation by broad-beam (without spatial fractionation) and microbeam (with spatial fractionation) fields, and comparing against those calculated with a previous model of the beamline developed using the PENELOPE code. Validation against additional experimental dose profiles in water-equivalent phantoms subject to broad-beam irradiation was also performed. Good agreement between codes was observed, with the exception of out-of-field doses and toward the field edge for larger field sizes. Microbeam results showed good agreement between both codes and experimental results within uncertainties. Results of the experimental validation showed agreement for different beamline configurations. The asymmetry in the out-of-field dose profiles due to polarization effects was also investigated, yielding important information for the treatment planning process in MRT. This work represents an important step in the development of a Monte Carlo-based independent verification tool for treatment planning in MRT.
Real-time visual simulation of APT system based on RTW and Vega
NASA Astrophysics Data System (ADS)
Xiong, Shuai; Fu, Chengyu; Tang, Tao
2012-10-01
The Matlab/Simulink simulation model of an APT (acquisition, pointing and tracking) system is analyzed and established. The model's C code, which can be used for real-time simulation, is then generated by RTW (Real-Time Workshop). Practical experiments show that running the C code gives the same simulation result as running the Simulink model directly in the Matlab environment. MultiGen-Vega is a real-time 3D scene simulation software system. With it and OpenGL, an APT scene simulation platform is developed and used to render and display the virtual scenes of the APT system. To add necessary graphics effects to the virtual scenes in real time, GLSL (OpenGL Shading Language) shaders running on the programmable GPU are used. By calling the C code, the scene simulation platform can adjust the system parameters on-line and obtain the APT system's real-time simulation data to drive the scenes. Practical application shows that this visual simulation platform has high efficiency, low cost and good simulation fidelity.
Numerical optimization of three-dimensional coils for NSTX-U
Lazerson, S. A.; Park, J. -K.; Logan, N.; ...
2015-09-03
A tool for the calculation of optimal three-dimensional (3D) perturbative magnetic fields in tokamaks has been developed. The IPECOPT code builds upon the stellarator optimization code STELLOPT to allow for optimization of the linear ideal magnetohydrodynamic perturbed equilibrium (IPEC). This tool has been applied to NSTX-U equilibria, addressing which fields are the most effective at driving NTV torques. The NTV torque calculation is performed by the PENT code. Optimization of the normal field spectrum shows that fields with n = 1 character can drive a large core torque. It is also shown that fields with n = 3 features are capable of driving edge torque and some core torque. Coil current optimization (using the planned in-vessel and existing RWM coils) on NSTX-U suggests the planned coil set is adequate for core and edge torque control. In conclusion, comparison between error field correction experiments on DIII-D and the optimizer shows good agreement.
Kotchenova, Svetlana Y; Vermote, Eric F
2007-07-10
This is the second part of the validation effort of the recently developed vector version of the 6S (Second Simulation of a Satellite Signal in the Solar Spectrum) radiative transfer code (6SV1), primarily used for the calculation of look-up tables in the Moderate Resolution Imaging Spectroradiometer (MODIS) atmospheric correction algorithm. The 6SV1 code was tested against a Monte Carlo code and Coulson's tabulated values for molecular and aerosol atmospheres bounded by different Lambertian and anisotropic surfaces. The code was also tested in scalar mode against the scalar code SHARM to resolve the previous 6S accuracy issues in the case of an anisotropic surface. All test cases were characterized by good agreement between the 6SV1 and the other codes: The overall relative error did not exceed 0.8%. The study also showed that ignoring the effects of radiation polarization in the atmosphere led to large errors in the simulated top-of-atmosphere reflectances: The maximum observed error was approximately 7.2% for both Lambertian and anisotropic surfaces.
Verification and Validation of the k-kL Turbulence Model in FUN3D and CFL3D Codes
NASA Technical Reports Server (NTRS)
Abdol-Hamid, Khaled S.; Carlson, Jan-Renee; Rumsey, Christopher L.
2015-01-01
The implementation of the k-kL turbulence model using multiple computational fluid dynamics (CFD) codes is reported herein. The k-kL model is a two-equation turbulence model based on Abdol-Hamid's closure and Menter's modification to Rotta's two-equation model. Rotta shows that a reliable transport equation can be formed from the turbulent length scale L and the turbulent kinetic energy k. Rotta's equation is well suited for term-by-term modeling and displays useful features compared to other two-equation models. An important difference is that this formulation leads to the inclusion of higher-order velocity derivatives in the source terms of the scale equations. This can enhance the ability of the Reynolds-averaged Navier-Stokes (RANS) solvers to simulate unsteady flows. The present report documents the formulation of the model as implemented in the CFD codes FUN3D and CFL3D. Methodology, verification and validation examples are shown. Attached and separated flow cases are documented and compared with experimental data. The results show generally very good comparisons with canonical and experimental data, as well as matching results code-to-code.
NASA Astrophysics Data System (ADS)
Huang, Sheng; Ao, Xiang; Li, Yuan-yuan; Zhang, Rui
2016-09-01
In order to meet the needs of the high-speed development of optical communication systems, a construction method for quasi-cyclic low-density parity-check (QC-LDPC) codes based on the multiplicative group of a finite field is proposed. The Tanner graph of the parity check matrix of a code constructed by this method has no cycles of length 4, which ensures that the obtained code has good distance properties. Simulation results show that at a bit error rate (BER) of 10^-6, in the same simulation environment, the net coding gain (NCG) of the proposed QC-LDPC(3 780, 3 540) code with a code rate of 93.7% is improved by 2.18 dB and 1.6 dB, respectively, compared with those of the RS(255, 239) code in ITU-T G.975 and the LDPC(3 2640, 3 0592) code in ITU-T G.975.1. In addition, the NCG of the proposed QC-LDPC(3 780, 3 540) code is 0.2 dB and 0.4 dB higher, respectively, than those of the SG-QC-LDPC(3 780, 3 540) code based on two different subgroups of a finite field and the AS-QC-LDPC(3 780, 3 540) code based on two arbitrary sets of a finite field. Thus, the proposed QC-LDPC(3 780, 3 540) code can be well applied in optical communication systems.
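For illustration, a QC-LDPC parity-check matrix is obtained by expanding a base matrix of circulant shift exponents into circulant permutation blocks; the base matrix and lifting factor below are arbitrary examples, not the multiplicative-group construction of the paper.

```python
import numpy as np

def expand_qc(base, z):
    """Expand a base matrix of circulant shift exponents into a binary
    parity-check matrix; -1 denotes an all-zero z-by-z block."""
    blocks = []
    for row in base:
        blocks.append(np.hstack([
            np.zeros((z, z), dtype=int) if s < 0
            else np.roll(np.eye(z, dtype=int), s, axis=1)   # circulant permutation
            for s in row
        ]))
    return np.vstack(blocks)

# Illustrative 2x4 base matrix with lifting factor z = 5 (not the paper's code)
base = [[0, 1, 2, -1],
        [3, -1, 4, 0]]
H = expand_qc(base, 5)
print(H.shape)                      # (10, 20)
```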
The global public good concept: a means of promoting good veterinary governance.
Eloit, M
2012-08-01
At the outset, the concept of a 'public good' was associated with economic policies. However, it has now evolved not only from a national to a global concept (global public good), but also from a concept applying solely to the production of goods to one encompassing societal issues (education, environment, etc.) and fundamental rights, including the right to health and food. Through their actions, Veterinary Services, as defined by the Terrestrial Animal Health Code (Terrestrial Code) of the World Organisation for Animal Health (OIE), help to improve animal health and reduce production losses. In this way they contribute directly and indirectly to food security and to safeguarding human health and economic resources. The organisation and operating procedures of Veterinary Services are therefore key to the efficient governance required to achieve these objectives. The OIE is a major player in global cooperation and governance in the fields of animal and public health through the implementation of its strategic standardisation mission and other programmes for the benefit of Veterinary Services and OIE Member Countries. Thus, the actions of Veterinary Services and the OIE deserve to be recognised as a global public good, backed by public investment to ensure that all Veterinary Services are in a position to apply the principles of good governance and to comply with the international standards for the quality of Veterinary Services set out in the OIE Terrestrial Code (Section 3 on Quality of Veterinary Services) and Aquatic Animal Health Code (Section 3 on Quality of Aquatic Animal Health Services).
Navier-Stokes analysis of a liquid rocket engine disk cavity
NASA Technical Reports Server (NTRS)
Benjamin, Theodore G.; Mcconnaughey, Paul K.
1991-01-01
This paper presents a Navier-Stokes analysis of hydrodynamic phenomena occurring in the aft disk cavity of a liquid rocket engine turbine. The cavity analyzed is that of the Space Shuttle Main Engine Alternate Turbopump currently being developed by NASA and Pratt and Whitney. Comparisons of results obtained from the Navier-Stokes code for two rotating-disk datasets available in the literature are presented as benchmark validations. The benchmark results obtained using the code show good agreement with experimental data, and the turbine disk cavity was analyzed with comparable grid resolution, dissipation levels, and turbulence models. Predicted temperatures show that little mixing of hot and cold fluid occurs in the cavity and that the flow is dominated by swirl and by pumping up the rotating disk.
Gilmore-Bykovskyi, Andrea L
2015-01-01
Mealtime behavioral symptoms are distressing and frequently interrupt eating for the individual experiencing them and for others in the environment. A computer-assisted coding scheme was developed to measure caregiver person-centeredness and behavioral symptoms for nursing home residents with dementia during mealtime interactions. The purpose of this pilot study was to determine the feasibility, ease of use, and inter-observer reliability of the coding scheme, and to explore its clinical utility. Trained observers coded 22 observations. Data collection procedures were acceptable to participants. Overall, the coding scheme proved to be feasible and easy to execute, and it yielded good to very good inter-observer agreement following observer re-training. The coding scheme captured clinically relevant, modifiable antecedents to mealtime behavioral symptoms, but it would be enhanced by the inclusion of measures for resident engagement and by consolidation of items for measuring caregiver person-centeredness that co-occurred and were difficult for observers to distinguish. Published by Elsevier Inc.
NASA Astrophysics Data System (ADS)
Singh, R. P.; Ahmad, R.
2015-12-01
A comparison of observed ground motion parameters of the recent Gorkha, Nepal earthquake of 25 April 2015 (Mw 7.8) with the ground motion parameters predicted using existing attenuation relations for the Himalayan region will be presented. The earthquake took about 8000 lives and destroyed thousands of poor-quality buildings, and it was felt by millions of people living in Nepal, China, India, Bangladesh, and Bhutan. Knowledge of ground motion parameters is very important in developing seismic codes for seismically active regions like the Himalaya, for better design of buildings. The ground motion parameters recorded in the recent earthquake and its aftershocks are compared with attenuation relations for the Himalayan region; the predicted ground motion parameters show good correlation with the observed ones. The results will be of great use to civil engineers in updating existing building codes in the Himalayan and surrounding regions and also for the evaluation of seismic hazards. The results clearly show that only attenuation relations developed for the Himalayan region should be used; relations based on other regions fail to provide good estimates of the observed ground motion parameters.
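For readers unfamiliar with attenuation relations, comparisons like the one above rest on ground-motion prediction equations of a generic functional form; the sketch below uses placeholder coefficients purely for illustration and is not the Himalayan relation discussed in the study.

    import math

    def predict_pga(M, R_km, c1=-1.5, c2=0.5, c3=1.0, h=10.0):
        # Generic attenuation-relation form:
        #   ln(PGA) = c1 + c2*M - c3*ln(sqrt(R^2 + h^2)).
        # The coefficients here are placeholders, not a published relation.
        return math.exp(c1 + c2 * M - c3 * math.log(math.hypot(R_km, h)))

    print(predict_pga(7.8, 80.0))   # illustrative PGA (g) for Mw 7.8 at 80 km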
NASA Technical Reports Server (NTRS)
Komerath, Narayanan M.; Schreiber, Olivier A.
1987-01-01
The wake model was implemented using a VAX 750 and a Microvax II workstation. Online graphics capability was provided using a DISSPLA graphics package. The rotor model used by Beddoes was significantly extended to include azimuthal variations due to forward flight and a simplified scheme for locating critical points where vortex elements are placed. A test case was obtained for validation of the predictions of induced velocity. Comparison of the results indicates that the code requires some more features before satisfactory predictions can be made over the whole rotor disk. Specifically, shed vorticity due to the azimuthal variation of blade loading must be incorporated into the model, and interactions between vortices shed from the four blades of the model rotor must be included. The Scully code for calculating the velocity field is being modified in parallel with these efforts to enable comparison with experimental data. To date, some comparisons with flow visualization data obtained at Georgia Tech have been performed and show good agreement for the isolated rotor case. Comparison of time-resolved velocity data obtained at Georgia Tech also shows good agreement. Modifications are being implemented to enable generation of time-averaged results for comparison with NASA data.
Genetic Programming-based Phononic Bandgap Structure Design
2011-09-01
derivative-based methods is that they require a good starting location to find the global minimum of a function. As can be seen from figure 2, there are many…
A charging study of ACTS using NASCAP
NASA Technical Reports Server (NTRS)
Herr, Joel L.
1991-01-01
The NASA Charging Analyzer Program (NASCAP) computer code is a three-dimensional finite element charging code designed to analyze spacecraft charging in the magnetosphere. Because of the characteristics of this problem, NASCAP can use a quasi-static approach to provide a spacecraft designer with an understanding of how a specific spacecraft will interact with a geomagnetic substorm. The results of the simulation can help designers evaluate the probability and location of arc discharges of charged surfaces on the spacecraft. A charging study of NASA's Advanced Communication Technology Satellite (ACTS) using NASCAP is reported. The results show that the ACTS metalized multilayer insulating blanket design should provide good electrostatic discharge control.
Design of a digital voice data compression technique for orbiter voice channels
NASA Technical Reports Server (NTRS)
1975-01-01
Candidate techniques were investigated for digital voice compression to a transmission rate of 8 kbps. Good voice quality, speaker recognition, and robustness in the presence of error bursts were considered. The technique of delayed-decision adaptive predictive coding is described and compared with conventional adaptive predictive coding. Results include a set of experimental simulations recorded on analog tape. The two FM broadcast segments produced show the delayed-decision technique to be virtually undegraded or minimally degraded at 0.001 and 0.01 Viterbi decoder bit error rates. Preliminary estimates of the hardware complexity of this technique indicate potential for implementation in space shuttle orbiters.
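To make the contrast concrete, the toy sketch below compares a conventional predictive coder, which quantizes each prediction residual greedily, with a delayed-decision variant that searches whole sequences of quantizer outputs before committing. The predictor coefficient, quantizer levels, and exhaustive search are illustrative assumptions, not the report's implementation; practical coders prune the search tree.

    from itertools import product

    def greedy_apc(x, levels, a=0.9):
        # Conventional predictive coding: quantize each residual greedily.
        xhat, out = 0.0, []
        for s in x:
            q = min(levels, key=lambda l: abs((s - a * xhat) - l))
            out.append(q)
            xhat = a * xhat + q
        return out

    def delayed_decision_apc(x, levels, a=0.9):
        # Delayed decision: search whole quantizer-output sequences and keep
        # the one with least total reconstruction error. Real coders prune
        # this tree; exhaustive search is for illustration only.
        best, best_err = None, float('inf')
        for seq in product(levels, repeat=len(x)):
            xhat, err = 0.0, 0.0
            for s, q in zip(x, seq):
                xhat = a * xhat + q
                err += (s - xhat) ** 2
            if err < best_err:
                best, best_err = list(seq), err
        return best

    x = [1.0, 0.5, -0.2, -0.8]
    levels = [-1.0, -0.5, 0.0, 0.5, 1.0]
    print(greedy_apc(x, levels))
    print(delayed_decision_apc(x, levels))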
magnum.fe: A micromagnetic finite-element simulation code based on FEniCS
NASA Astrophysics Data System (ADS)
Abert, Claas; Exl, Lukas; Bruckner, Florian; Drews, André; Suess, Dieter
2013-11-01
We have developed a finite-element micromagnetic simulation code based on the FEniCS package called magnum.fe. Here we describe the numerical methods that are applied as well as their implementation with FEniCS. We apply a transformation method for the solution of the demagnetization-field problem. A semi-implicit weak formulation is used for the integration of the Landau-Lifshitz-Gilbert equation. Numerical experiments show the validity of simulation results. magnum.fe is open source and well documented. The broad feature range of the FEniCS package makes magnum.fe a good choice for the implementation of novel micromagnetic finite-element algorithms.
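As context for the time integration mentioned above, the Landau-Lifshitz-Gilbert equation can be sketched for a single macrospin with a plain explicit scheme; this is a minimal illustration of the dynamics, not magnum.fe's semi-implicit finite-element formulation, and the field and step size below are arbitrary.

    import numpy as np

    def llg_step(m, H, dt, alpha=0.1, gamma=2.211e5):
        # Landau-Lifshitz form of the LLG equation:
        # dm/dt = -gamma/(1+alpha^2) * (m x H + alpha * m x (m x H)),
        # followed by renormalization to keep |m| = 1.
        mxH = np.cross(m, H)
        dm = -gamma / (1.0 + alpha**2) * (mxH + alpha * np.cross(m, mxH))
        m = m + dt * dm
        return m / np.linalg.norm(m)

    m = np.array([1.0, 0.0, 0.0])        # initial magnetization
    H = np.array([0.0, 0.0, 8.0e5])      # applied field, A/m
    for _ in range(1000):                # 0.1 ns with dt = 1e-13 s
        m = llg_step(m, H, dt=1e-13)
    print(m)                             # precessing and damping toward +z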
Investigation of Near Shannon Limit Coding Schemes
NASA Technical Reports Server (NTRS)
Kwatra, S. C.; Kim, J.; Mo, Fan
1999-01-01
Turbo codes can deliver performance that is very close to the Shannon limit. This report investigates algorithms for convolutional turbo codes and block turbo codes, both of which can achieve performance near the Shannon limit. The performance of the schemes is obtained using computer simulations. There are three sections in this report. The first section is the introduction, where fundamental knowledge about coding, block coding, and convolutional coding is discussed. In the second section, the basic concepts of convolutional turbo codes are introduced and the performance of turbo codes, especially high-rate turbo codes, is provided from the simulation results. After introducing all the parameters that help turbo codes achieve such good performance, it is concluded that the output weight distribution should be the main consideration in designing turbo codes. Based on the output weight distribution, performance bounds for turbo codes are given. Then, the relationships between the output weight distribution and factors like the generator polynomial, the interleaver, and the puncturing pattern are examined, and a criterion for the best selection of system components is provided. The puncturing pattern algorithm is discussed in detail, and different puncturing patterns are compared for each high rate. For most of the high-rate codes, the puncturing pattern does not show any significant effect on code performance if a pseudo-random interleaver is used in the system. For some special-rate codes with poor performance, an alternative puncturing algorithm is designed which restores their performance close to the Shannon limit. Finally, in section three, for iterative decoding of block codes, the method of building a trellis for block codes, the structure of the iterative decoding system, and the calculation of extrinsic values are discussed.
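For illustration of the puncturing idea discussed above, the sketch below thins the parity streams of a rate-1/3 systematic mother code down to rate 1/2 with a periodic pattern; the pattern is a common textbook choice, not one of the report's optimized patterns.

    def puncture(streams, pattern):
        # Puncture parallel output streams of a turbo encoder.
        # streams: equal-length bit lists (systematic + parity streams).
        # pattern: 0/1 matrix, one row per stream; column (k mod period)
        # decides which bits of time slot k are transmitted.
        period = len(pattern[0])
        out = []
        for k in range(len(streams[0])):
            for s, row in zip(streams, pattern):
                if row[k % period]:
                    out.append(s[k])
        return out

    # Rate-1/3 mother code (systematic x, parities p1, p2) punctured to 1/2:
    # keep all systematic bits, alternate the parity bits.
    x  = [1, 0, 1, 1]
    p1 = [0, 1, 1, 0]
    p2 = [1, 1, 0, 0]
    tx = puncture([x, p1, p2], pattern=[[1, 1], [1, 0], [0, 1]])
    print(tx, len(tx))   # 8 transmitted bits for 4 info bits -> rate 1/2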
Design of the DEMO Fusion Reactor Following ITER.
Garabedian, Paul R; McFadden, Geoffrey B
2009-01-01
Runs of the NSTAB nonlinear stability code show there are many three-dimensional (3D) solutions of the advanced tokamak problem subject to axially symmetric boundary conditions. These numerical simulations based on mathematical equations in conservation form predict that the ITER international tokamak project will encounter persistent disruptions and edge localized mode (ELMS) crashes. Test particle runs of the TRAN transport code suggest that for quasineutrality to prevail in tokamaks a certain minimum level of 3D asymmetry of the magnetic spectrum is required which is comparable to that found in quasiaxially symmetric (QAS) stellarators. The computational theory suggests that a QAS stellarator with two field periods and proportions like those of ITER is a good candidate for a fusion reactor. For a demonstration reactor (DEMO) we seek an experiment that combines the best features of ITER, with a system of QAS coils providing external rotational transform, which is a measure of the poloidal field. We have discovered a configuration with unusually good quasisymmetry that is ideal for this task.
Multicore-based 3D-DWT video encoder
NASA Astrophysics Data System (ADS)
Galiano, Vicente; López-Granado, Otoniel; Malumbres, Manuel P.; Migallón, Hector
2013-12-01
Three-dimensional wavelet transform (3D-DWT) encoders are good candidates for applications like professional video editing, video surveillance, multi-spectral satellite imaging, etc., where a frame must be reconstructed as quickly as possible. In this paper, we present a new 3D-DWT video encoder based on a fast run-length coding engine. Furthermore, we present several multicore optimizations to speed up the 3D-DWT computation. An exhaustive evaluation of the proposed encoder (3D-GOP-RL) has been performed, and we have compared the evaluation results with other video encoders in terms of rate/distortion (R/D), coding/decoding delay, and memory consumption. Results show that the proposed encoder obtains good R/D results for high-resolution video sequences with nearly in-place computation, using only the memory needed to store a group of pictures. After applying the multicore optimization strategies over the 3D-DWT, the proposed encoder is able to compress a full high-definition video sequence in real time.
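A minimal sketch of the transform stage, assuming the PyWavelets package and a Haar filter for brevity (the paper's filter bank and run-length engine are not reproduced here): one decomposition level applied jointly over the temporal and spatial axes of a group of pictures, followed by the crude thresholding a run-length coder would exploit.

    import numpy as np
    import pywt

    # Group of pictures: 8 frames of 64x64 video, temporal axis first.
    gop = np.random.rand(8, 64, 64).astype(np.float32)

    # One level of the separable 3D-DWT over (t, y, x).
    subbands = pywt.dwtn(gop, 'haar')
    print(sorted(subbands))           # ['aaa', 'aad', ..., 'ddd']
    print(subbands['aaa'].shape)      # (4, 32, 32) low-pass approximation

    # Small detail coefficients become runs of zeros for run-length coding.
    q = np.where(np.abs(subbands['ddd']) < 0.05, 0.0, subbands['ddd'])
    print('zero fraction:', float(np.mean(q == 0)))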
Hypersonic CFD applications for the National Aero-Space Plane
NASA Technical Reports Server (NTRS)
Richardson, Pamela F.; Mcclinton, Charles R.; Bittner, Robert D.; Dilley, A. Douglas; Edwards, Kelvin W.
1989-01-01
Design and analysis of the NASP depends heavily upon developing the critical technology areas that cover the entire engineering design of the vehicle. These areas include materials, structures, propulsion systems, propellants, integration of airframe and propulsion systems, controls, subsystems, and aerodynamics. Currently, verification of many of the classical engineering tools relies heavily on computational fluid dynamics. Advances are being made in the development of CFD codes to accomplish nose-to-tail analyses for hypersonic aircraft. Additional details involving the partial development, analysis, verification, and application of the CFL3D code and the SPARK combustor code are discussed. A nonequilibrium version of CFL3D that is presently being developed and tested is also described. Examples are given of calculations for research hypersonic aircraft geometries, and comparisons with experimental data show good agreement.
Bélanger, Nathalie N; Mayberry, Rachel I; Rayner, Keith
2013-01-01
Many deaf individuals do not develop the high-level reading skills that would allow them to take full part in society. To attempt to explain this widespread difficulty in the deaf population, much research has homed in on the use of phonological codes during reading. The hypothesis that the use of phonological codes is associated with good reading skills in deaf readers, though not well supported, still lingers in the literature. We investigated skilled and less-skilled adult deaf readers' processing of orthographic and phonological codes in parafoveal vision during reading by monitoring their eye movements and using the boundary paradigm. Orthographic preview benefits were found in early measures of reading for skilled hearing, skilled deaf, and less-skilled deaf readers, but only skilled hearing readers processed phonological codes in parafoveal vision. Crucially, skilled and less-skilled deaf readers showed a very similar pattern of preview benefits during reading. These results support the notion that reading difficulties in deaf adults are not linked to a failure to activate phonological codes during reading.
Error control techniques for satellite and space communications
NASA Technical Reports Server (NTRS)
Costello, Daniel J., Jr.
1989-01-01
The performance of bandwidth-efficient trellis codes on channels with phase jitter, or on channels disturbed by jamming and impulse noise, is analyzed. A heuristic algorithm was developed for the construction of bandwidth-efficient trellis codes with any constraint length up to about 30, any signal constellation, and any code rate. Construction of good distance-profile trellis codes for sequential decoding and comparison of random coding bounds of trellis-coded modulation schemes are also discussed.
A robust recognition and accurate locating method for circular coded diagonal target
NASA Astrophysics Data System (ADS)
Bao, Yunna; Shang, Yang; Sun, Xiaoliang; Zhou, Jiexin
2017-10-01
As a category of special control points which can be automatically identified, artificial coded targets have been widely developed in the fields of computer vision, photogrammetry, augmented reality, etc. In this paper, a new circular coded target designed by RockeTech technology Corp. Ltd is analyzed and studied, called the circular coded diagonal target (CCDT). A novel detection and recognition method with good robustness is proposed and implemented in Visual Studio. In this algorithm, the ellipse features of the center circle are first used for rough positioning. Then, according to the characteristics of the center diagonal target, a circular frequency filter is designed to choose the correct center circle and eliminate non-target noise. The precise positioning of the coded target is done by the correlation-coefficient fitting extreme value method. Finally, the coded target recognition is achieved by decoding the binary sequence in the outer ring of the extracted target. To test the proposed algorithm, simulation experiments and real experiments were carried out. The results show that the CCDT recognition and accurate locating method proposed in this paper can robustly recognize and accurately locate targets in complex and noisy backgrounds.
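The rough-positioning step can be pictured in a few lines of OpenCV. This sketch uses a synthetic image and an ad hoc circularity threshold, and it only lists nearly circular ellipse candidates; it stands in for, rather than reproduces, the paper's frequency filtering and correlation fitting stages.

    import cv2
    import numpy as np

    # Synthetic stand-in for a CCDT image: one filled ellipse on black.
    img = np.zeros((200, 200), np.uint8)
    cv2.ellipse(img, (100, 100), (40, 30), 15, 0, 360, 255, -1)

    _, bw = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(bw, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if len(c) >= 5:                       # fitEllipse needs >= 5 points
            center, axes, angle = cv2.fitEllipse(c)
            if min(axes) / max(axes) > 0.5:   # keep nearly circular candidates
                print('candidate center:', center)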
Avidan, Alexander; Weissman, Charles; Levin, Phillip D
2015-04-01
Quick response (QR) codes containing anesthesia syllabus data were introduced into an anesthesia information management system. A code was generated automatically at the conclusion of each case and was available for resident case logging using a smartphone or tablet. The goal of this study was to evaluate the use and usability/user-friendliness of such a system. Resident case logging practices were assessed prior to introducing the QR codes. QR code use and satisfaction among residents were reassessed at three and six months. Before QR code introduction, only 12/23 (52.2%) residents maintained a case log. Most of the remaining residents (9/23, 39.1%) expected to receive a case list from the anesthesia information management system database at the end of their residency. At three months and six months, 17/26 (65.4%) and 15/25 (60.0%) residents, respectively, were using the QR codes. Satisfaction was rated as very good or good. QR codes for residents' case logging with smartphones or tablets were successfully introduced in an anesthesia information management system and used by most residents. QR codes can be successfully implemented into medical practice to support data transfer. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
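Generating such a code is straightforward; a minimal sketch with the Python qrcode package, using a made-up payload format, since the study does not specify the encoded fields:

    import qrcode

    # Hypothetical syllabus payload assembled at the end of a case;
    # the field names here are illustrative only.
    payload = "case=12345;procedure=lap-chole;asa=2;technique=GA;date=2015-01-01"
    img = qrcode.make(payload)        # returns a PIL image of the QR code
    img.save("case_12345_qr.png")     # residents scan this with a phone app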
The location and recognition of anti-counterfeiting code image with complex background
NASA Astrophysics Data System (ADS)
Ni, Jing; Liu, Quan; Lou, Ping; Han, Ping
2017-07-01
The order of the cigarette market is a key issue in the tobacco business system. The anti-counterfeiting code, as a kind of effective anti-counterfeiting technology, can identify counterfeit goods and effectively maintain the normal order of the market and consumers' rights and interests. There are complex backgrounds, light interference, and other problems in the anti-counterfeiting code images obtained by the tobacco recognizer. To solve these problems, the paper proposes a locating method based on the Susan operator, combined with a sliding window and line scanning. In order to reduce the interference of background and noise, we extract the red component of the image and convert the color image into a gray image. For confusing characters, a recognition-result correction based on the template matching method is adopted to improve the recognition rate. With this method, the anti-counterfeiting code can be located and recognized correctly in images with complex backgrounds. The experimental results show the effectiveness and feasibility of the approach.
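The preprocessing step described above reduces, in essence, to keeping one color plane and rescaling it; a minimal numpy sketch (synthetic image, with a contrast stretch added for illustration):

    import numpy as np

    def red_to_gray(img_rgb):
        # Keep only the red plane, then contrast-stretch to 8 bits.
        red = img_rgb[..., 0].astype(np.float32)
        rng = float(np.ptp(red)) + 1e-6
        return (255.0 * (red - red.min()) / rng).astype(np.uint8)

    img = (np.random.rand(120, 160, 3) * 255).astype(np.uint8)
    print(red_to_gray(img).shape)     # (120, 160), single-channel image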
Kuroki, Naomi; Miyashita, Nana; Hino, Yoshiyuki; Kayashima, Kotaro; Fujino, Yoshihisa; Takada, Mikio; Nagata, Tomohisa; Yamataki, Hajime; Sakuragi, Sonoko; Kan, Hirohiko; Morita, Tetsuya; Ito, Akiyoshi; Mori, Koji
2009-09-01
The purpose of this study was to identify what motivates employers to promote good occupational health and safety practices in small-scale enterprises. Previous studies have shown that small-scale enterprises generally pay insufficient attention to issues of occupational health and safety. These findings were mainly derived from questionnaire-based surveys. Nevertheless, some small-scale enterprises in which employers exercise good leadership do take a progressive approach to occupational health and safety. Although good practices can be identified in small-scale enterprises, it remains unclear what motivates employers in small-scale enterprises to actively implement occupational health and safety practices. We speculated that identifying employer motivations in promoting occupational health would help to spread good practices among small-scale enterprises. Using a qualitative approach based on the KJ method, we interviewed ten employers who actively promote occupational health and safety in the workplace. The employers were asked to discuss their views of occupational health and safety in their own words. A semi-structured interview format was used, and transcripts were made of the interviews. Each transcript was independently coded by two or more researchers. These transcripts and codes were integrated, and then the research group members discussed the heading titles and structural relationships between them according to the KJ method. Qualitative analysis revealed that all the employers expressed a strong interest in a "good company" and "good management". They emphasized four elements of "good management", namely "securing human resources", "trust of business partners", "social responsibility" and "employer's health condition itself", and considered that addressing occupational health and safety was essential to the achievement of these four elements. Consistent with previous findings, the results showed that implementation of occupational health and safety activities depended on "cost", "human resources", "time to perform", and "advisory organization". These results suggest that employer awareness of the relationship between good management and occupational health is essential to the implementation of occupational health and safety practices in small-scale enterprises.
Diffusive deposition of aerosols in Phebus containment during FPT-2 test
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kontautas, A.; Urbonavicius, E.
2012-07-01
At present, lumped-parameter codes are the main tools to investigate the complex response of the containment of a Nuclear Power Plant in case of an accident. Continuous development and validation of the codes is required to perform realistic investigation of the processes that determine the possible source term of radioactive products to the environment. Validation of the codes is based on the comparison of calculated results with measurements performed in experimental facilities. The most extensive experimental program to investigate fission product release from molten fuel, transport through the cooling circuit, and deposition in the containment is performed in the PHEBUS test facility. Test FPT-2 performed in this facility is considered for analysis of processes taking place in the containment. Earlier investigations using the COCOSYS code showed that the code could be successfully used for analysis of thermal-hydraulic processes and deposition of aerosols, but it was also noticed that diffusive deposition on the vertical walls does not fit well with the measured results. The CPA module of the ASTEC code implements a different model for diffusive deposition; therefore, the PHEBUS containment model was transferred from the COCOSYS code to ASTEC-CPA to investigate the influence of the diffusive deposition modelling. Analysis was performed using a PHEBUS containment model of 16 nodes. The calculated thermal-hydraulic parameters are in good agreement with measured results, which gives a basis for realistic simulation of aerosol transport and deposition processes. The investigations showed that the diffusive deposition model has an influence on the aerosol deposition distribution on different surfaces in the test facility. (authors)
SNL/JAEA Collaborations on Sodium Fire Benchmarking.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Clark, Andrew Jordan; Denman, Matthew R; Takata, Takashi
Two sodium spray fire experiments performed by Sandia National Laboratories (SNL) were used for a code-to-code comparison between CONTAIN-LMR and SPHINCS. Both computer codes are used for modeling sodium accidents in sodium fast reactors. The comparison between the two codes provides insights into the ability of both codes to model sodium spray fires. The SNL T3 and T4 experiments are 20 kg sodium spray fires with sodium spray temperatures of 200 deg C and 500 deg C, respectively. Given the relatively low sodium temperature in the SNL T3 experiment, the sodium spray experienced a period of non-combustion. The vessel in the SNL T4 experiment experienced a rapid pressurization that caused one of the instrumentation ports to fail during the sodium spray. Despite these unforeseen difficulties, both codes were shown to be in good agreement with the experiments. The subsequent pool fire that develops from the unburned sodium spray is a significant characteristic of the T3 experiment. SPHINCS showed better long-term agreement with the SNL T3 experiment than CONTAIN-LMR. The unexpected port failure during the SNL T4 experiment presented modelling challenges. The time at which the port failure occurred is unknown, but it is believed to have occurred at about 11 seconds into the sodium spray fire. The sensitivity analysis for the SNL T4 experiment shows that, with a port failure, the sodium spray fire can still maintain elevated pressures during the spray.
An ultrasound transient elastography system with coded excitation.
Diao, Xianfen; Zhu, Jing; He, Xiaonian; Chen, Xin; Zhang, Xinyu; Chen, Siping; Liu, Weixiang
2017-06-28
Ultrasound transient elastography technology has found its place in elastography because it is safe and easy to operate. However, its application in deep tissue is limited. The aim of this study is to design an ultrasound transient elastography system with coded excitation to obtain greater detection depth. The ultrasound transient elastography system requires tissue vibration to be strictly synchronous with ultrasound detection; therefore, an ultrasound transient elastography system with coded excitation was designed. A central component of this transient elastography system is an arbitrary waveform generator with multi-channel signal output. This arbitrary waveform generator is used to produce the tissue vibration signal, the ultrasound detection signal, and the synchronous triggering signal of the radio frequency data acquisition system. The arbitrary waveform generator can produce different forms of vibration waveform to induce different shear wave propagation in the tissue. Moreover, it can achieve either traditional pulse-echo detection or phase-modulated or frequency-modulated coded excitation. A 7-chip Barker code and traditional pulse-echo detection were programmed on the designed ultrasound transient elastography system to detect the shear wave in a phantom excited by the mechanical vibrator. Then an elasticity QA phantom and sixteen in vitro rat livers were used for performance evaluation of the two detection pulses. The elasticity QA phantom's results show that our system is effective, and the rat liver results show that the detection depth can be increased by more than 1 cm. In addition, the SNR (signal-to-noise ratio) is increased by 15 dB using the 7-chip Barker coded excitation. Applying the 7-chip Barker coded excitation technique to ultrasound transient elastography can increase the detection depth and SNR. Using coded excitation technology to assess the human liver, especially in obese patients, may be a good choice.
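The gain from coded excitation comes from pulse compression: the 7-chip Barker sequence has a sharp autocorrelation peak, so correlating the received signal with the known code concentrates the pulse energy. A minimal numpy illustration (the synthetic echo position and noise level are arbitrary):

    import numpy as np

    barker7 = np.array([1, 1, 1, -1, -1, 1, -1], dtype=float)

    # Received line: the coded pulse arrives at sample 60, buried in noise.
    rx = np.zeros(200)
    rx[60:67] = barker7
    rx += 0.4 * np.random.randn(rx.size)

    # Pulse compression: matched filtering against the known code.
    compressed = np.correlate(rx, barker7, mode='same')
    print(int(np.argmax(compressed)))   # peak near sample 63, the echo center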
Validation of Heat Transfer and Film Cooling Capabilities of the 3-D RANS Code TURBO
NASA Technical Reports Server (NTRS)
Shyam, Vikram; Ameri, Ali; Chen, Jen-Ping
2010-01-01
The capabilities of the 3-D unsteady RANS code TURBO have been extended to include heat transfer and film cooling applications. The results of simulations performed with the modified code are compared to experiment and to theory, where applicable. Wilcox's k-ω turbulence model has been implemented to close the RANS equations. Two simulations are conducted: (1) flow over a flat plate and (2) flow over an adiabatic flat plate cooled by one hole inclined at 35° to the free stream. For (1), agreement with theory is found to be excellent for heat transfer, represented by the local Nusselt number, and quite good for momentum, as represented by the local skin friction coefficient. This report compares the local skin friction coefficients and Nusselt numbers on a flat plate obtained using Wilcox's k-ω model with the theory of Blasius. The study looks at laminar and turbulent flows over an adiabatic flat plate and over an isothermal flat plate for two different wall temperatures. It is shown that TURBO is able to accurately predict heat transfer on a flat plate. For (2), TURBO shows good qualitative agreement with film cooling experiments performed on a flat plate with one cooling hole. Quantitatively, film effectiveness is underpredicted downstream of the hole.
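The laminar flat-plate baselines used in such comparisons are the classical similarity results, sketched below; these are the standard correlations, not values taken from the report.

    import math

    def blasius_cf(Re_x):
        # Local skin friction, laminar flat plate: Cf = 0.664 / sqrt(Re_x).
        return 0.664 / math.sqrt(Re_x)

    def laminar_nu(Re_x, Pr):
        # Local Nusselt number, laminar flat plate:
        # Nu_x = 0.332 * Re_x^(1/2) * Pr^(1/3).
        return 0.332 * math.sqrt(Re_x) * Pr ** (1.0 / 3.0)

    Re_x, Pr = 1.0e5, 0.71   # air at a station 10^5 Reynolds numbers in
    print(blasius_cf(Re_x), laminar_nu(Re_x, Pr))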
Project DIPOLE WEST - Multiburst Environment (Non-Simultaneous Detonations)
1976-09-01
Purpose of the series was to obtain … HULL hydrodynamic air blast code show good correlation. … supervision. Contributions were also made by Dr. John Dewey, University of Victoria; Mr. A. P. R. Lambert, Canadian General Electric; Mr. Charles Needham
NASA Astrophysics Data System (ADS)
Chen, Gang; Yang, Bing; Zhang, Xiaoyun; Gao, Zhiyong
2017-07-01
The latest High Efficiency Video Coding (HEVC) standard significantly increases encoding complexity in exchange for improved coding efficiency. Due to the limited computational capability of handheld devices, complexity-constrained video coding has drawn great attention in recent years. A complexity control algorithm based on adaptive mode selection is proposed for interframe coding in HEVC. Considering the direct proportionality between encoding time and computational complexity, the computational complexity is measured in terms of encoding time. First, complexity is mapped to a target in terms of prediction modes. Then, an adaptive mode selection algorithm is proposed for the mode decision process. Specifically, an optimal mode combination scheme chosen through offline statistics is employed at low complexity. If the complexity budget has not been used up, an adaptive mode sorting method is employed to further improve coding efficiency. The experimental results show that the proposed algorithm achieves a very large complexity control range (down to 10% of full complexity) for the HEVC encoder while maintaining good rate-distortion performance. For the low-delay P condition, compared with the direct resource allocation method and the state-of-the-art method, an average gain of 0.63 and 0.17 dB in BD-PSNR is observed for 18 sequences when the target complexity is around 40%.
NASA Astrophysics Data System (ADS)
Boumehrez, Farouk; Brai, Radhia; Doghmane, Noureddine; Mansouri, Khaled
2018-01-01
Recently, video streaming has attracted much attention and interest due to its capability to process and transmit large volumes of data. We propose a quality of experience (QoE) model relying on a high efficiency video coding (HEVC) encoder adaptation scheme, in turn based on multiple description coding (MDC), for video streaming. The main contributions of the paper are: (1) a performance evaluation of the new and emerging video coding standard HEVC/H.265, based on the variation of quantization parameter (QP) values for different video contents, to deduce their influence on the sequence to be transmitted; (2) an investigation of QoE support for multimedia applications in wireless networks, in which we inspect the impact of packet loss on the QoE of transmitted video sequences; and (3) an HEVC encoder parameter adaptation scheme based on MDC, modeled with the encoder parameters and an objective QoE model. A comparative study revealed that the proposed MDC approach is effective for improving the transmission, with a peak signal-to-noise ratio (PSNR) gain of about 2 to 3 dB. Results show that a good choice of QP value can compensate for transmission channel effects and improve received video quality, although HEVC/H.265 is also sensitive to packet loss. The obtained results show the efficiency of our proposed method in terms of PSNR and mean opinion score.
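Multiple description coding can be illustrated in its simplest temporal form: split the frame sequence into two independently decodable descriptions so that losing one degrades rather than destroys the video. The sketch below is a toy illustration of that principle, not the paper's HEVC-based scheme.

    def split_descriptions(frames):
        # Odd and even frames form two independently decodable descriptions.
        return frames[0::2], frames[1::2]

    def reconstruct(d0, d1=None):
        if d1 is None:
            # One description lost: conceal by frame repetition.
            return [f for frame in d0 for f in (frame, frame)]
        out = []
        for a, b in zip(d0, d1):
            out += [a, b]
        return out

    frames = list(range(8))
    d0, d1 = split_descriptions(frames)
    print(reconstruct(d0, d1))   # [0, 1, ..., 7], full quality
    print(reconstruct(d0))       # degraded but usable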
Benchmarking kinetic calculations of resistive wall mode stability
NASA Astrophysics Data System (ADS)
Berkery, J. W.; Liu, Y. Q.; Wang, Z. R.; Sabbagh, S. A.; Logan, N. C.; Park, J.-K.; Manickam, J.; Betti, R.
2014-05-01
Validating the calculations of kinetic resistive wall mode (RWM) stability is important for confidently predicting RWM stable operating regions in ITER and other high performance tokamaks for disruption avoidance. Benchmarking the calculations of the Magnetohydrodynamic Resistive Spectrum-Kinetic (MARS-K) [Y. Liu et al., Phys. Plasmas 15, 112503 (2008)], Modification to Ideal Stability by Kinetic effects (MISK) [B. Hu et al., Phys. Plasmas 12, 057301 (2005)], and Perturbed Equilibrium Nonambipolar Transport (PENT) [N. Logan et al., Phys. Plasmas 20, 122507 (2013)] codes for two Solov'ev analytical equilibria and a projected ITER equilibrium has demonstrated good agreement between the codes. The important particle frequencies, the frequency resonance energy integral in which they are used, the marginally stable eigenfunctions, perturbed Lagrangians, and fluid growth rates are all generally consistent between the codes. The most important kinetic effect at low rotation is the resonance between the mode rotation and the trapped thermal particles' precession drift, and MARS-K, MISK, and PENT show good agreement in this term. The different ways the rational surface contribution was treated historically in the codes are identified as a source of disagreement in the bounce and transit resonance terms at higher plasma rotation. Calculations from all of the codes support the present understanding that RWM stability can be increased by kinetic effects at low rotation through precession drift resonance and at high rotation by bounce and transit resonances, while intermediate rotation can remain susceptible to instability. The applicability of benchmarked kinetic stability calculations to experimental results is demonstrated by the prediction by MISK calculations of near-marginal growth rates for experimental marginal stability points from the National Spherical Torus Experiment (NSTX) [M. Ono et al., Nucl. Fusion 40, 557 (2000)].
Simulation of nonlinear propagation of biomedical ultrasound using PZFlex and the KZK Texas code
NASA Astrophysics Data System (ADS)
Qiao, Shan; Jackson, Edward; Coussios, Constantin-C.; Cleveland, Robin
2015-10-01
In biomedical ultrasound, nonlinear acoustics can be important in both diagnostic and therapeutic applications, and robust simulation tools are needed in the design process but also for day-to-day use such as treatment planning. For most biomedical applications the ultrasound sources generate focused sound beams of finite amplitude. The KZK equation is a common model, as it accounts for nonlinearity, absorption, and paraxial diffraction, and there are a number of solvers available, primarily developed by research groups. We compare the predictions of the KZK Texas code (a finite-difference time-domain algorithm) to an FEM-based commercial software, PZFlex. PZFlex solves the continuity equation and momentum conservation equation with a correction for nonlinearity in the equation of state, incorporated using an incrementally linear, 2nd-order accurate, explicit algorithm in the time domain. Nonlinear ultrasound beams from two transducers driven at 1 MHz and 3.3 MHz, respectively, were simulated by both the KZK Texas code and PZFlex, and the pressure field was also measured by a fibre-optic hydrophone to validate the models. Further simulations were carried out over a wide range of frequencies. The comparisons showed good agreement for the fundamental frequency for PZFlex, the KZK Texas code, and the experiments. For the harmonic components, the KZK Texas code was in good agreement with measurements, but PZFlex underestimated the amplitude: by 32% for the 2nd harmonic and 66% for the 3rd harmonic. The underestimation of harmonics by PZFlex was more significant as the fundamental frequency increased. Furthermore, non-physical oscillations in the axial profile of harmonics occurred in the PZFlex results when the amplitudes were relatively low. These results suggest that careful benchmarking of nonlinear simulations is important.
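For reference, the KZK equation combines the three effects named above; in retarded time τ = t − z/c₀ its standard form can be written as follows (notation assumed: p acoustic pressure, δ diffusivity of sound, β coefficient of nonlinearity, ρ₀ and c₀ ambient density and sound speed):

    \frac{\partial^2 p}{\partial z \, \partial \tau}
      = \frac{c_0}{2} \nabla_\perp^2 p
      + \frac{\delta}{2 c_0^3} \frac{\partial^3 p}{\partial \tau^3}
      + \frac{\beta}{2 \rho_0 c_0^3} \frac{\partial^2 p^2}{\partial \tau^2}

The right-hand terms are, in order, paraxial diffraction, thermoviscous absorption, and quadratic nonlinearity.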
Chibani, Omar; Li, X Allen
2002-05-01
Three Monte Carlo photon/electron transport codes (GEPTS, EGSnrc, and MCNP) are benchmarked against dose measurements in homogeneous (both low- and high-Z) media as well as at interfaces. A brief overview of the physical models used by each code for photon and electron (positron) transport is given. Absolute calorimetric dose measurements for 0.5 and 1 MeV electron beams incident on homogeneous and multilayer media are compared with the predictions of the three codes. Comparison with dose measurements in two-layer media exposed to a 60Co gamma source is also performed. In addition, comparisons between the codes (including the EGS4 code) are done for (a) 0.05 to 10 MeV electron beams and positron point sources in lead, (b) high-energy photons (10 and 20 MeV) irradiating a multilayer phantom (water/steel/air), and (c) simulation of a 90Sr/90Y brachytherapy source. Good agreement is observed between the calorimetric electron dose measurements and the predictions of GEPTS and EGSnrc in both homogeneous and multilayer media. MCNP outputs are found to depend on the energy-indexing method (Default/ITS style). This dependence is significant in homogeneous media as well as at interfaces. MCNP(ITS) fits the experimental data more closely than MCNP(DEF), except for the case of Be. At low energy (0.05 and 0.1 MeV), MCNP(ITS) dose distributions in lead show higher maximums in comparison with GEPTS and EGSnrc. EGS4 produces too-penetrating electron dose distributions in high-Z media, especially at low energy (<0.1 MeV). For positrons, differences between GEPTS and EGSnrc are observed in lead because GEPTS distinguishes positrons from electrons in both its elastic multiple scattering and bremsstrahlung emission models. For the 60Co source, quite good agreement between calculations and measurements is observed with regard to the experimental uncertainty. For the other cases (10 and 20 MeV photon sources and the 90Sr/90Y beta source), good agreement is found between the three codes. In conclusion, differences between GEPTS and EGSnrc results are found to be very small for almost all media and energies studied, while MCNP results depend significantly on the electron energy-indexing method.
Numerical Modeling of Active Flow Control in a Boundary Layer Ingesting Offset Inlet
NASA Technical Reports Server (NTRS)
Allan, Brian G.; Owens, Lewis R.; Berrier, Bobby L.
2004-01-01
This investigation evaluates the numerical prediction of flow distortion and pressure recovery for a boundary layer ingesting offset inlet with active flow control devices. The numerical simulations are computed using a Reynolds averaged Navier-Stokes code developed at NASA. The numerical results are validated by comparison to experimental wind tunnel tests conducted at NASA Langley Research Center at both low and high Mach numbers. Baseline comparisons showed good agreement between numerical and experimental results. Numerical simulations for the inlet with passive and active flow control also showed good agreement at low Mach numbers where experimental data has already been acquired. Numerical simulations of the inlet at high Mach numbers with flow control jets showed an improvement of the flow distortion. Studies on the location of the jet actuators, for the high Mach number case, were conducted to provide guidance for the design of a future experimental wind tunnel test.
Efficient Cache use for Stencil Operations on Structured Discretization Grids
NASA Technical Reports Server (NTRS)
Frumkin, Michael; VanderWijngaart, Rob F.
2001-01-01
We derive tight bounds on cache misses for the evaluation of explicit stencil operators on structured grids. Our lower bound is based on the isoperimetric property of the discrete octahedron. Our upper bound is based on the good surface-to-volume ratio of a parallelepiped spanned by a reduced basis of the interference lattice of a grid. Measurements show that our algorithm typically reduces the number of cache misses by a factor of three, relative to compiler-optimized code. We show that stencil calculations on grids whose interference lattice has a short vector exhibit abnormally high numbers of cache misses. We call such grids unfavorable and suggest avoiding them in computations by appropriate padding. By direct measurements on a MIPS R10000 processor we show a good correlation between abnormally high numbers of cache misses and unfavorable three-dimensional grids.
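The padding remedy can be seen even from high-level code: when the leading dimension of a grid is a power of two, vertically adjacent stencil operands map to the same cache sets, and padding the allocation changes the stride. A numpy sketch follows; the grid size, pad width, and measured effect depend on the cache architecture.

    import numpy as np
    import timeit

    def sweep(a):
        # 5-point stencil over the interior of a 2-D grid.
        return (a[1:-1, 1:-1] + a[:-2, 1:-1] + a[2:, 1:-1]
                + a[1:-1, :-2] + a[1:-1, 2:])

    n = 1024
    bad = np.zeros((n, n))                 # power-of-two row stride
    good = np.zeros((n, n + 16))[:, :n]    # padded allocation, same logical grid

    print(timeit.timeit(lambda: sweep(bad), number=20))
    print(timeit.timeit(lambda: sweep(good), number=20))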
NASA Technical Reports Server (NTRS)
Hwang, D. P.; Boldman, D. R.; Hughes, C. E.
1994-01-01
An axisymmetric panel code and a three-dimensional Navier-Stokes code (used as an inviscid Euler code) were verified for low-speed, high-angle-of-attack flow conditions. A three-dimensional Navier-Stokes code (used as an inviscid code) and an axisymmetric Navier-Stokes code (used as both a viscous and an inviscid code) were also assessed for high Mach number cruise conditions. The boundary layer calculations were made by using the results from the panel code or the Euler calculation. The panel method can predict the internal surface pressure distributions very well if no shock exists. However, only Euler and Navier-Stokes calculations can provide a good prediction of the surface static pressure distribution, including the pressure rise across the shock. Because of the high CPU time required for a three-dimensional Navier-Stokes calculation, only the axisymmetric Navier-Stokes calculation was considered at cruise conditions. The use of suction and tangential-blowing boundary layer control to eliminate flow separation on the internal surface was demonstrated for low free-stream Mach number, high-angle-of-attack cases. The calculations also show that transition from laminar to turbulent flow on the external cowl surface can be delayed by using suction boundary layer control at cruise flow conditions. The results were compared with experimental data where possible.
Total x-ray power measurements in the Sandia LIGA program.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Malinowski, Michael E.; Ting, Aili
2005-08-01
Total X-ray power measurements using aluminum block calorimetry and other techniques were made at LIGA X-ray scanner synchrotron beamlines located at both the Advanced Light Source (ALS) and the Advanced Photon Source (APS). This block calorimetry work was initially performed on the LIGA beamline 3.3.1 of the ALS to provide experimental checks of predictions of the LEX-D (LIGA Exposure-Development) code for LIGA X-ray exposures, version 7.56, the version of the code in use at the time calorimetry was done. These experiments showed that it was necessary to use bend magnet field strengths and electron storage ring energies different from the default values originally in the code in order to obtain good agreement between experiment and theory. The results indicated that agreement between LEX-D predictions and experiment could be as good as 5% only if (1) more accurate values of the ring energies, (2) local values of the magnet field at the beamline source point, and (3) the NIST database for X-ray/materials interactions were used as code inputs. This local magnetic field value and these accurate ring energies, together with the NIST database, are now defaults in the newest release of LEX-D, version 7.61. Three-dimensional simulations of the temperature distributions in the aluminum calorimeter block for a typical ALS power measurement were made with the ABAQUS code and found to be in good agreement with the experimental temperature data. As an application of the block calorimetry technique, the X-ray power exiting the mirror in place at a LIGA scanner located at the APS beamline 10 BM was measured with a calorimeter similar to the one used at the ALS. The overall results at the APS demonstrated the utility of calorimetry in helping to characterize the total X-ray power in LIGA beamlines. In addition to the block calorimetry work at the ALS and APS, a preliminary comparison of the use of heat flux sensors, photodiodes, and modified beam calorimeters as total X-ray power monitors was made at the ALS, beamline 3.3.1. This work showed that a modification of a commercially available heat flux sensor could result in a simple, direct-reading beam power meter that could be useful for monitoring total X-ray power in Sandia's LIGA exposure stations at the ALS, APS, and Stanford Synchrotron Radiation Laboratory (SSRL).
Fast methods to numerically integrate the Reynolds equation for gas fluid films
NASA Technical Reports Server (NTRS)
Dimofte, Florin
1992-01-01
The alternating direction implicit (ADI) method is adopted, modified, and applied to the Reynolds equation for thin gas fluid films. An efficient code is developed to predict both the steady-state and dynamic performance of an aerodynamic journal bearing. An alternative approach is shown for hybrid gas journal bearings, using Liebmann's iterative solution (LIS) for elliptic partial differential equations. The results are compared with known design criteria from experimental data. The developed methods show good accuracy and very short computer running times in comparison with methods based on matrix inversion. The computer codes need a small amount of memory and can be run on either personal computers or mainframe systems.
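For context, the equation being integrated is the compressible (isothermal) Reynolds equation for a gas film of thickness h, pressure p, viscosity μ, and surface speed U, in its standard form (notation assumed); the ADI scheme advances it by alternating implicit sweeps in the two coordinate directions:

    \frac{\partial}{\partial x}\left( p h^3 \frac{\partial p}{\partial x} \right)
      + \frac{\partial}{\partial z}\left( p h^3 \frac{\partial p}{\partial z} \right)
      = 6 \mu U \frac{\partial (p h)}{\partial x}
      + 12 \mu \frac{\partial (p h)}{\partial t}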
NASA Astrophysics Data System (ADS)
Liu, Z. X.; Xu, X. Q.; Gao, X.; Xia, T. Y.; Joseph, I.; Meyer, W. H.; Liu, S. C.; Xu, G. S.; Shao, L. M.; Ding, S. Y.; Li, G. Q.; Li, J. G.
2014-09-01
Experimental measurements of edge localized modes (ELMs) observed on the EAST experiment are compared to linear and nonlinear theoretical simulations of peeling-ballooning modes using the BOUT++ code. Simulations predict that the dominant toroidal mode number of the ELM instability becomes larger for lower current, which is consistent with the mode structure captured with visible light using an optical CCD camera. The poloidal mode number of the simulated pressure perturbation shows good agreement with the filamentary structure observed by the camera. The nonlinear simulation is also consistent with the experimentally measured energy loss during an ELM crash and with the radial speed of ELM effluxes measured using a gas puffing imaging diagnostic.
Network coding multiuser scheme for indoor visible light communications
NASA Astrophysics Data System (ADS)
Zhang, Jiankun; Dang, Anhong
2017-12-01
Visible light communication (VLC) is a unique alternative for indoor data transfer and is developing beyond point-to-point links. However, for realizing high-capacity networks, VLC faces challenges including the constrained bandwidth of the optical access point and random occlusion. A network coding scheme for VLC (NC-VLC) is proposed, with increased throughput and system robustness. Based on the Lambertian illumination model, the theoretical decoding failure probability of the multiuser NC-VLC system is derived, and the impact of the system parameters on the performance is analyzed. Experiments demonstrate the proposed scheme successfully in the indoor multiuser scenario. These results indicate that the NC-VLC system shows good performance under link loss and random occlusion.
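The core idea of network coding can be shown with the classic two-user XOR example: the access point broadcasts a single coded packet, and each user recovers the other's data from what it already knows. A minimal sketch (packet contents are arbitrary):

    def xor(a, b):
        # Bytewise XOR of two equal-length packets.
        return bytes(x ^ y for x, y in zip(a, b))

    # Two users' packets share one downlink slot: the access point
    # broadcasts the XOR, and each user decodes with its own packet.
    p1 = b"user1 data"
    p2 = b"user2 data"
    coded = xor(p1, p2)

    print(xor(coded, p2))   # user 1 recovers p1
    print(xor(coded, p1))   # user 2 recovers p2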
Tan, Edwin T.; Martin, Sarah R.; Fortier, Michelle A.; Kain, Zeev N.
2012-01-01
Objective To develop and validate a behavioral coding measure, the Children's Behavior Coding System-PACU (CBCS-P), for children's distress and nondistress behaviors while in the postanesthesia recovery unit. Methods A multidisciplinary team examined videotapes of children in the PACU and developed a coding scheme that subsequently underwent a refinement process (CBCS-P). To examine the reliability and validity of the coding system, 121 children and their parents were videotaped during their stay in the PACU. Participants were healthy children undergoing elective, outpatient surgery and general anesthesia. The CBCS-P was utilized and objective data from medical charts (analgesic consumption and pain scores) were extracted to establish validity. Results Kappa values indicated good-to-excellent (κ's > .65) interrater reliability of the individual codes. The CBCS-P had good criterion validity when compared to children's analgesic consumption and pain scores. Conclusions The CBCS-P is a reliable, observational coding method that captures children's distress and nondistress postoperative behaviors. These findings highlight the importance of considering context in both the development and application of observational coding schemes. PMID:22167123
Wang, Lanlan; Ma, Rongna; Jiang, Liushan; Jia, Liping; Jia, Wenli; Wang, Huaisheng
2017-06-15
A novel dual-signal ratiometric electrochemical aptasensor for highly sensitive and selective detection of thrombin has been designed on the basis of a signal-on and signal-off strategy. A ferrocene-labeled hairpin probe (Fc-HP), a thrombin aptamer, and methyl-blue-labeled bio-bar-coded AuNPs (MB-P3-AuNPs) were rationally introduced for the construction of the assay platform, which combined the advantages of aptamer recognition, bio-bar-coded nanoprobe amplification, and ratiometric signal readout. In the presence of thrombin, the interaction between thrombin and the aptamer leads to the departure of the MB-P3-AuNPs from the sensing interface, and the conformation of the single-stranded Fc-HP changes to a hairpin structure that confines the Fc near the electrode surface. These conformational changes result in an increase of the oxidation current of Fc and a decrease of that of MB. Therefore, the recognition event of the target can be read out ratiometrically from both the "signal-off" of MB and the "signal-on" of Fc. The proposed strategy showed a wide linear detection range from 0.003 to 30 nM with a detection limit of 1.1 pM. Moreover, it exhibits excellent selectivity, good stability, and acceptable fabrication reproducibility. By changing the recognition probe, this protocol could be easily expanded to the detection of other targets, showing promising potential applications in disease diagnostics and bioanalysis. Copyright © 2016. Published by Elsevier B.V.
Leth-Steensen, Craig; Citta, Richie
2016-01-01
Performance in numerical classification tasks involving either parity or magnitude judgements is quicker when small numbers are mapped onto a left-sided response and large numbers onto a right-sided response than for the opposite mapping (i.e., the spatial-numerical association of response codes or SNARC effect). Recent research by Gevers et al. [Gevers, W., Santens, S., Dhooge, E., Chen, Q., Van den Bossche, L., Fias, W., & Verguts, T. (2010). Verbal-spatial and visuospatial coding of number-space interactions. Journal of Experimental Psychology: General, 139, 180-190] suggests that this effect also arises for vocal "left" and "right" responding, indicating that verbal-spatial coding has a role to play in determining it. Another presumably verbal-based, spatial-numerical mapping phenomenon is the linguistic markedness association of response codes (MARC) effect whereby responding in parity tasks is quicker when odd numbers are mapped onto left-sided responses and even numbers onto right-sided responses. A recent account of both the SNARC and MARC effects is based on the polarity correspondence principle [Proctor, R. W., & Cho, Y. S. (2006). Polarity correspondence: A general principle for performance of speeded binary classification tasks. Psychological Bulletin, 132, 416-442]. This account assumes that stimulus and response alternatives are coded along any number of dimensions in terms of - and + polarities with quicker responding when the polarity codes for the stimulus and the response correspond. In the present study, even-odd parity judgements were made using either "left" and "right" or "bad" and "good" vocal responses. Results indicated that a SNARC effect was indeed present for the former type of vocal responding, providing further evidence for the sufficiency of the verbal-spatial coding account for this effect. However, the decided lack of an analogous SNARC-like effect in the results for the latter type of vocal responding provides an important constraint on the presumed generality of the polarity correspondence account. On the other hand, the presence of robust MARC effects for "bad" and "good" but not "left" and "right" vocal responses is consistent with the view that such effects are due to conceptual associations between semantic codes for odd-even and bad-good (but not necessarily left-right).
Supersonic dynamic stability characteristics of the test technique demonstrator NASP configuration
NASA Technical Reports Server (NTRS)
Dress, David A.; Boyden, Richmond P.; Cruz, Christopher I.
1992-01-01
Wind tunnel tests of a National Aero-Space Plane (NASP) configuration were conducted in both test sections of the Langley Unitary Plan Wind Tunnel. The model used is a Langley-designed blended-body NASP configuration. Dynamic stability characteristics were measured on this configuration at Mach numbers of 2.0, 2.5, 3.5, and 4.5. In addition to tests of the baseline configuration, component buildup tests were conducted. The test results show that the baseline configuration generally has positive damping about all three axes, with only isolated exceptions. In addition, there was generally good agreement between the in-phase dynamic parameters and the corresponding static data that were measured during another series of tests in the Unitary Plan Wind Tunnel. Also included are comparisons of the experimental damping parameters with results from the engineering predictive code APAS (Aerodynamic Preliminary Analysis System). These comparisons show good agreement at low angles of attack; however, the comparisons are generally not as good at the higher angles of attack.
NASA Technical Reports Server (NTRS)
Bogert, Philip B.; Satyanarayana, Arunkumar; Chunchu, Prasad B.
2006-01-01
Splitting, ultimate failure load, and the damage path in center-notched composite specimens subjected to in-plane tension loading are predicted using a progressive failure analysis methodology. A 2-D Hashin-Rotem failure criterion is used in determining intra-laminar fiber and matrix failures. This progressive failure methodology has been implemented in the Abaqus/Explicit and Abaqus/Standard finite element codes through the user-written subroutines "VUMAT" and "USDFLD", respectively. A 2-D finite element model is used for predicting the intra-laminar damage. Analysis results obtained from the Abaqus/Explicit and Abaqus/Standard codes show good agreement with experimental results. The importance of modeling delamination in progressive failure analysis methodology is recognized for future studies. The use of an explicit integration dynamics code for simple specimen geometry and static loading establishes a foundation for future analyses, where complex loading and nonlinear dynamic interactions of damage and structure will necessitate it.
Compression of computer generated phase-shifting hologram sequence using AVC and HEVC
NASA Astrophysics Data System (ADS)
Xing, Yafei; Pesquet-Popescu, Béatrice; Dufaux, Frederic
2013-09-01
With the capability of achieving twice the compression ratio of Advanced Video Coding (AVC) with similar reconstruction quality, High Efficiency Video Coding (HEVC) is expected to become the new leading technique of video coding. In order to reduce the storage and transmission burden of digital holograms, in this paper we propose to use HEVC for compressing phase-shifting digital hologram sequences (PSDHS). By simulating phase-shifting digital holography (PSDH) interferometry, interference patterns between illuminated three-dimensional (3D) virtual objects and a stepwise phase-shifted reference wave are generated as digital holograms. The hologram sequences are obtained from the movement of the virtual objects and compressed with AVC and HEVC. The experimental results show that AVC and HEVC are efficient at compressing PSDHS, with HEVC giving better performance. Good compression rate and reconstruction quality can be obtained at bitrates above 15,000 kbps.
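For readers unfamiliar with PSDH, the four-step phase-shifting reconstruction the abstract relies on can be sketched in a few lines. This is not the authors' code: the object field and unit-amplitude reference wave below are synthetic stand-ins, and the Fresnel propagation step of a full hologram pipeline is omitted.

    import numpy as np

    # Hypothetical complex object field (a stand-in for light scattered by a
    # 3D virtual object); the reference wave R is a unit-amplitude plane wave.
    rng = np.random.default_rng(0)
    obj = rng.random((64, 64)) * np.exp(2j * np.pi * rng.random((64, 64)))

    # Four-step phase shifting: record I_k = |O + R exp(i k pi/2)|^2, k = 0..3.
    I0, I1, I2, I3 = [np.abs(obj + np.exp(1j * k * np.pi / 2)) ** 2
                      for k in range(4)]

    # Recover the complex field: O = [(I0 - I2) + i(I1 - I3)] / (4 R*), R = 1.
    recovered = ((I0 - I2) + 1j * (I1 - I3)) / 4.0
    print(np.allclose(recovered, obj))  # True: exact for noiseless data

The interferogram sequence produced this way is what AVC and HEVC then compress as ordinary video frames.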
NASA Astrophysics Data System (ADS)
Athaudage, Chandranath R. N.; Bradley, Alan B.; Lech, Margaret
2003-12-01
A dynamic programming-based optimization strategy for a temporal decomposition (TD) model of speech and its application to low-rate speech coding in storage and broadcasting is presented. In previous work with the spectral stability-based event localizing (SBEL) TD algorithm, the event localization was performed based on a spectral stability criterion. Although this approach gave reasonably good results, there was no assurance on the optimality of the event locations. In the present work, we have optimized the event localizing task using a dynamic programming-based optimization strategy. Simulation results show that an improved TD model accuracy can be achieved. A methodology of incorporating the optimized TD algorithm within the standard MELP speech coder for the efficient compression of speech spectral information is also presented. The performance evaluation results revealed that the proposed speech coding scheme achieves 50%-60% compression of speech spectral information with negligible degradation in the decoded speech quality.
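The core idea, choosing event boundaries that minimize a total modeling error by dynamic programming rather than by a local stability test, can be illustrated with a generic segmentation sketch (Python; the within-segment cost here is a simple squared deviation from the segment mean, standing in for the TD model's reconstruction error):

    import numpy as np

    def segment_cost(x, i, j):
        # Squared deviation of frames x[i:j] from their mean, a stand-in
        # for the TD model's reconstruction error on one event segment.
        if j - i < 1:
            return 0.0
        seg = x[i:j]
        return float(((seg - seg.mean(axis=0)) ** 2).sum())

    def optimal_events(x, k):
        # Split the frames into k contiguous segments of minimum total cost.
        n = len(x)
        cost = [[segment_cost(x, i, j) for j in range(n + 1)] for i in range(n)]
        best = np.full((k + 1, n + 1), np.inf)
        back = np.zeros((k + 1, n + 1), dtype=int)
        best[0][0] = 0.0
        for seg in range(1, k + 1):
            for j in range(1, n + 1):
                for i in range(seg - 1, j):
                    c = best[seg - 1][i] + cost[i][j]
                    if c < best[seg][j]:
                        best[seg][j], back[seg][j] = c, i
        bounds, j = [], n
        for seg in range(k, 0, -1):      # backtrack to the event boundaries
            j = back[seg][j]
            bounds.append(j)
        return sorted(bounds[:-1]), float(best[k][n])

    frames = np.concatenate([np.zeros((10, 3)), np.ones((10, 3)),
                             2 * np.ones((10, 3))])
    print(optimal_events(frames, 3))     # boundaries at frames 10 and 20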
Neutron skyshine from intense 14-MeV neutron source facility
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nakamura, T.; Hayashi, K.; Takahashi, A.
1985-07-01
The dose distribution and the spectrum variation of neutrons due to the skyshine effect have been measured with a high-efficiency rem counter, a multisphere spectrometer, and an NE-213 scintillator in the environment surrounding an intense 14-MeV neutron source facility. The dose distribution and the energy spectra of neutrons around the facility used as a skyshine source have also been measured to enable the absolute evaluation of the skyshine effect. The skyshine effect was analyzed by two multigroup Monte Carlo codes, NIMSAC and MMCR-2, by two discrete ordinates S_n codes, ANISN and DOT3.5, and by the shield structure design code for skyshine, SKYSHINE-II. The calculated results show good agreement with the measured results in absolute values. These experimental results should be useful as benchmark data for skyshine analysis and for the shielding design of fusion facilities.
NASA Astrophysics Data System (ADS)
Nazir, R. T.; Bari, M. A.; Bilal, M.; Sardar, S.; Nasim, M. H.; Salahuddin, M.
2017-02-01
We performed R-matrix calculations of photoionization cross sections for the two ground-state-configuration 3s^23p^5 (^2P^o_{3/2,1/2}) levels and 12 excited states of Ni XII using the relativistic Dirac Atomic R-matrix Codes (DARC), across the photon energy range between the ionization thresholds of the corresponding states and well above the threshold of the last level of the Ni XIII target ion. Generally, good agreement is obtained between our results and the earlier theoretical photoionization cross sections. Moreover, we have used the two independent fully relativistic GRASP and FAC codes to calculate fine-structure energy levels, wavelengths, oscillator strengths, and transition rates among the lowest 48 levels belonging to the configurations (3s^23p^4, 3s3p^5, 3p^6, 3s^23p^33d) in Ni XIII. Additionally, radiative lifetimes of all the excited states of Ni XIII are presented. Our results for the atomic structure of Ni XIII show good agreement with other theoretical and experimental results available in the literature, and our calculated lifetimes agree well with the experimental ones. Our present results are useful for plasma diagnostics of fusion and astrophysical plasmas.
Scan-Line Methods in Spatial Data Systems
1990-09-04
algorithms in detail to show some of the implementation issues. Data Compression: Storage and transmission times can be reduced by using compression ... goes through the data. Luckily, there are good one-directional compression algorithms, such as run-length coding [13], in which each scan line can be ... independently compressed. These are the algorithms to use in a parallel scan-line system. Data compression is usually only used for long-term storage of
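Run-length coding, mentioned above as the natural per-scan-line compressor, is easy to make concrete (illustrative Python, not code from the report):

    def rle_encode(line):
        # Encode one scan line as (value, run_length) pairs.
        runs = []
        for v in line:
            if runs and runs[-1][0] == v:
                runs[-1][1] += 1
            else:
                runs.append([v, 1])
        return [tuple(r) for r in runs]

    def rle_decode(runs):
        return [v for v, count in runs for _ in range(count)]

    scan_line = [0, 0, 0, 7, 7, 1, 1, 1, 1]
    codes = rle_encode(scan_line)
    print(codes)                         # [(0, 3), (7, 2), (1, 4)]
    assert rle_decode(codes) == scan_line
    # Each line compresses independently, so in a parallel scan-line system
    # every processor can encode or decode its own lines.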
High frequency scattering from a thin lossless dielectric slab. M.S. Thesis
NASA Technical Reports Server (NTRS)
Burgener, K. W.
1979-01-01
A solution for scattering from a thin dielectric slab is developed based on geometrical optics and the geometrical theory of diffraction with the intention of developing a model for a windshield of a small private aircraft for incorporation in an aircraft antenna code. Results of the theory are compared with experimental measurements and moment method calculations showing good agreement. Application of the solution is also addressed.
Low-Density Parity-Check (LDPC) Codes Constructed from Protographs
NASA Astrophysics Data System (ADS)
Thorpe, J.
2003-08-01
We introduce a new class of low-density parity-check (LDPC) codes constructed from a template called a protograph. The protograph serves as a blueprint for constructing LDPC codes of arbitrary size whose performance can be predicted by analyzing the protograph. We apply standard density evolution techniques to predict the performance of large protograph codes. Finally, we use a randomized search algorithm to find good protographs.
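To make the "blueprint" idea concrete, here is a generic protograph lifting sketch in Python: each base-matrix edge is replaced by a z x z circulant permutation, so the lifted code inherits the protograph's degree structure. The base matrix and circulant shifts below are illustrative, not a construction from the paper.

    import numpy as np

    def lift_protograph(base, z, rng):
        # Replace each protograph edge by a z x z circulant permutation,
        # giving a parity-check matrix z times the size of the base matrix.
        rows, cols = base.shape
        H = np.zeros((rows * z, cols * z), dtype=np.uint8)
        eye = np.eye(z, dtype=np.uint8)
        for r in range(rows):
            for c in range(cols):
                shifts = rng.choice(z, size=int(base[r, c]), replace=False)
                for s in shifts:         # parallel edges get distinct shifts
                    H[r*z:(r+1)*z, c*z:(c+1)*z] |= np.roll(eye, s, axis=1)
        return H

    base = np.array([[1, 2, 1, 0],       # a tiny hypothetical protograph:
                     [1, 1, 1, 1]])      # 2 check nodes, 4 variable nodes
    H = lift_protograph(base, z=8, rng=np.random.default_rng(1))
    print(H.shape, H.sum(axis=0)[:8])    # (16, 32); column weights = base sums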
Building Energy Codes: Policy Overview and Good Practices
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cox, Sadie
2016-02-19
Globally, 32% of total final energy consumption is attributed to the building sector. To reduce energy consumption, energy codes set minimum energy efficiency standards for the building sector. With effective implementation, building energy codes can support energy cost savings and complementary benefits associated with electricity reliability, air quality improvement, greenhouse gas emission reduction, increased comfort, and economic and social development. This policy brief seeks to support building code policymakers and implementers in designing effective building code programs.
A Study of Failure in Small Pressurized Cylindrical Shells Containing a Crack
NASA Technical Reports Server (NTRS)
Barwell, Craig A.; Eber, Lorenz; Fyfe, Ian M.
1998-01-01
The deformation in the vicinity of axial cracks in thin pressurized cylinders is examined using small experimental models. The loading applied was either symmetric or unsymmetric about the crack plane, the latter being caused by structural constraints such as stringers. The objective was twofold: first, to provide experimental results that allow computer modeling techniques to be evaluated for deformations that are significantly different from those experienced by flat plates; and second, to examine the deformations and conditions associated with the onset of crack kinking, which often precedes crack curving. The stresses which control crack growth in a cylindrical geometry depend on conditions introduced by the axial bulging, which is an integral part of this type of failure. For the symmetric geometry, both the hoop and radial strain just ahead of the crack (r = a) were measured, and these results were compared with those obtained from a variety of structural analysis codes, in particular STAGS [1], ABAQUS, and ANSYS. In addition to these measurements, the pressures at the onset of stable and unstable crack growth were obtained, and the corresponding crack deformations were measured as the pressures were increased to failure. For the unsymmetric cases, measurements were taken of the crack kinking angle and the displacements in the vicinity of the crack. In general, the strains ahead of the crack showed good agreement between the three computer codes and between the codes and the experiments. In the case of crack behavior, it was determined that modeling stable tearing with a crack-tip opening displacement fracture criterion could be successfully combined with the finite-element analysis techniques used in structural analysis codes. The analytic results obtained in this study were very compatible with the experimental observations of crack growth. Measured crack kinking angles also showed good agreement with theories based on the maximum principal stress criterion.
A CellML simulation compiler and code generator using ODE solving schemes
2012-01-01
Models written in description languages such as CellML are becoming a popular solution to the handling of complex cellular physiological models in biological function simulations. However, in order to fully simulate a model, boundary conditions and ordinary differential equation (ODE) solving schemes have to be combined with it. Though boundary conditions can be described in CellML, it is difficult to explicitly specify ODE solving schemes using existing tools. In this study, we define an ODE solving scheme description language based on XML and propose a code generation system for biological function simulations. In the proposed system, biological simulation programs using various ODE solving schemes can be easily generated. We designed a two-stage approach: in the first stage, the system generates the equation set associating the physiological model variable values at a certain time t with the values at t + Δt; the second stage generates the simulation code for the model. This approach enables the flexible construction of code generation modules that can support complex sets of formulas. We evaluate the relationship between models and their calculation accuracies by simulating complex biological models using various ODE solving schemes. For the FHN model simulation, results showed good qualitative and quantitative correspondence with the theoretical predictions, while results for the Luo-Rudy 1991 model showed that only first-order precision was achieved. In addition, running the generated code in parallel on a GPU made it possible to speed up the calculation by a factor of 50. The CellML Compiler source code is available for download at http://sourceforge.net/projects/cellmlcompiler. PMID:23083065
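A schematic Python analogue of the two-stage idea (not the CellML Compiler's actual output): stage one binds a solving scheme to a model's right-hand side, producing the t -> t + dt update map; stage two is the generated simulation loop. Forward Euler and the FitzHugh-Nagumo (FHN) parameters below are illustrative choices.

    # FitzHugh-Nagumo right-hand side; parameter values are illustrative.
    def fhn_rhs(v, w, a=0.7, b=0.8, eps=0.08, current=0.5):
        dv = v - v ** 3 / 3 - w + current
        dw = eps * (v + a - b * w)
        return dv, dw

    def make_euler_step(rhs, dt):
        # Stage-one analogue: bind a solving scheme (forward Euler) to the
        # model, producing the map from the state at t to the state at t + dt.
        def step(v, w):
            dv, dw = rhs(v, w)
            return v + dt * dv, w + dt * dw
        return step

    # Stage-two analogue: the generated simulation loop.
    step = make_euler_step(fhn_rhs, dt=0.01)
    v, w = -1.0, 1.0
    for _ in range(50000):
        v, w = step(v, w)
    print(round(v, 3), round(w, 3))      # state after t = 500 time units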
Adaptive Wavelet Coding Applied in a Wireless Control System.
Gama, Felipe O S; Silveira, Luiz F Q; Salazar, Andrés O
2017-12-13
Wireless control systems can sense, control and act on the information exchanged between the wireless sensor nodes in a control loop. However, the exchanged information becomes susceptible to the degenerative effects produced by the multipath propagation. In order to minimize the destructive effects characteristic of wireless channels, several techniques have been investigated recently. Among them, wavelet coding is a good alternative for wireless communications for its robustness to the effects of multipath and its low computational complexity. This work proposes an adaptive wavelet coding whose parameters of code rate and signal constellation can vary according to the fading level and evaluates the use of this transmission system in a control loop implemented by wireless sensor nodes. The performance of the adaptive system was evaluated in terms of bit error rate (BER) versus E_b/N_0 and spectral efficiency, considering a time-varying channel with flat Rayleigh fading, and in terms of processing overhead on a control system with wireless communication. The results obtained through computational simulations and experimental tests show performance gains obtained by insertion of the adaptive wavelet coding in a control loop with nodes interconnected by wireless link. These results enable the use of this technique in a wireless link control loop.
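The adaptation logic, choosing the code rate and constellation from the estimated fading level, can be sketched with a simple threshold table (hypothetical thresholds and mode set; a real system would derive them from its measured BER-versus-E_b/N_0 curves):

    # Hypothetical adaptation table: estimated-SNR thresholds (dB) mapped to
    # (wavelet code rate, constellation) pairs.
    MODES = [
        (20.0, (1.0, "16-QAM")),          # good channel: high rate
        (10.0, (0.5, "QPSK")),
        (float("-inf"), (0.25, "BPSK")),  # deep fade: most robust mode
    ]

    def select_mode(snr_db):
        for threshold, mode in MODES:
            if snr_db >= threshold:
                return mode

    print(select_mode(23.1))   # (1.0, '16-QAM')
    print(select_mode(4.2))    # (0.25, 'BPSK')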
Talking about Code: Integrating Pedagogical Code Reviews into Early Computing Courses
ERIC Educational Resources Information Center
Hundhausen, Christopher D.; Agrawal, Anukrati; Agarwal, Pawan
2013-01-01
Given the increasing importance of soft skills in the computing profession, there is good reason to provide students with more opportunities to learn and practice those skills in undergraduate computing courses. Toward that end, we have developed an active learning approach for computing education called the "Pedagogical Code Review"…
Code Parallelization with CAPO: A User Manual
NASA Technical Reports Server (NTRS)
Jin, Hao-Qiang; Frumkin, Michael; Yan, Jerry; Biegel, Bryan (Technical Monitor)
2001-01-01
A software tool has been developed to assist in the parallelization of scientific codes. This tool, CAPO, extends an existing parallelization toolkit, CAPTools, developed at the University of Greenwich, to generate OpenMP parallel codes for shared memory architectures. It is an interactive toolkit that transforms a serial Fortran application code into an equivalent parallel version of the software in a small fraction of the time normally required for a manual parallelization. We first discuss the way in which loop types are categorized and how efficient OpenMP directives can be defined and inserted into the existing code using in-depth interprocedural analysis. The use of the toolkit on a number of application codes, ranging from benchmarks to real-world applications, is presented. This demonstrates the great potential of using the toolkit to quickly parallelize serial programs, as well as the good performance achievable on a large number of processors. The second part of the document gives references for the parameters and the graphic user interface implemented in the toolkit. Finally, a set of tutorials is included for hands-on experience with this toolkit.
Gilmore-Bykovskyi, Andrea L.
2015-01-01
Mealtime behavioral symptoms are distressing and frequently interrupt eating for the individual experiencing them and others in the environment. In order to enable identification of potential antecedents to mealtime behavioral symptoms, a computer-assisted coding scheme was developed to measure caregiver person-centeredness and behavioral symptoms for nursing home residents with dementia during mealtime interactions. The purpose of this pilot study was to determine the acceptability and feasibility of procedures for video-capturing naturally-occurring mealtime interactions between caregivers and residents with dementia, to assess the feasibility, ease of use, and inter-observer reliability of the coding scheme, and to explore the clinical utility of the coding scheme. Trained observers coded 22 observations. Data collection procedures were feasible and acceptable to caregivers, residents and their legally authorized representatives. Overall, the coding scheme proved to be feasible, easy to execute and yielded good to very good inter-observer agreement following observer re-training. The coding scheme captured clinically relevant, modifiable antecedents to mealtime behavioral symptoms, but would be enhanced by the inclusion of measures for resident engagement and consolidation of items for measuring caregiver person-centeredness that co-occurred and were difficult for observers to distinguish. PMID:25784080
Recognition of an obstacle in a flow using artificial neural networks.
Carrillo, Mauricio; Que, Ulices; González, José A; López, Carlos
2017-08-01
In this work a series of artificial neural networks (ANNs) has been developed with the capacity to estimate the size and location of an obstacle obstructing the flow in a pipe. The ANNs learn the size and location of the obstacle by reading the profiles of the dynamic pressure q or the x component of the velocity v_{x} of the fluid at a certain distance from the obstacle. Data to train the ANNs were generated using numerical simulations with a two-dimensional lattice Boltzmann code. We analyzed various cases, varying both the diameter and the position of the obstacle on the y axis, and obtained good estimates, as measured by the R^{2} coefficient, for the cases under study. Although the ANN showed problems with the classification of very small obstacles, the general results show a very good capacity for prediction.
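A minimal sketch of the regression setup (not the authors' networks): an MLP maps a sampled profile to the obstacle's diameter and y position. A synthetic profile generator replaces the lattice Boltzmann data here so the example is self-contained.

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    # Synthetic stand-in for the lattice Boltzmann training data: a velocity
    # profile with a deficit whose depth and position track the obstacle.
    def profile(diameter, y_pos, n=64):
        y = np.linspace(-1.0, 1.0, n)
        return 1.0 - diameter * np.exp(-((y - y_pos) / (diameter + 0.1)) ** 2)

    rng = np.random.default_rng(0)
    targets = rng.uniform([0.1, -0.5], [0.5, 0.5], size=(500, 2))
    X = np.array([profile(d, y) for d, y in targets])

    net = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=3000,
                       random_state=0)
    net.fit(X, targets)                  # learn (diameter, y) from profiles
    print(net.predict([profile(0.3, 0.2)]).round(2))  # close to [0.3, 0.2]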
Experimental and computational data from a small rocket exhaust diffuser
NASA Astrophysics Data System (ADS)
Stephens, Samuel E.
1993-06-01
The Diagnostics Testbed Facility (DTF) at the NASA Stennis Space Center in Mississippi is a versatile facility that is used primarily to aid in the development of nonintrusive diagnostics for liquid rocket engine testing. The DTF consists of a fixed, 1200 lbf thrust, pressure fed, liquid oxygen/gaseous hydrogen rocket engine, and associated support systems. An exhaust diffuser has been fabricated and installed to provide subatmospheric pressures at the exit of the engine. The diffuser aerodynamic design was calculated prior to fabrication using the PARC Navier-Stokes computational fluid dynamics code. The diffuser was then fabricated and tested at the DTF. Experimental data from these tests were acquired to determine the operational characteristics of the system and to correlate the actual and predicted flow fields. The results show that a good engineering approximation of overall diffuser performance can be made using the PARC Navier-Stokes code and a simplified geometry. Correlations between actual and predicted cell pressure and initial plume expansion in the diffuser are good; however, the wall pressure profiles do not correlate as well with the experimental data.
Computational Aerodynamic Simulations of a Spacecraft Cabin Ventilation Fan Design
NASA Technical Reports Server (NTRS)
Tweedt, Daniel L.
2010-01-01
Quieter working environments for astronauts are needed if future long-duration space exploration missions are to be safe and productive. Ventilation and payload cooling fans are known to be dominant sources of noise, with the International Space Station being a good case in point. To address this issue cost effectively, early attention to fan design, selection, and installation has been recommended, leading to an effort by NASA to examine the potential for small-fan noise reduction by improving fan aerodynamic design. As a preliminary part of that effort, the aerodynamics of a cabin ventilation fan designed by Hamilton Sundstrand has been simulated using computational fluid dynamics codes, and the computed solutions analyzed to quantify various aspects of the fan aerodynamics and performance. Four simulations were performed at the design rotational speed: two at the design flow rate and two at off-design flow rates. Following a brief discussion of the computational codes, various aerodynamic- and performance-related quantities derived from the computed flow fields are presented along with relevant flow field details. The results show that the computed fan performance is in generally good agreement with stated design goals.
Evaluation of the entropy consistent euler flux on 1D and 2D test problems
NASA Astrophysics Data System (ADS)
Roslan, Nur Khairunnisa Hanisah; Ismail, Farzad
2012-06-01
Most CFD simulations may yield good predictions of pressure and velocity when compared to experimental data. Unfortunately, these results will most likely not adhere to the second law of thermodynamics, hence compromising the authenticity of the predicted data. Currently, the test of a good CFD code is to check how much entropy is generated in a smooth flow and hope that the numerical entropy produced is of the correct sign when a shock is encountered. Herein, a shock-capturing code written in C++ based on a recent entropy consistent Euler flux is developed to simulate 1D and 2D flows. Unlike other finite volume schemes in commercial CFD codes, this entropy consistent (EC) flux function precisely satisfies the discrete second law of thermodynamics. The EC flux has an entropy-conserving part, preserving entropy for smooth flows, and a numerical diffusion part that produces the proper amount of entropy, consistent with the second law. Several numerical simulations with the entropy consistent flux have been tested on two-dimensional test cases. The first case is a Mach 3 flow over a forward-facing step. The second case is a flow over a NACA 0012 airfoil, while the third case is a hypersonic flow passing over a 2D cylinder. Local flow quantities such as velocity and pressure are analyzed and then compared mainly with the Roe flux. The results herein show that the EC flux does not capture the unphysical rarefaction shock, unlike the Roe flux, and does not easily succumb to the carbuncle phenomenon. In addition, the EC flux maintains good performance in cases where the Roe flux is known to be superior.
HART-II Acoustic Predictions using a Coupled CFD/CSD Method
NASA Technical Reports Server (NTRS)
Boyd, D. Douglas, Jr.
2009-01-01
This paper documents results to date from the Rotorcraft Acoustic Characterization and Mitigation activity under the NASA Subsonic Rotary Wing Project. The primary goal of this activity is to develop a NASA rotorcraft impulsive noise prediction capability which uses first-principles fluid dynamics and structural dynamics. During this effort, elastic blade motion and co-processing capabilities have been included in a recent version of the computational fluid dynamics (CFD) code. The CFD code is loosely coupled to a computational structural dynamics (CSD) code using new interface codes. The CFD/CSD coupled solution is then used to compute impulsive noise on a plane under the rotor using a Ffowcs Williams-Hawkings solver. This code system is then applied to a range of cases from the Higher Harmonic Aeroacoustic Rotor Test II (HART-II) experiment. For all cases presented, the full experimental configuration (i.e., rotor and wind tunnel sting mount) is used in the coupled CFD/CSD solutions. Results show good correlation between measured and predicted loading and loading time derivative at the only measured radial station. A contributing factor to the loading mean-value offset typically seen between measured and predicted data is examined. Impulsive noise predictions on the measured microphone plane under the rotor compare favorably with measured mid-frequency noise for all cases. Flow visualization of the BL and MN cases shows that the vortex structures generated in the prediction method are consistent with measurements. Future applications of the prediction method are discussed.
Aerodynamic and heat transfer analysis of the low aspect ratio turbine
NASA Astrophysics Data System (ADS)
Sharma, O. P.; Nguyen, P.; Ni, R. H.; Rhie, C. M.; White, J. A.
1987-06-01
The available two- and three-dimensional codes are used to estimate external heat loads and aerodynamic characteristics of a highly loaded turbine stage in order to demonstrate state-of-the-art methodologies in turbine design. Using data for a low aspect ratio turbine, it is found that a three-dimensional multistage Euler code gives good overall predictions for the turbine stage, yielding good estimates of the stage pressure ratio, mass flow, and exit gas angles. The nozzle vane loading distribution is well predicted by both the three-dimensional multistage Euler and three-dimensional Navier-Stokes codes. The vane airfoil surface Stanton number distributions, however, are underpredicted by both two- and three-dimensional boundary layer analyses.
Assessment of the TRACE Reactor Analysis Code Against Selected PANDA Transient Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zavisca, M.; Ghaderi, M.; Khatib-Rahbar, M.
2006-07-01
The TRACE (TRAC/RELAP Advanced Computational Engine) code is an advanced, best-estimate thermal-hydraulic program intended to simulate the transient behavior of light-water reactor systems, using a two-fluid (steam and water, with non-condensable gas), seven-equation representation of the conservation equations and flow-regime-dependent constitutive relations in a component-based model with one-, two-, or three-dimensional elements, as well as solid heat structures and logical elements for the control system. The U.S. Nuclear Regulatory Commission is currently supporting the development of the TRACE code and its assessment against a variety of experimental data pertinent to existing and evolutionary reactor designs. This paper presents the results of TRACE post-test predictions of the P-series of experiments (i.e., tests comprising the ISP-42 blind and open phases) conducted at the PANDA large-scale test facility in the 1990s. These results show reasonable agreement with the reported test results, indicating good performance of the code and the relevant underlying thermal-hydraulic and heat transfer models.
The Sensitivity of Coded Mask Telescopes
NASA Technical Reports Server (NTRS)
Skinner, Gerald K.
2008-01-01
Simple formulae are often used to estimate the sensitivity of coded mask X-ray or gamma-ray telescopes, but these are strictly only applicable if a number of basic assumptions are met. Complications arise, for example, if a grid structure is used to support the mask elements, if the detector spatial resolution is not good enough to completely resolve all the detail in the shadow of the mask, or if any of a number of other simplifying conditions are not fulfilled. We derive more general expressions for the Poisson-noise-limited sensitivity of astronomical telescopes using the coded mask technique, noting explicitly in what circumstances they are applicable. The emphasis is on using nomenclature and techniques that result in simple and revealing results. Where no convenient expression is available, a procedure is given which allows the calculation of the sensitivity. We consider certain aspects of the optimisation of the design of a coded mask telescope and show that when the detector spatial resolution and the mask-to-detector separation are fixed, the best source location accuracy is obtained when the mask elements are equal in size to the detector pixels.
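For orientation, the kind of simple formula the abstract refers to, here the commonly quoted Poisson-limited estimate for a 50%-open mask under the ideal assumptions listed above, looks like this (illustrative only; the paper's point is precisely the corrections needed when these assumptions fail):

    import math

    def snr_half_open(source_counts, background_counts):
        # source_counts: source photons detected through the open elements;
        # background_counts: total background counts over the detector.
        return source_counts / math.sqrt(source_counts + background_counts)

    print(round(snr_half_open(400.0, 10000.0), 2))  # about 3.9 sigma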
NASA Technical Reports Server (NTRS)
Flores, J.; Gundy, K.
1986-01-01
A fast diagonalized Beam-Warming algorithm is coupled with a zonal approach to solve the three-dimensional Euler/Navier-Stokes equations. The computer code, called Transonic Navier-Stokes (TNS), uses a total of four zones for wing configurations (and can be extended to complete aircraft configurations by adding zones). In the inner blocks near the wing surface, the thin-layer Navier-Stokes equations are solved, while in the outer two blocks the Euler equations are solved. The diagonal algorithm yields a speedup of as much as a factor of 40 over the original algorithm/zonal-method code. The TNS code, in addition, has the capability to model wind tunnel walls. Transonic viscous solutions are obtained on a 150,000-point mesh for a NACA 0012 wing. A three-order-of-magnitude drop in the L2-norm of the residual requires approximately 500 iterations, which takes about 45 min of CPU time on a Cray X-MP processor. Simulations are also conducted for a wing with a different geometry, called WING C. All cases show good agreement with experimental data.
Particle model of a cylindrical inductively coupled ion source
NASA Astrophysics Data System (ADS)
Ippolito, N. D.; Taccogna, F.; Minelli, P.; Cavenago, M.; Veltri, P.
2017-08-01
In spite of the wide use of RF sources, a complete understanding of the mechanisms regulating the RF coupling of the plasma is still lacking, so self-consistent simulations of the physics involved are highly desirable. For this reason we are developing a 2.5D fully kinetic Particle-In-Cell Monte-Carlo-Collision (PIC-MCC) model of a cylindrical ICP-RF source, keeping the time step of the simulation small enough to resolve the plasma frequency scale. The grid cell dimension is currently about seven times larger than the average Debye length, because of the large computational demand of the code; it will be scaled down in the next phase of development. The filling gas is xenon, in order to minimize the time spent in the MCC collision module during this first stage of development. The results presented here are preliminary, with the code already showing good robustness. The final goal will be the modeling of the NIO1 (Negative Ion Optimization phase 1) source, operating in Padua at Consorzio RFX.
[Series: Medical Applications of the PHITS Code (2): Acceleration by Parallel Computing].
Furuta, Takuya; Sato, Tatsuhiko
2015-01-01
Time-consuming Monte Carlo dose calculations have become feasible owing to the development of computer technology. However, the recent gains come from the emergence of multi-core high-performance computers, so parallel computing has become a key to achieving good performance in software programs. The Monte Carlo simulation code PHITS contains two parallel computing functions: distributed-memory parallelization using the message passing interface (MPI) protocol and shared-memory parallelization using open multi-processing (OpenMP) directives. Users can choose between the two functions according to their needs. This paper explains the two functions, with their advantages and disadvantages. Some test applications are also provided to show their performance on a typical multi-core high-performance workstation.
Do humans make good decisions?
Summerfield, Christopher; Tsetsos, Konstantinos
2014-01-01
Human performance on perceptual classification tasks approaches that of an ideal observer, but economic decisions are often inconsistent and intransitive, with preferences reversing according to the local context. We discuss the view that suboptimal choices may result from the efficient coding of decision-relevant information, a strategy that allows expected inputs to be processed with higher gain than unexpected inputs. Efficient coding leads to ‘robust’ decisions that depart from optimality but maximise the information transmitted by a limited-capacity system in a rapidly-changing world. We review recent work showing that when perceptual environments are variable or volatile, perceptual decisions exhibit the same suboptimal context-dependence as economic choices, and propose a general computational framework that accounts for findings across the two domains. PMID:25488076
Led, Santiago; Azpilicueta, Leire; Aguirre, Erik; de Espronceda, Miguel Martínez; Serrano, Luis; Falcone, Francisco
2013-01-01
In this work, a novel ambulatory ECG monitoring device developed in-house called HOLTIN is analyzed when operating in complex indoor scenarios. The HOLTIN system is described, from the technological platform level to its functional model. In addition, by using in-house 3D ray launching simulation code, the wireless channel behavior, which enables ubiquitous operation, is performed. The effect of human body presence is taken into account by a novel simplified model embedded within the 3D Ray Launching code. Simulation as well as measurement results are presented, showing good agreement. These results may aid in the adequate deployment of this novel device to automate conventional medical processes, increasing the coverage radius and optimizing energy consumption. PMID:23584122
Story Goodness in Adolescents with Autism Spectrum Disorder (ASD) and in Optimal Outcomes from ASD
ERIC Educational Resources Information Center
Canfield, Allison R.; Eigsti, Inge-Marie; de Marchena, Ashley; Fein, Deborah
2016-01-01
Purpose: This study examined narrative quality of adolescents with autism spectrum disorder (ASD) using a well-studied "story goodness" coding system. Method: Narrative samples were analyzed for distinct aspects of story goodness and rated by naïve readers on dimensions of story goodness, accuracy, cohesiveness, and oddness. Adolescents…
Error control techniques for satellite and space communications
NASA Technical Reports Server (NTRS)
Costello, Daniel J., Jr.
1993-01-01
The results included in the Ph.D. dissertation of Dr. Fu Quan Wang, who was supported by the grant as a Research Assistant from January 1989 through December 1992, are discussed. The sections contain a brief summary of the important aspects of this dissertation, which include: (1) erasure-free sequential decoding of trellis codes; (2) probabilistic construction of trellis codes; (3) construction of robustly good trellis codes; and (4) the separability of shaping and coding.
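As background for items (1)-(3), a trellis code's encoder is a small finite-state machine; the classic rate-1/2 convolutional encoder with octal generators (7, 5) is sketched below (a textbook example, not a code constructed in the dissertation):

    def conv_encode(bits, g1=0b111, g2=0b101):
        # Rate-1/2 convolutional encoder (constraint length 3, octal
        # generators 7 and 5): the canonical example of a code defined on a
        # trellis, the structure that sequential decoders search.
        state, out = 0, []
        for b in bits:
            reg = (b << 2) | state               # 3-bit register, newest bit first
            out += [bin(reg & g1).count("1") & 1,
                    bin(reg & g2).count("1") & 1]
            state = reg >> 1                     # shift: keep the two newest bits
        return out

    print(conv_encode([1, 0, 1, 1]))  # [1, 1, 1, 0, 0, 0, 0, 1]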
Programming (Tips) for Physicists & Engineers
Ozcan, Erkcan
2018-02-19
Programming for today's physicists and engineers. Work environment: today's astroparticle, accelerator experiments and information industry rely on large collaborations. Need more than ever: code sharing/reuse, code building--framework integration, documentation and good visualization, working remotely, not reinventing the wheel.
... coded AQI category ranging from 0, which is green or "good," up to 300, which is purple ... Each category represents a different level of health concern: Green ("good") means air pollution poses little or no risk. ...
Non-Ideal Detonation Properties of Ammonium Nitrate and Activated Carbon Mixtures
NASA Astrophysics Data System (ADS)
Miyake, Atsumi; Echigoya, Hiroshi; Kobayashi, Hidefumi; Ogawa, Terushige; Katoh, Katsumi; Kubota, Shiro; Wada, Yuji; Ogata, Yuji
To obtain a better understanding of the detonation properties of ammonium nitrate (AN) and activated carbon (AC) mixtures, steel tube tests with several diameters were carried out for various compositions of powdered AN and AC mixtures, and the influence of the charge diameter on the detonation velocity was investigated. The results showed that the detonation velocity increased with increasing charge diameter. The experimentally observed values were far below those predicted by the thermodynamic CHEETAH code, showing so-called non-ideal detonation. The detonation velocity of the stoichiometric composition extrapolated to infinite diameter showed good agreement with the theoretical value.
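The infinite-diameter extrapolation can be illustrated as follows: for non-ideal detonation, velocity is roughly linear in the reciprocal charge diameter, so the intercept of a straight-line fit of D against 1/d estimates the ideal value for comparison with CHEETAH. The data points below are hypothetical, not the paper's measurements (Python):

    import numpy as np

    # Hypothetical charge-diameter (mm) / detonation-velocity (km/s) pairs;
    # the paper's steel tube measurements would replace these.
    diameters = np.array([30.0, 40.0, 50.0, 60.0])
    velocities = np.array([2.9, 3.3, 3.55, 3.7])

    # D is roughly linear in 1/d for non-ideal detonation, so the intercept
    # of a fit of D against 1/d estimates the infinite-diameter velocity.
    slope, d_infinity = np.polyfit(1.0 / diameters, velocities, 1)
    print(round(d_infinity, 2), "km/s at infinite diameter")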
Coding for reliable satellite communications
NASA Technical Reports Server (NTRS)
Gaarder, N. T.; Lin, S.
1986-01-01
This research project was set up to study various kinds of coding techniques for error control in satellite and space communications for NASA Goddard Space Flight Center. During the project period, researchers investigated the following areas: (1) decoding of Reed-Solomon codes in terms of dual basis; (2) concatenated and cascaded error control coding schemes for satellite and space communications; (3) use of hybrid coding schemes (error correction and detection incorporated with retransmission) to improve system reliability and throughput in satellite communications; (4) good codes for simultaneous error correction and error detection, and (5) error control techniques for ring and star networks.
Zhang, S.; Yuen, D.A.; Zhu, A.; Song, S.; George, D.L.
2011-01-01
We parallelized the GeoClaw code on a one-level grid using OpenMP in March 2011 to meet the urgent need of simulating the near-shore tsunami waves from Tohoku 2011, and achieved over 75% of the potential speed-up on an eight-core Dell Precision T7500 workstation [1]. After submitting that work to SC11 - the International Conference for High Performance Computing, we obtained an unreleased OpenMP version of GeoClaw from David George, who developed the GeoClaw code as part of his Ph.D. thesis. In this paper, we show the complementary characteristics of the two approaches used in parallelizing GeoClaw and the speed-up obtained by combining the advantages of the two individual approaches with adaptive mesh refinement (AMR), demonstrating the capabilities of running GeoClaw efficiently on many-core systems. We also show a novel simulation of the Tohoku 2011 tsunami waves inundating the Sendai airport and the Fukushima Nuclear Power Plants, over which a finest grid distance of 20 meters is achieved through 4-level AMR. This simulation yields quite good predictions of the wave heights and travel times of the tsunami waves. © 2011 IEEE.
Application of unsteady aeroelastic analysis techniques on the national aerospace plane
NASA Technical Reports Server (NTRS)
Pototzky, Anthony S.; Spain, Charles V.; Soistmann, David L.; Noll, Thomas E.
1988-01-01
A presentation provided at the Fourth National Aerospace Plane Technology Symposium, held in Monterey, California, in February 1988, is discussed. The objective is to provide current results of ongoing investigations to develop a methodology for predicting the aerothermoelastic characteristics of NASP-type (hypersonic) flight vehicles. Several existing subsonic and supersonic unsteady aerodynamic codes applicable to the hypersonic class of flight vehicles and generally available to the aerospace industry are described. These codes were evaluated by comparing calculated results with measured wind-tunnel aeroelastic data. The agreement was quite good in the subsonic speed range but mixed in the supersonic range. In addition, a future endeavor to extend the aeroelastic analysis capability to hypersonic speeds is outlined. An investigation to identify the critical parameters affecting the aeroelastic characteristics of a hypersonic vehicle, to define and understand the various flutter mechanisms, and to develop trends for the important parameters using a simplified finite element model of the vehicle is summarized. This study showed the value of performing inexpensive and timely aeroelastic wind-tunnel tests, using simple to complex models representative of the NASP configurations and root boundary conditions, to expand the experimental data base required for code validation.
Reus, Astrid A; Reisinger, Kerstin; Downs, Thomas R; Carr, Gregory J; Zeller, Andreas; Corvi, Raffaella; Krul, Cyrille A M; Pfuhler, Stefan
2013-11-01
Reconstructed 3D human epidermal skin models are being used increasingly for safety testing of chemicals. Based on EpiDerm™ tissues, an assay was developed in which the tissues were topically exposed to test chemicals for 3 h, followed by cell isolation and assessment of DNA damage using the comet assay. Inter-laboratory reproducibility of the 3D skin comet assay was initially demonstrated using two model genotoxic carcinogens, methyl methane sulfonate (MMS) and 4-nitroquinoline-N-oxide, and the results showed good concordance among three different laboratories and with in vivo data. In Phase 2 of the project, intra- and inter-laboratory reproducibility was investigated with five coded compounds with different genotoxicity liability tested at three different laboratories. For the genotoxic carcinogens MMS and N-ethyl-N-nitrosourea, all laboratories reported a dose-related and statistically significant increase (P < 0.05) in DNA damage in every experiment. For the genotoxic carcinogen, 2,4-diaminotoluene, the overall result from all laboratories showed a smaller, but significant genotoxic response (P < 0.05). For cyclohexanone (CHN) (non-genotoxic in vitro and in vivo, and non-carcinogenic), an increase compared to the solvent control acetone was observed only in one laboratory. However, the response was not dose related and CHN was judged negative overall, as was p-nitrophenol (p-NP) (genotoxic in vitro but not in vivo and non-carcinogenic), which was the only compound showing clear cytotoxic effects. For p-NP, significant DNA damage generally occurred only at doses that were substantially cytotoxic (>30% cell loss), and the overall response was comparable in all laboratories despite some differences in doses tested. The results of the collaborative study for the coded compounds were generally reproducible among the laboratories involved, and intra-laboratory reproducibility was also good. These data indicate that the comet assay in EpiDerm™ skin models is a promising model for the safety assessment of compounds with a dermal route of exposure.
Code of Federal Regulations, 2011 CFR
2011-07-01
... Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) CHEMICAL... testing procedures shall follow recognized and generally accepted good engineering practices. The...' recommendations, industry standards or codes, good engineering practices, and prior operating experience. ...
Code of Federal Regulations, 2010 CFR
2010-07-01
... Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) CHEMICAL... testing procedures shall follow recognized and generally accepted good engineering practices. The...' recommendations, industry standards or codes, good engineering practices, and prior operating experience. ...
Code of Federal Regulations, 2012 CFR
2012-07-01
... Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) CHEMICAL... testing procedures shall follow recognized and generally accepted good engineering practices. The...' recommendations, industry standards or codes, good engineering practices, and prior operating experience. ...
Code of Federal Regulations, 2014 CFR
2014-07-01
... Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) CHEMICAL... testing procedures shall follow recognized and generally accepted good engineering practices. The...' recommendations, industry standards or codes, good engineering practices, and prior operating experience. ...
Code of Federal Regulations, 2013 CFR
2013-07-01
... Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) CHEMICAL... testing procedures shall follow recognized and generally accepted good engineering practices. The...' recommendations, industry standards or codes, good engineering practices, and prior operating experience. ...
A robust coding scheme for packet video
NASA Technical Reports Server (NTRS)
Chen, Y. C.; Sayood, Khalid; Nelson, D. J.
1991-01-01
We present a layered packet video coding algorithm based on a progressive transmission scheme. The algorithm provides good compression and can handle significant packet loss with graceful degradation in the reconstruction sequence. Simulation results for various conditions are presented.
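A minimal sketch of the layered idea (illustrative, not the paper's algorithm): each block is sent as a coarse base layer plus a refinement layer, so losing the enhancement packet degrades quality gracefully instead of catastrophically.

    import numpy as np

    def layered_encode(block, base_step=32, enh_step=4):
        # Coarse base layer plus a refinement of the quantization residual.
        base = np.round(block / base_step).astype(int)
        enh = np.round((block - base * base_step) / enh_step).astype(int)
        return base, enh

    def layered_decode(base, enh=None, base_step=32, enh_step=4):
        out = base * base_step
        if enh is not None:                  # enhancement packet arrived
            out = out + enh * enh_step
        return out

    rng = np.random.default_rng(0)
    block = rng.integers(0, 256, size=(8, 8))
    base, enh = layered_encode(block)

    full = layered_decode(base, enh)         # both layers received
    degraded = layered_decode(base)          # enhancement packet lost
    print(np.abs(block - full).max(), np.abs(block - degraded).max())  # 2 16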
A robust coding scheme for packet video
NASA Technical Reports Server (NTRS)
Chen, Yun-Chung; Sayood, Khalid; Nelson, Don J.
1992-01-01
A layered packet video coding algorithm based on a progressive transmission scheme is presented. The algorithm provides good compression and can handle significant packet loss with graceful degradation in the reconstruction sequence. Simulation results for various conditions are presented.
Binary weight distributions of some Reed-Solomon codes
NASA Technical Reports Server (NTRS)
Pollara, F.; Arnold, S.
1992-01-01
The binary weight distributions of the (7,5) and (15,9) Reed-Solomon (RS) codes and their duals are computed using the MacWilliams identities. Several mappings of symbols to bits are considered, and those offering the largest binary minimum distance are found. These results are then used to compute bounds on the soft-decoding performance of these codes in the presence of additive Gaussian noise. These bounds are useful for finding large binary block codes with good performance and for verifying the performance obtained by specific soft-decoding algorithms presently under development.
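The MacWilliams transform itself is short enough to show directly; the sketch below (Python) recovers the dual weight distribution from that of the (7,4) binary Hamming code as a check. The RS-to-binary mappings studied in the paper are not reproduced here.

    from math import comb

    def macwilliams(A, n, k):
        # Dual weight distribution of an (n, k) binary code from its weight
        # distribution A, via the MacWilliams (Krawtchouk) transform.
        def kraw(j, i):
            return sum((-1) ** s * comb(i, s) * comb(n - i, j - s)
                       for s in range(j + 1))
        size = 2 ** k
        return [sum(A[i] * kraw(j, i) for i in range(n + 1)) // size
                for j in range(n + 1)]

    # Check on the (7,4) Hamming code: its dual is the (7,3) simplex code,
    # whose nonzero words all have weight 4.
    print(macwilliams([1, 0, 0, 7, 7, 0, 0, 1], n=7, k=4))
    # -> [1, 0, 0, 0, 7, 0, 0, 0]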
NASA Technical Reports Server (NTRS)
Gladden, H. J.; Proctor, M. P.
1985-01-01
A transient technique was used to measure heat transfer coefficients on stator airfoils in a high-temperature annular cascade at real engine conditions. The transient response of thin-film thermocouples on the airfoil surface to step changes in the gas stream temperature was used to determine these coefficients. In addition, Gardon gages and paired thermocouples were utilized to measure heat flux on the airfoil pressure surface at steady-state conditions. The tests were conducted at exit gas stream Reynolds numbers of one-half to 1.9 million based on true chord. The results from the transient technique show good agreement with the steady-state results in both trend and magnitude. In addition, a comparison is made with the STAN5 boundary layer code and shows good agreement in trend; however, the magnitude of the experimental data is consistently higher than the analysis.
Reading Difficulties in Adult Deaf Readers of French: Phonological Codes, Not Guilty!
ERIC Educational Resources Information Center
Belanger, Nathalie N.; Baum, Shari R.; Mayberry, Rachel I.
2012-01-01
Deaf people often achieve low levels of reading skills. The hypothesis that the use of phonological codes is associated with good reading skills in deaf readers is not yet fully supported in the literature. We investigated skilled and less skilled adult deaf readers' use of orthographic and phonological codes in reading. Experiment 1 used a masked…
NASA Astrophysics Data System (ADS)
Insulander Björk, Klara; Kekkonen, Laura
2015-12-01
Thorium-plutonium Mixed OXide (Th-MOX) fuel is considered for use in light water reactors due to some inherent benefits over conventional fuel types in terms of neutronic properties. The good material properties of ThO2 also suggest benefits in terms of thermal-mechanical fuel performance, but the use of Th-MOX fuel for commercial power production demands that its thermal-mechanical behavior can be accurately predicted using a well-validated fuel performance code. Given the scant operational experience with Th-MOX fuel, no such code is available today. This article describes the first phase of the development of such a code, based on the well-established code FRAPCON 3.4, and in particular the correlations reviewed and chosen for the fuel material properties. The results of fuel temperature calculations with the code in its current state of development are shown and compared with data from a Th-MOX test irradiation campaign which is underway in the Halden research reactor. The results are good for fresh fuel, whereas experimental complications make it difficult to judge the adequacy of the code for simulations of irradiated fuel.
NASA Astrophysics Data System (ADS)
Pei, C.; Bieber, J. W.; Burger, R. A.; Clem, J.
2010-12-01
We present a detailed description of our newly developed stochastic approach for solving Parker's transport equation, which we believe is the first attempt to solve it with time dependence in 3-D, evolving from our 3-D steady state stochastic approach. Our formulation of this method is general and is valid for any type of heliospheric magnetic field, although we choose the standard Parker field as an example to illustrate the steps to calculate the transport of galactic cosmic rays. Our 3-D stochastic method is different from other stochastic approaches in the literature in several ways. For example, we employ spherical coordinates to integrate directly, which makes the code much more efficient by reducing coordinate transformations. What is more, the equivalence between our stochastic differential equations and Parker's transport equation is guaranteed by Ito's theorem in contrast to some other approaches. We generalize the technique for calculating particle flux based on the pseudoparticle trajectories for steady state solutions and for time-dependent solutions in 3-D. To validate our code, first we show that good agreement exists between solutions obtained by our steady state stochastic method and a traditional finite difference method. Then we show that good agreement also exists for our time-dependent method for an idealized and simplified heliosphere which has a Parker magnetic field and a simple initial condition for two different inner boundary conditions.
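The backbone of any such stochastic approach is integrating pseudoparticle trajectories of a stochastic differential equation. A generic Euler-Maruyama sketch (Python) with a toy 1D drift-diffusion model is shown below; the actual solver works in 3D spherical coordinates with Parker's full drift and diffusion terms.

    import numpy as np

    def euler_maruyama(drift, diffusion, x0, dt, n_steps, rng):
        # Integrate dX = a(X) dt + b(X) dW along one pseudoparticle path;
        # stochastic transport solvers average over many such paths.
        x = np.array(x0, dtype=float)
        for _ in range(n_steps):
            dw = rng.normal(0.0, np.sqrt(dt), size=x.shape)
            x = x + drift(x) * dt + diffusion(x) * dw
        return x

    # Toy 1D drift-diffusion stand-in for radial transport.
    rng = np.random.default_rng(2)
    finals = [euler_maruyama(lambda x: 400.0 * np.ones_like(x),
                             lambda x: 30.0, [1.0], 1e-4, 1000, rng)[0]
              for _ in range(100)]
    print(np.mean(finals))               # close to 1 + 400 * 0.1 = 41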
Hu, Junjie; Liu, Fei; Ju, Huangxian
2015-04-21
A peptide-encoded microplate was proposed for MALDI-TOF mass spectrometric (MS) analysis of protease activity. The peptide codes were designed to contain a coding region and a protease substrate for enzymatic cleavage, and an internal standard method was proposed for MS quantitation of the cleavage products of these peptide codes. Upon cleavage in the presence of target proteases, the coding regions were released from the microplate and directly quantitated using corresponding peptides with a one-amino-acid difference as internal standards. The coding region could be used as a unique "Protease ID" for the identification of the corresponding protease, and the amount of the cleavage product was used for protease activity analysis. Using trypsin and chymotrypsin as model proteases to verify the multiplex protease assay, the designed "Trypsin ID" and "Chymotrypsin ID" occurred at m/z 761.6 and 711.6, respectively. The logarithm of the intensity ratio of "Protease ID" to internal standard was proportional to trypsin and chymotrypsin concentration in ranges from 5.0 to 500 and 10 to 500 nM, respectively. The detection limits for trypsin and chymotrypsin were 2.3 and 5.2 nM, respectively. The peptide-encoded microplate showed good selectivity. This proposed method provides a powerful tool for convenient identification and activity analysis of multiplex proteases.
An Energy-Efficient Compressive Image Coding for Green Internet of Things (IoT).
Li, Ran; Duan, Xiaomeng; Li, Xu; He, Wei; Li, Yanling
2018-04-17
Aimed at the low energy consumption required by the Green Internet of Things (IoT), this paper presents an energy-efficient compressive image coding scheme, which provides a compressive encoder and a real-time decoder according to Compressive Sensing (CS) theory. The compressive encoder adaptively measures each image block based on the block-based gradient field, which models the distribution of block sparsity, and the real-time decoder linearly reconstructs each image block through a projection matrix, which is learned by the Minimum Mean Square Error (MMSE) criterion. Both the encoder and decoder have low computational complexity, so that they consume only a small amount of energy. Experimental results show that the proposed scheme not only has low encoding and decoding complexity compared with traditional methods, but also provides good objective and subjective reconstruction quality. In particular, it presents better time-distortion performance than JPEG. Therefore, the proposed compressive image coding is a potential energy-efficient scheme for the Green IoT.
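A minimal sketch of the measure-then-linearly-decode pipeline (illustrative: synthetic compressible blocks, a Gaussian measurement matrix, and a least-squares fit standing in for the paper's MMSE-trained projection):

    import numpy as np

    rng = np.random.default_rng(0)
    n, m, n_train = 64, 16, 5000         # 8x8 blocks, measurement rate 0.25

    # Compressible training blocks: energy concentrated in a few coefficients
    # of a fixed basis Q, a stand-in for natural-image block statistics.
    Q = np.linalg.qr(rng.normal(size=(n, n)))[0]
    decay = np.exp(-0.35 * np.arange(n))
    X = Q @ (rng.normal(size=(n, n_train)) * decay[:, None])  # columns = blocks

    Phi = rng.normal(size=(m, n)) / np.sqrt(m)  # block measurement matrix
    Y = Phi @ X                                 # compressive measurements

    # Linear decoder fitted on (Y, X): P = argmin ||X - P Y||_F^2, the
    # least-squares analogue of the MMSE-learned projection matrix;
    # decoding a block is then a single matrix-vector product.
    P = X @ Y.T @ np.linalg.inv(Y @ Y.T)

    x = Q @ (rng.normal(size=n) * decay)        # fresh test block
    x_hat = P @ (Phi @ x)
    print(round(float(np.linalg.norm(x - x_hat) / np.linalg.norm(x)), 3))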
Modelling of an Orthovoltage X-ray Therapy Unit with the EGSnrc Monte Carlo Package
NASA Astrophysics Data System (ADS)
Knöös, Tommy; Rosenschöld, Per Munck Af; Wieslander, Elinore
2007-06-01
Simulations of an orthovoltage x-ray machine have been performed with the EGSnrc code package. The BEAMnrc code was used to transport electrons, produce x-ray photons in the target, and transport those photons through the treatment machine down to the exit level of the applicator. Further transport in water or CT-based phantoms was handled by the DOSXYZnrc code. Phase-space files were scored with BEAMnrc and analysed for the energy spectra at the end of the applicator. Tuning of the simulation parameters was based on the half-value layer of the beams in either Al or Cu. Calculated depth-dose and profile curves have been compared against measurements and show good agreement except at shallow depths. The MC model tested in this study can be used for various dosimetric studies as well as for generating a library of typical treatment cases that can serve both as educational material and as guidance in clinical practice.
Unfolding the neutron spectrum of a NE213 scintillator using artificial neural networks.
Sharghi Ido, A; Bonyadi, M R; Etaati, G R; Shahriari, M
2009-10-01
Artificial neural network technology has been applied to unfold neutron spectra from the pulse-height distribution measured with an NE213 liquid scintillator. Both single-layer and multi-layer perceptron models were implemented to unfold the spectrum of an Am-Be neutron source. The activation function and the connectivity of the neurons were investigated, and the results were analyzed in terms of network performance. The simulation results show that the network using the satlins transfer function has the best performance; in addition, omitting the bias connection of the neurons improves the performance of the network. The SCINFUL code was used to generate the response functions for the training phase. Finally, the results of the neural network simulation were compared with those of the FORIST unfolding code for both (241)Am-Be and (252)Cf neutron sources; the neural network results are in good agreement with the FORIST code.
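A minimal sketch of the network type described above: a two-layer perceptron without bias connections using a symmetric saturating linear (satlins) transfer function. Layer sizes and the random weights are illustrative placeholders; in the study the weights would be trained on SCINFUL-generated response functions.

```python
import numpy as np

def satlins(x):
    """Symmetric saturating linear transfer function (as in MATLAB's satlins):
    identity on [-1, 1], clipped outside."""
    return np.clip(x, -1.0, 1.0)

def unfold(pulse_height, W1, W2):
    """Forward pass of a two-layer perceptron with no bias connections,
    mapping a measured pulse-height distribution to a neutron spectrum."""
    h = satlins(W1 @ pulse_height)
    return satlins(W2 @ h)

rng = np.random.default_rng(2)
n_channels, n_hidden, n_bins = 128, 32, 64    # illustrative sizes
W1 = rng.normal(scale=0.1, size=(n_hidden, n_channels))
W2 = rng.normal(scale=0.1, size=(n_bins, n_hidden))
spectrum = unfold(rng.random(n_channels), W1, W2)
```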
NASA Technical Reports Server (NTRS)
Gronoff, Guillaume; Norman, Ryan B.; Mertens, Christopher J.
2014-01-01
The ability to evaluate the cosmic ray environment at Mars is of interest for future manned exploration. To support exploration, tools must be developed to accurately assess the radiation environment in both free space and on planetary surfaces. The primary tool NASA uses to quantify radiation exposure behind shielding materials is the space radiation transport code HZETRN. In order to build confidence in HZETRN, code benchmarking against Monte Carlo radiation transport codes is often used. This work compares dose calculations at Mars by HZETRN and the Geant4 application Planetocosmics. The dose at ground level and the energy deposited in the atmosphere by galactic cosmic ray protons and alpha particles have been calculated for the Curiosity landing conditions. In addition, this work considers Solar Energetic Particle events, allowing for the comparison of varying input radiation environments. The results for protons and alpha particles show very good agreement between HZETRN and Planetocosmics.
GPU accelerated manifold correction method for spinning compact binaries
NASA Astrophysics Data System (ADS)
Ran, Chong-xi; Liu, Song; Zhong, Shuang-ying
2018-04-01
A graphics processing unit (GPU) acceleration of the manifold correction algorithm, based on the compute unified device architecture (CUDA), is designed to simulate the dynamic evolution of the Post-Newtonian (PN) Hamiltonian formulation of spinning compact binaries. The feasibility and efficiency of parallel computation on the GPU have been confirmed by various numerical experiments. The numerical comparisons show that the accuracy of the manifold correction method executed on the GPU agrees well with that of the CPU-based code. The speedup attainable on the GPU can be increased enormously through shared-memory and register optimization techniques without additional hardware cost; the speedup is nearly 13 times that of the CPU code for a phase-space scan comprising 314 × 314 orbits. In addition, the GPU-accelerated manifold correction method is used to study numerically how the dynamics are affected by the spin-induced quadrupole-monopole interaction in black hole binary systems.
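As a rough illustration of what a manifold correction does — using a generic single-scaling energy correction on a Kepler-like toy Hamiltonian, not the paper's post-Newtonian spinning-binary formulation or its CUDA implementation — each integration step below is followed by a velocity rescaling that pulls the state back onto the initial energy surface.

```python
import numpy as np

def kinetic(v):
    return 0.5 * np.dot(v, v)

def potential(x):
    return -1.0 / np.linalg.norm(x)       # Kepler-like stand-in potential

def corrected_step(x, v, dt, E0):
    """One leapfrog step followed by a single-scaling manifold correction:
    rescale v by s so that T(s*v) + U(x) = E0 holds exactly."""
    a = lambda q: -q / np.linalg.norm(q)**3
    v_half = v + 0.5 * dt * a(x)
    x_new = x + dt * v_half
    v_new = v_half + 0.5 * dt * a(x_new)
    s = np.sqrt(max(E0 - potential(x_new), 0.0) / kinetic(v_new))
    return x_new, s * v_new

x, v = np.array([1.0, 0.0]), np.array([0.0, 1.1])
E0 = kinetic(v) + potential(x)            # conserved quantity to enforce
for _ in range(1000):
    x, v = corrected_step(x, v, 1e-3, E0)
```

The scan over many initial orbits mentioned above parallelizes naturally, since each orbit evolves independently; that is what the GPU exploits.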
Low-complexity transcoding algorithm from H.264/AVC to SVC using data mining
NASA Astrophysics Data System (ADS)
Garrido-Cantos, Rosario; De Cock, Jan; Martínez, Jose Luis; Van Leuven, Sebastian; Cuenca, Pedro; Garrido, Antonio
2013-12-01
Nowadays, networks and terminals with diverse bandwidths and capabilities coexist. To ensure a good quality of experience, this diverse environment demands adaptability of the video stream. In general, video content is compressed to save storage capacity and to reduce the bandwidth required for transmission. If these video streams were encoded with scalable video coding schemes, they would be able to adapt to heterogeneous networks and a wide range of terminals. Since the majority of multimedia content is compressed using H.264/AVC, it cannot benefit from that scalability. This paper proposes a low-complexity algorithm to convert an H.264/AVC bitstream without scalability into scalable bitstreams with temporal scalability, in the baseline and main profiles, by accelerating the mode-decision task of the scalable video coding encoding stage using machine learning tools. The results show that when our technique is applied, complexity is reduced by 87% while coding efficiency is maintained.
Concurrent design of an RTP chamber and advanced control system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Spence, P.; Schaper, C.; Kermani, A.
1995-12-31
A concurrent-engineering approach is applied to the development of an axisymmetric rapid-thermal-processing (RTP) reactor and its associated temperature controller. Using a detailed finite-element thermal model as a surrogate for actual hardware, the authors have developed and tested a multi-input multi-output (MIMO) controller. Closed-loop simulations are performed by linking the control algorithm with the finite-element code. Simulations show that good temperature uniformity is maintained on the wafer during both steady and transient conditions. A numerical study shows the effect of ramp rate, feedback gain, sensor placement, and wafer-emissivity patterns on system performance.
Aerodynamic simulation on massively parallel systems
NASA Technical Reports Server (NTRS)
Haeuser, Jochem; Simon, Horst D.
1992-01-01
This paper briefly addresses the computational requirements for the analysis of complete configurations of aircraft and spacecraft currently under design for advanced transportation in commercial applications as well as in space flight. The discussion clearly shows that massively parallel systems are the only alternative that is cost effective and can also provide the TeraFlops needed to satisfy the narrow design margins of modern vehicles. It is assumed that the solution of the governing physical equations, i.e., the Navier-Stokes equations, possibly complemented by chemistry and turbulence models, is done on multiblock grids. This technique lies between the fully structured approach of classical boundary-fitted grids and fully unstructured tetrahedral grids. A fully structured grid best represents the flow physics, while an unstructured grid gives the best geometrical flexibility. The multiblock grid employed is structured within a block but completely unstructured at the block level. While a completely unstructured grid is not straightforward to parallelize, the multiblock grid is inherently parallel, in particular for multiple instruction multiple datastream (MIMD) machines. Guidelines are provided for setting up or modifying an existing sequential code so that direct parallelization on a massively parallel system is possible. Results are presented for three parallel systems, namely the Intel hypercube, the Ncube hypercube, and the FPS 500 system; some preliminary results for an 8K CM2 machine are also mentioned. The code run is the two-dimensional grid generation module of Grid, a general two- and three-dimensional grid generation code for complex geometries, in which a system of nonlinear Poisson equations is solved. This code is also a good test case for complex fluid dynamics codes, since the same data structures are used. All systems provided good speedups, but message-passing MIMD systems seem to be best suited for large multiblock applications.
Modeling radiation belt dynamics using a 3-D layer method code
NASA Astrophysics Data System (ADS)
Wang, C.; Ma, Q.; Tao, X.; Zhang, Y.; Teng, S.; Albert, J. M.; Chan, A. A.; Li, W.; Ni, B.; Lu, Q.; Wang, S.
2017-08-01
A new 3-D diffusion code using a recently published layer method has been developed to analyze radiation belt electron dynamics. The code guarantees the positivity of the solution even when mixed diffusion terms are included. Unlike most of the previous codes, our 3-D code is developed directly in equatorial pitch angle (α0), momentum (p), and L shell coordinates; this eliminates the need to transform back and forth between (α0,p) coordinates and adiabatic invariant coordinates. Using (α0,p,L) is also convenient for direct comparison with satellite data. The new code has been validated by various numerical tests, and we apply the 3-D code to model the rapid electron flux enhancement following the geomagnetic storm on 17 March 2013, which is one of the Geospace Environment Modeling Focus Group challenge events. An event-specific global chorus wave model, an AL-dependent statistical plasmaspheric hiss wave model, and a recently published radial diffusion coefficient formula from Time History of Events and Macroscale Interactions during Substorms (THEMIS) statistics are used. The simulation results show good agreement with satellite observations, in general, supporting the scenario that the rapid enhancement of radiation belt electron flux for this event results from an increased level of the seed population by radial diffusion, with subsequent acceleration by chorus waves. Our results prove that the layer method can be readily used to model global radiation belt dynamics in three dimensions.
Counterpropagating Radiative Shock Experiments on the Orion Laser
DOE Office of Scientific and Technical Information (OSTI.GOV)
Suzuki-Vidal, F.; Clayson, T.; Stehlé, C.
We present new experiments to study the formation of radiative shocks and the interaction between two counterpropagating radiative shocks. The experiments are performed at the Orion laser facility, which is used to drive shocks in xenon inside large aspect ratio gas cells. The collision between the two shocks and their respective radiative precursors, combined with the formation of inherently three-dimensional shocks, provides a novel platform particularly suited for the benchmarking of numerical codes. The dynamics of the shocks before and after the collision are investigated using point-projection x-ray backlighting while, simultaneously, the electron density in the radiative precursor was measured via optical laser interferometry. Modeling of the experiments using the 2D radiation hydrodynamic codes nym and petra shows very good agreement with the experimental results.
Computation of H2/air reacting flowfields in drag-reduction external combustion
NASA Technical Reports Server (NTRS)
Lai, H. T.
1992-01-01
Numerical simulation and analysis are presented for the laminar reacting flowfield of air and hydrogen in external combustion, used to reduce base drag in hypersonic vehicles operating at transonic speeds. The flowfield consists of a transonic air stream at a Mach number of 1.26 with sonic transverse hydrogen injection along a row of 26 orifices. Self-sustained combustion is computed over an expansion ramp downstream of the injection and a flameholder, using the recently developed RPLUS code. Measured data are available only for surface pressure distributions and are used for validation of the code in practical 3D reacting flowfields. The pressure comparison shows generally good agreement, and the main effects of combustion are also qualitatively consistent with experiment.
Counterpropagating Radiative Shock Experiments on the Orion Laser.
Suzuki-Vidal, F; Clayson, T; Stehlé, C; Swadling, G F; Foster, J M; Skidmore, J; Graham, P; Burdiak, G C; Lebedev, S V; Chaulagain, U; Singh, R L; Gumbrell, E T; Patankar, S; Spindloe, C; Larour, J; Kozlova, M; Rodriguez, R; Gil, J M; Espinosa, G; Velarde, P; Danson, C
2017-08-04
We present new experiments to study the formation of radiative shocks and the interaction between two counterpropagating radiative shocks. The experiments are performed at the Orion laser facility, which is used to drive shocks in xenon inside large aspect ratio gas cells. The collision between the two shocks and their respective radiative precursors, combined with the formation of inherently three-dimensional shocks, provides a novel platform particularly suited for the benchmarking of numerical codes. The dynamics of the shocks before and after the collision are investigated using point-projection x-ray backlighting while, simultaneously, the electron density in the radiative precursor was measured via optical laser interferometry. Modeling of the experiments using the 2D radiation hydrodynamic codes nym and petra shows very good agreement with the experimental results.
NASA Technical Reports Server (NTRS)
Anderson, B. H.; Benson, T. J.
1983-01-01
A supersonic three-dimensional viscous forward-marching computer design code called PEPSIS is used to obtain a numerical solution of the three-dimensional problem of the interaction of a glancing sidewall oblique shock wave and a turbulent boundary layer. Very good results are obtained for a test case that was run to investigate the use of the wall-function boundary-condition approximation for a highly complex three-dimensional shock-boundary layer interaction. Two additional test cases (coarse mesh and medium mesh) are run to examine the question of near-wall resolution when no-slip boundary conditions are applied. A comparison with experimental data shows that the PEPSIS code gives excellent results in general and is practical for three-dimensional supersonic inlet calculations.
Calculations vs. measurements of remnant dose rates for SNS spent structures
NASA Astrophysics Data System (ADS)
Popova, I. I.; Gallmeier, F. X.; Trotter, S.; Dayton, M.
2018-06-01
Residual dose rate measurements were conducted on target vessel #13 and proton beam window #5 after their extraction from service locations. These measurements were used to verify the calculation methods for radionuclide inventory assessment that are typically performed for nuclear waste characterization and transportation of these structures. Neutronics analyses predicting residual dose rates were carried out using the transport code MCNPX and the transmutation code CINDER90. For the transport analyses, a complex and rigorous geometry model of the structures and their surroundings was applied. The neutronics analyses used the Bertini and CEM high-energy physics models for simulating particle interactions. The preliminary calculated results were analysed and compared to the measured dose rates, and overall they show good agreement, to within 40% on average.
New test techniques and analytical procedures for understanding the behavior of advanced propellers
NASA Technical Reports Server (NTRS)
Stefko, G. L.; Bober, L. J.; Neumann, H. E.
1983-01-01
Analytical procedures and experimental techniques were developed to improve the capability to design advanced high speed propellers. Some results from the propeller lifting line and lifting surface aerodynamic analysis codes are compared with propeller force data, probe data and laser velocimeter data. In general, the code comparisons with data indicate good qualitative agreement. A rotating propeller force balance demonstrated good accuracy and reduced test time by 50 percent. Results from three propeller flow visualization techniques are shown which illustrate some of the physical phenomena occurring on these propellers.
On Asymptotically Good Ramp Secret Sharing Schemes
NASA Astrophysics Data System (ADS)
Geil, Olav; Martin, Stefano; Martínez-Peñas, Umberto; Matsumoto, Ryutaroh; Ruano, Diego
Asymptotically good sequences of linear ramp secret sharing schemes have been intensively studied by Cramer et al. in terms of sequences of pairs of nested algebraic geometric codes. In those works the focus is on full privacy and full reconstruction. In this paper we analyze additional parameters describing the asymptotic behavior of partial information leakage and possibly also partial reconstruction giving a more complete picture of the access structure for sequences of linear ramp secret sharing schemes. Our study involves a detailed treatment of the (relative) generalized Hamming weights of the considered codes.
Thermodynamic equilibrium-air correlations for flowfield applications
NASA Technical Reports Server (NTRS)
Zoby, E. V.; Moss, J. N.
1981-01-01
Equilibrium-air thermodynamic correlations have been developed for flowfield calculation procedures. A comparison between postshock results computed with the correlation equations and detailed chemistry calculations shows very good agreement. The thermodynamic correlations are incorporated in an approximate inviscid flowfield code with a convective heating capability for the purpose of defining the thermodynamic environment through the shock layer. Heating rates computed by the approximate code agree well with a viscous-shock-layer method. In addition to presenting the thermodynamic correlations, the impact of several viscosity models on the convective heat transfer is demonstrated.
Zebra: An advanced PWR lattice code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cao, L.; Wu, H.; Zheng, Y.
2012-07-01
This paper presents an overview of ZEBRA, an advanced PWR lattice code developed at the NECP laboratory at Xi'an Jiaotong University. The multi-group cross-section library is generated from the ENDF/B-VII library by NJOY, and the 361-group SHEM structure is employed. The resonance calculation module is based on the subgroup method. The transport solver is the Auto-MOC code, a self-developed code based on the Method of Characteristics and customization of the AutoCAD software. The whole code is organized in a modular software structure. Numerical results from the validation of the code demonstrate that it has good precision and high efficiency. (authors)
Error Control Coding Techniques for Space and Satellite Communications
NASA Technical Reports Server (NTRS)
Lin, Shu
2000-01-01
This paper presents a concatenated turbo coding system in which a Reed-Solomon outer code is concatenated with a binary turbo inner code. In the proposed system, the outer code decoder and the inner turbo code decoder interact to achieve both good bit-error and frame-error performance. The outer code decoder helps the inner turbo code decoder to terminate its decoding iterations, while the inner turbo code decoder provides soft-output information to the outer code decoder to carry out reliability-based soft-decision decoding. In the case that the outer code decoding fails, the outer code decoder instructs the inner code decoder to continue its decoding iterations until the outer code decoding is successful or a preset maximum number of decoding iterations is reached. This interaction between the outer and inner code decoders reduces decoding delay. Also presented in the paper are an effective criterion for stopping the iteration process of the inner code decoder and a new reliability-based decoding algorithm for nonbinary codes.
An Interactive Concatenated Turbo Coding System
NASA Technical Reports Server (NTRS)
Liu, Ye; Tang, Heng; Lin, Shu; Fossorier, Marc
1999-01-01
This paper presents a concatenated turbo coding system in which a Reed-Solomon outer code is concatenated with a binary turbo inner code. In the proposed system, the outer code decoder and the inner turbo code decoder interact to achieve both good bit-error and frame-error performance. The outer code decoder helps the inner turbo code decoder to terminate its decoding iterations, while the inner turbo code decoder provides soft-output information to the outer code decoder to carry out reliability-based soft-decision decoding. In the case that the outer code decoding fails, the outer code decoder instructs the inner code decoder to continue its decoding iterations until the outer code decoding is successful or a preset maximum number of decoding iterations is reached. This interaction between the outer and inner code decoders reduces decoding delay. Also presented in the paper are an effective criterion for stopping the iteration process of the inner code decoder and a new reliability-based decoding algorithm for nonbinary codes.
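The decoder interaction described in both abstracts reduces to a few lines of control flow; every helper name below (init_turbo_state, turbo_iteration, rs_decode) is a hypothetical placeholder, not an API from the papers.

```python
# Hedged sketch of the outer/inner decoder interaction: the outer RS decode
# acts as the stopping rule for the inner turbo iterations.
def decode_concatenated(received, max_iters=16):
    state = init_turbo_state(received)        # hypothetical setup helper
    for _ in range(max_iters):
        soft_out = turbo_iteration(state)     # one inner turbo iteration
        ok, message = rs_decode(soft_out)     # reliability-based RS decoding
        if ok:
            return message                    # outer success stops early,
    return None                               # reducing decoding delay
```

The design point is that a successful outer decode is a far stronger stopping criterion than any inner-code convergence test, so on good frames the loop exits after very few iterations.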
Efficient Helicopter Aerodynamic and Aeroacoustic Predictions on Parallel Computers
NASA Technical Reports Server (NTRS)
Wissink, Andrew M.; Lyrintzis, Anastasios S.; Strawn, Roger C.; Oliker, Leonid; Biswas, Rupak
1996-01-01
This paper presents parallel implementations of two codes used in a combined CFD/Kirchhoff methodology to predict the aerodynamic and aeroacoustic properties of helicopters. The rotorcraft Navier-Stokes code, TURNS, computes the aerodynamic flowfield near the helicopter blades, and the Kirchhoff acoustics code computes the noise in the far field using the TURNS solution as input. The overall parallel strategy adds MPI message-passing calls to the existing serial codes to allow for communication between processors; as a result, the total code modifications required for parallel execution are relatively small. The biggest bottleneck in running the TURNS code in parallel comes from the LU-SGS algorithm that solves the implicit system of equations. We use a new hybrid domain decomposition implementation of LU-SGS to obtain good parallel performance on the SP-2. TURNS demonstrates excellent parallel speedups for quasi-steady and unsteady three-dimensional calculations of a helicopter blade in forward flight. The execution rate attained by the code on 114 processors is six times faster than the same cases run on one processor of the Cray C-90. The parallel Kirchhoff code also shows excellent parallel speedups and fast execution rates. As a performance demonstration, unsteady acoustic pressures were computed at 1886 far-field observer locations for a sample acoustics problem; the calculation requires over two hundred hours of CPU time on one C-90 processor but takes only a few hours on 80 processors of the SP-2. The resulting far-field acoustic field is analyzed with state-of-the-art audio and video rendering of the propagating acoustic signals.
NASA Astrophysics Data System (ADS)
Ghosh, Reetuparna; Badwar, Sylvia; Lawriniang, Bioletty; Jyrwa, Betylda; Naik, Haldhara; Naik, Yeshwant; Suryanarayana, Saraswatula Venkata; Ganesan, Srinivasan
2017-08-01
The 58Fe(p,n)58Co reaction cross-section within the Giant Dipole Resonance (GDR) region, i.e., from 3.38 to 19.63 MeV, was measured by the stacked-foil activation and off-line γ-ray spectrometric techniques using the BARC-TIFR Pelletron facility at Mumbai. The present data were compared with the existing literature data and found to be in good agreement. The 58Fe(p,n)58Co reaction cross-section as a function of proton energy was also calculated theoretically with the computer code TALYS-1.8 and found to be in good agreement, which demonstrates the validity of the TALYS-1.8 program.
Real-Time Pattern Recognition - An Industrial Example
NASA Astrophysics Data System (ADS)
Fitton, Gary M.
1981-11-01
Rapid advancements in cost-effective sensors and microcomputers are now making practical the on-line implementation of pattern-recognition-based systems for a variety of industrial applications requiring high processing speeds. One major application area for real-time pattern recognition is the sorting of packaged or cartoned goods at high speed for automated warehousing and returned-goods cataloging. While many OCR and bar-code readers are available to perform these functions, it is often impractical to use such codes (package too small, adverse esthetics, poor print quality), and an approach that recognizes an item by its graphic content alone is desirable. This paper describes a specific application within the tobacco industry: sorting returned cigarette goods by brand and size.
New quantum codes constructed from quaternary BCH codes
NASA Astrophysics Data System (ADS)
Xu, Gen; Li, Ruihu; Guo, Luobin; Ma, Yuena
2016-10-01
In this paper, we first study the construction of new quantum error-correcting codes (QECCs) from three classes of quaternary imprimitive BCH codes. As a result, the improved maximal designed distances of these narrow-sense imprimitive Hermitian dual-containing quaternary BCH codes are determined to be much larger than the results given by Aly et al. (IEEE Trans Inf Theory 53:1183-1188, 2007) for each code length. Thus, families of new QECCs are obtained, and the constructed QECCs have larger distance than those in the previous literature. Second, we apply a combinatorial construction to the imprimitive BCH codes and their primitive counterparts and construct many new linear quantum codes with good parameters, some of which have parameters exceeding the finite Gilbert-Varshamov bound for linear quantum codes.
Dual Coding of Frequency Modulation in the Ventral Cochlear Nucleus.
Paraouty, Nihaad; Stasiak, Arkadiusz; Lorenzi, Christian; Varnet, Léo; Winter, Ian M
2018-04-25
Frequency modulation (FM) is a common acoustic feature of natural sounds and is known to play a role in robust sound source recognition. Auditory neurons show precise stimulus-synchronized discharge patterns that may be used for the representation of low-rate FM. However, it remains unclear whether this representation is based on synchronization to slow temporal envelope (ENV) cues resulting from cochlear filtering or phase locking to faster temporal fine structure (TFS) cues. To investigate the plausibility of those encoding schemes, single units of the ventral cochlear nucleus of guinea pigs of either sex were recorded in response to sine FM tones centered at the unit's best frequency (BF). The results show that, in contrast to high-BF units, for modulation depths within the receptive field, low-BF units (<4 kHz) demonstrate good phase locking to TFS. For modulation depths extending beyond the receptive field, the discharge patterns follow the ENV and fluctuate at the modulation rate. The receptive field proved to be a good predictor of the ENV responses for most primary-like and chopper units. The current in vivo data also reveal a high level of diversity in responses across unit types. TFS cues are mainly conveyed by low-frequency and primary-like units and ENV cues by chopper and onset units. The diversity of responses exhibited by cochlear nucleus neurons provides a neural basis for a dual-coding scheme of FM in the brainstem based on both ENV and TFS cues. SIGNIFICANCE STATEMENT Natural sounds, including speech, convey informative temporal modulations in frequency. Understanding how the auditory system represents those frequency modulations (FM) has important implications as robust sound source recognition depends crucially on the reception of low-rate FM cues. Here, we recorded 115 single-unit responses from the ventral cochlear nucleus in response to FM and provide the first physiological evidence of a dual-coding mechanism of FM via synchronization to temporal envelope cues and phase locking to temporal fine structure cues. We also demonstrate a diversity of neural responses with different coding specializations. These results support the dual-coding scheme proposed by psychophysicists to account for FM sensitivity in humans and provide new insights on how this might be implemented in the early stages of the auditory pathway.
Direct simulations of chemically reacting turbulent mixing layers, part 2
NASA Technical Reports Server (NTRS)
Metcalfe, Ralph W.; Mcmurtry, Patrick A.; Jou, Wen-Huei; Riley, James J.; Givi, Peyman
1988-01-01
The results of direct numerical simulations of chemically reacting turbulent mixing layers are presented. This work extends earlier three-dimensional simulations of cold reacting flows to a more detailed study, together with the development, validation, and use of codes to simulate chemically reacting shear layers with heat release. Additional analysis of the earlier simulations showed good agreement with self-similarity theory and laboratory data. Simulations with a two-dimensional code including the effects of heat release showed that the rate of chemical product formation, the thickness of the mixing layer, and the amount of mass entrained into the layer all decrease with increasing rates of heat release. Subsequent three-dimensional simulations showed similar behavior, in agreement with laboratory observations. Baroclinic torques and thermal expansion in the mixing layer were found to produce changes in the flame vortex structure that act to diffuse the pairing vortices, resulting in a net reduction in vorticity. Previously unexplained anomalies observed in the mean velocity profiles of reacting jets and mixing layers were shown to result from vorticity generation by baroclinic torques.
Development of tools and techniques for momentum compression of fast rare isotopes
DOE Office of Scientific and Technical Information (OSTI.GOV)
David J. Morrissey; Bradley M. Sherrill; Oleg Tarasov
2010-11-21
As part of our past research and development work, we have created and developed the LISE++ simulation code [Tar04, Tar08]. The LISE++ package was significantly extended with the addition of a Monte Carlo option that includes calculating ion trajectories using a Taylor-series expansion up to fifth order, and with the implementation of the MOTER Monte Carlo code [Kow87] for ray tracing of ions into the suite of LISE++ codes. The MOTER code was rewritten from FORTRAN into C++ and ported to the MS-Windows operating system. Extensive work went into the creation of a user-friendly interface for the code. An example of the graphical user interface created for the MOTER code is shown in the left panel of Figure 1, and the results of a typical calculation of the trajectories of particles that pass through the A1900 fragment separator are shown in the right panel. The MOTER code is presently included as part of the LISE++ package for downloading without restriction by the worldwide community. LISE++ was extensively developed and generalized to apply to any projectile fragment separator during the early phase of this grant; in addition to the inclusion of the MOTER code, other important additions to the LISE++ code were made during FY08/FY09. LISE++ is distributed over the web (http://groups.nscl.msu.edu/lise) and is available without charge by anonymous download; thus, the number of individual users is not recorded. The number of hits on the servers that provide the LISE++ code is shown in Figure 3 for the last eight calendar years (left panel), along with the country of the IP address (right panel). The data show an increase in web activity with the release of the new version of the program during the grant period, and a worldwide impact. An important part of the proposed work, carried out during FY07, FY08, and FY09 by a graduate student in the MSU Physics program, was to benchmark the codes by comparing detailed measurements to the LISE++ predictions. A large data set was obtained for fission fragments from the reaction of 238U ions at 81 MeV/u in a 92 mg/cm2 beryllium target with the A1900 projectile fragment separator. The data were analyzed and form the bulk of a Ph.D. dissertation that is nearing completion. The rich data set provides a number of benchmarks for the improved LISE++ code, and only a few examples can be shown here. The primary information obtained from the measurements is the yield of the products as a function of mass, charge, and momentum. Examples of the momentum distributions of individually identified fragments can be seen in Figures 2 and 4, along with comparisons to the predicted distributions. The agreement is remarkably good and indicates the general validity of the model of the nuclear reactions producing these fragments and of the higher-order transmission calculations in the LISE++ code. The momentum distributions were integrated to provide the cross sections for the individual isotopes. As shown in Figure 5, there is good agreement with the model predictions, although the observed cross sections are a factor of five or so higher in this case. Other comparisons of measured production cross sections from abrasion-fission reactions have been published by our group working at the NSCL during this period [Fol09] and through our collaboration with Japanese researchers working at RIKEN with the BigRIPS separator [Ohn08, Ohn10].
The agreement of the model predictions with the data obtained with two different fragment separators is very good and indicates the usefulness of the new LISE++ code.
PIC codes for plasma accelerators on emerging computer architectures (GPUS, Multicore/Manycore CPUS)
NASA Astrophysics Data System (ADS)
Vincenti, Henri
2016-03-01
The advent of exascale computers will enable 3D simulations of new laser-plasma interaction regimes that were previously out of reach of current petascale computers. However, the paradigm used to write current PIC codes will have to change in order to fully exploit the potential of these new computing architectures. Indeed, achieving exascale computing facilities in the next decade will be a great challenge in terms of energy consumption and will imply hardware developments directly impacting the way we implement PIC codes. As data movement (from die to network) is by far the most energy-consuming part of an algorithm, future computers will tend to increase memory locality at the hardware level and reduce the energy cost of data movement by using more and more cores on each compute node ("fat nodes") running at reduced clock speed to allow for efficient cooling. To compensate for the frequency decrease, CPU vendors are making use of long SIMD instruction registers that can process multiple data with one arithmetic operation in one clock cycle; SIMD register length is expected to double every four years. GPUs also have a reduced clock speed per core and can process multiple instructions on multiple data (MIMD). At the software level, Particle-In-Cell (PIC) codes will thus have to achieve both good memory locality and vectorization (for multicore/manycore CPUs) to fully take advantage of these upcoming architectures. In this talk, we present the portable solutions we implemented in our high-performance skeleton PIC code PICSAR to achieve good memory locality and cache reuse as well as good vectorization on SIMD architectures. We also present the portable solutions used to parallelize the pseudo-spectral quasi-cylindrical code FBPIC on GPUs using the Numba Python compiler.
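For orientation, here is a toy contrast between a scalar and a vectorized particle push of the kind such PIC refactorings target. The NumPy version is only a stand-in for the compiler-level SIMD vectorization discussed above; the field and particle arrays are illustrative.

```python
import numpy as np

def push_scalar(x, v, E, q_m, dt):
    """One particle per loop iteration: hard for a compiler to map to SIMD."""
    for i in range(len(x)):
        v[i] += q_m * E[i] * dt
        x[i] += v[i] * dt

def push_vectorized(x, v, E, q_m, dt):
    """Whole-array operations over contiguous memory: the data layout that
    SIMD units (and GPUs) exploit, and that also improves cache reuse."""
    v += q_m * E * dt
    x += v * dt

n = 1_000_000
x, v, E = np.zeros(n), np.zeros(n), np.ones(n)
push_vectorized(x, v, E, q_m=-1.0, dt=1e-3)
```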
Parallel DSMC Solution of Three-Dimensional Flow Over a Finite Flat Plate
NASA Technical Reports Server (NTRS)
Nance, Robert P.; Wilmoth, Richard G.; Moon, Bongki; Hassan, H. A.; Saltz, Joel
1994-01-01
This paper describes a parallel implementation of the direct simulation Monte Carlo (DSMC) method. Runtime library support is used for scheduling and execution of communication between nodes, and domain decomposition is performed dynamically to maintain a good load balance. Performance tests are conducted using the code to evaluate various remapping and remapping-interval policies, and it is shown that a one-dimensional chain-partitioning method works best for the problems considered. The parallel code is then used to simulate the Mach 20 nitrogen flow over a finite-thickness flat plate. It is shown that the parallel algorithm produces results which compare well with experimental data. Moreover, it yields significantly faster execution times than the scalar code, as well as very good load-balance characteristics.
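A minimal sketch of the one-dimensional chain-partitioning idea, assuming a greedy prefix-sum split of per-cell workload into contiguous chunks; the cost array standing in for per-cell particle counts is hypothetical, and the paper's runtime-library remapping machinery is not modeled.

```python
import numpy as np

def chain_partition(costs, n_parts):
    """Greedy 1-D chain partitioning: split a row of cells into contiguous
    chunks of roughly equal accumulated cost."""
    target = costs.sum() / n_parts
    cuts, acc = [0], 0.0
    for i, c in enumerate(costs):
        acc += c
        if acc >= target and len(cuts) < n_parts:
            cuts.append(i + 1)    # close the current chunk after cell i
            acc = 0.0
    cuts.append(len(costs))
    return [(cuts[k], cuts[k + 1]) for k in range(len(cuts) - 1)]

# e.g. per-cell particle counts taken from the previous DSMC time step
counts = np.random.default_rng(3).integers(1, 100, size=1000)
chunks = chain_partition(counts, n_parts=8)
```

Because chunks stay contiguous, only neighbouring processors exchange cells when the partition is recomputed, which keeps remapping cheap.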
Facilities for music education and their acoustical design.
Koskinen, Heli; Toppila, Esko; Olkinuora, Pekka
2010-01-01
Good rehearsal facilities are essential for musicians. Directive 2003/10/EC requires that musicians be protected from noise exposure, and a code of conduct gives guidelines for how this should be done. This study examines the room acoustics recommendations provided by the Finnish code of conduct and discusses whether they are adequate. Small teaching facilities were measured after renovation and compared to earlier measurements, and teachers' opinions of the facilities were surveyed before and after. The renovation did not decrease the noise exposure of the teachers; however, the majority preferred the facilities after the renovation. The Finnish code of conduct is not sufficient for facilities where loud instruments are played or for band practice. Good facilities can be designed, but they must be specified at the design stage for their intended use.
Psychometric Properties of the System for Coding Couples’ Interactions in Therapy - Alcohol
Owens, Mandy D.; McCrady, Barbara S.; Borders, Adrienne Z.; Brovko, Julie M.; Pearson, Matthew R.
2014-01-01
Few systems are available for coding in-session behaviors for couples in therapy. Alcohol Behavior Couples Therapy (ABCT) is an empirically supported treatment, but little is known about its mechanisms of behavior change. In the current study, an adapted version of the Motivational Interviewing for Significant Others coding system was developed into the System for Coding Couples’ Interactions in Therapy – Alcohol (SCCIT-A), which was used to code couples’ interactions and behaviors during ABCT. Results showed good inter-rater reliability of the SCCIT-A and provided evidence that the SCCIT-A may be a promising measure for understanding couples in therapy. A three factor model of the SCCIT-A was examined (Positive, Negative, and Change Talk/Counter-Change Talk) using a confirmatory factor analysis, but model fit was poor. Due to poor model fit, ratios were computed for Positive/Negative ratings and for Change Talk/Counter-Change Talk codes based on previous research in the couples and Motivational Interviewing literature. Post-hoc analyses examined correlations between specific SCCIT-A codes and baseline characteristics and indicated some concurrent validity. Correlations were run between ratios and baseline characteristics; ratios may be an alternative to using the factors from the SCCIT-A. Reliability and validity analyses suggest that the SCCIT-A has the potential to be a useful measure for coding in-session behaviors of both partners in couples therapy and could be used to identify mechanisms of behavior change for ABCT. Additional research is needed to improve the reliability of some codes and to further develop the SCCIT-A and other measures of couples’ interactions in therapy. PMID:25528049
Chaotic CDMA watermarking algorithm for digital image in FRFT domain
NASA Astrophysics Data System (ADS)
Liu, Weizhong; Yang, Wentao; Feng, Zhuoming; Zou, Xuecheng
2007-11-01
A digital image watermarking algorithm based on the fractional Fourier transform (FRFT) domain and a chaotic CDMA technique is presented in this paper. As a popular transmission technique, CDMA has many advantages, such as privacy, anti-jamming capability, and low power spectral density, which can provide robustness against image distortions and malicious attempts to remove or tamper with the watermark. A super-hybrid chaotic map with good auto-correlation and cross-correlation characteristics is adopted to produce many quasi-orthogonal codes (QOCs) that can replace the periodic PN codes used in traditional CDMA systems. The watermark data is divided into segments, each corresponding to a different chaotic QOC, and modulated into CDMA watermark data embedded into the low-frequency amplitude coefficients of the FRFT domain of the cover image. During watermark detection, each chaotic QOC extracts its corresponding watermark segment by calculating correlation coefficients between the chaotic QOC and the watermarked data of the detected image. The CDMA technique not only enhances the robustness of the watermark but also compresses the data of the modulated watermark. Experimental results show that the watermarking algorithm performs well in three respects: imperceptibility, robustness against attack, and security.
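The spreading-and-correlation idea can be sketched as below, with an ordinary logistic map standing in for the paper's super-hybrid chaotic map and the FRFT embedding step omitted; the seeds, segment count, and chip length are illustrative.

```python
import numpy as np

def chaotic_code(seed, length, mu=3.99):
    """Binary spreading sequence from a logistic map; different seeds give
    approximately quasi-orthogonal sequences."""
    x, out = seed, np.empty(length)
    for i in range(length):
        x = mu * x * (1.0 - x)
        out[i] = 1.0 if x > 0.5 else -1.0
    return out

L_chip = 1024
codes = [chaotic_code(s, L_chip) for s in (0.123, 0.456, 0.789)]
bits = np.array([1, -1, 1])              # watermark segments (illustrative)

# CDMA-style embedding: superpose bit-modulated codes; detection correlates
# the received signal with each segment's own chaotic code.
signal = sum(b * c for b, c in zip(bits, codes))
detected = [np.sign(np.dot(signal, c) / L_chip) for c in codes]
```

In the actual scheme the superposed signal would be added to low-frequency FRFT amplitude coefficients of the cover image rather than transmitted directly.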
NASA Astrophysics Data System (ADS)
van Heerwaarden, Chiel C.; van Stratum, Bart J. H.; Heus, Thijs; Gibbs, Jeremy A.; Fedorovich, Evgeni; Mellado, Juan Pedro
2017-08-01
This paper describes MicroHH 1.0, a new and open-source (www.microhh.org) computational fluid dynamics code for the simulation of turbulent flows in the atmosphere. It is primarily made for direct numerical simulation but also supports large-eddy simulation (LES). The paper covers the description of the governing equations, their numerical implementation, and the parameterizations included in the code. Furthermore, the paper presents the validation of the dynamical core in the form of convergence and conservation tests, and comparison of simulations of channel flows and slope flows against well-established test cases. The full numerical model, including the associated parameterizations for LES, has been tested for a set of cases under stable and unstable conditions, under the Boussinesq and anelastic approximations, and with dry and moist convection under stationary and time-varying boundary conditions. The paper presents performance tests showing good scaling from 256 to 32 768 processes. The graphical processing unit (GPU)-enabled version of the code can reach a speedup of more than an order of magnitude for simulations that fit in the memory of a single GPU.
Monte Carlo simulation of β-γ coincidence system using plastic scintillators in 4π geometry
NASA Astrophysics Data System (ADS)
Dias, M. S.; Piuvezam-Filho, H.; Baccarelli, A. M.; Takeda, M. N.; Koskinas, M. F.
2007-09-01
A modified version of a Monte Carlo code called Esquema, developed at the Nuclear Metrology Laboratory at IPEN, São Paulo, Brazil, has been applied to simulate a 4πβ(PS)-γ coincidence system designed for primary radionuclide standardisation. This system consists of a plastic scintillator in 4π geometry, for alpha or electron detection, coupled to a NaI(Tl) counter for gamma-ray detection. The response curves for monoenergetic electrons and photons were calculated previously with the Penelope code and applied as input data to Esquema. The latter code simulates all the disintegration processes, from the precursor nucleus to the ground state of the daughter radionuclide. As a result, the curve of observed disintegration rate as a function of the beta efficiency parameter can be simulated. A least-squares fit between the experimental activity values and the Monte Carlo calculation provides the actual source activity, without the need for conventional extrapolation procedures. Application of this methodology to 60Co and 133Ba radioactive sources is presented; the results are in good agreement with a conventional proportional counter 4πβ(PC)-γ coincidence system.
NASA Technical Reports Server (NTRS)
Van Dalsem, W. R.; Steger, J. L.
1983-01-01
A new, fast, direct-inverse, finite-difference boundary-layer code has been developed and coupled with a full-potential transonic airfoil analysis code via new inviscid-viscous interaction algorithms. The resulting code has been used to calculate transonic separated flows. The results are in good agreement with Navier-Stokes calculations and experimental data. Solutions are obtained in considerably less computer time than Navier-Stokes solutions of equal resolution. Because efficient inviscid and viscous algorithms are used, it is expected this code will also compare favorably with other codes of its type as they become available.
Maximum likelihood decoding analysis of Accumulate-Repeat-Accumulate Codes
NASA Technical Reports Server (NTRS)
Abbasfar, Aliazam; Divsalar, Dariush; Yao, Kung
2004-01-01
Repeat-Accumulate (RA) codes are the simplest turbo-like codes that achieve good performance. However, they cannot compete with turbo codes or low-density parity-check (LDPC) codes as far as performance is concerned. Accumulate-Repeat-Accumulate (ARA) codes, a subclass of LDPC codes, are obtained by adding a precoder in front of RA codes with puncturing, where an accumulator is chosen as the precoder. These codes not only are very simple but also achieve excellent performance with iterative decoding. In this paper, the performance of these codes under maximum likelihood (ML) decoding is analyzed and compared to random codes by means of very tight bounds. The weight distribution of some simple ARA codes is obtained, and through the existing tightest bounds we show that the ML SNR threshold of ARA codes approaches the performance of random codes very closely. We show that the use of the precoder improves the SNR threshold while the interleaving gain remains unchanged with respect to the RA code with puncturing.
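For orientation, a plain union bound on ML block-error probability computed from a (partial, hypothetical) weight enumerator is sketched below; the paper itself relies on much tighter bounds than this.

```python
from math import erfc, sqrt

def Q(x):
    """Gaussian tail function."""
    return 0.5 * erfc(x / sqrt(2.0))

def union_bound_block_error(weight_enum, rate, ebno_db):
    """Union bound on ML block-error probability over BPSK/AWGN:
    P_B <= sum_d A_d * Q(sqrt(2 * d * R * Eb/N0))."""
    ebno = 10.0 ** (ebno_db / 10.0)
    return sum(A_d * Q(sqrt(2.0 * d * rate * ebno))
               for d, A_d in weight_enum.items())

# Hypothetical partial weight enumerator {distance: multiplicity}, rate 1/2
A = {6: 3.0, 8: 12.0, 10: 40.0}
p_bound = union_bound_block_error(A, rate=0.5, ebno_db=3.0)
```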
Laser-driven planar Rayleigh-Taylor instability experiments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Glendinning, S.G.; Weber, S.V.; Bell, P.
1992-08-24
We have performed a series of experiments on the Nova Laser Facility to examine the hydrodynamic behavior of directly driven planar foils with initial perturbations of varying wavelength. The foils were accelerated with a single, frequency-doubled, smoothed, and temporally shaped laser beam at 0.8×10¹⁴ W/cm². The experiments are in good agreement with numerical simulations using the computer codes LASNEX and ORCHID, which show growth rates reduced to about 70% of classical for this nonlinear regime.
NASA Astrophysics Data System (ADS)
Pierazzo, E.; Artemieva, N.; Asphaug, E.; Baldwin, E. C.; Cazamias, J.; Coker, R.; Collins, G. S.; Crawford, D. A.; Davison, T.; Elbeshausen, D.; Holsapple, K. A.; Housen, K. R.; Korycansky, D. G.; Wünnemann, K.
2008-12-01
Over the last few decades, rapid improvement of computer capabilities has allowed impact cratering to be modeled with increasing complexity and realism, and has paved the way for a new era of numerical modeling of the impact process, including full, three-dimensional (3D) simulations. When properly benchmarked and validated against observation, computer models offer a powerful tool for understanding the mechanics of impact crater formation. This work presents results from the first phase of a project to benchmark and validate shock codes. A variety of 2D and 3D codes were used in this study, from commercial products like AUTODYN, to codes developed within the scientific community like SOVA, SPH, ZEUS-MP, iSALE, and codes developed at U.S. National Laboratories like CTH, SAGE/RAGE, and ALE3D. Benchmark calculations of shock wave propagation in aluminum-on-aluminum impacts were performed to examine the agreement between codes for simple idealized problems. The benchmark simulations show that variability in code results is to be expected due to differences in the underlying solution algorithm of each code, artificial stability parameters, spatial and temporal resolution, and material models. Overall, the inter-code variability in peak shock pressure as a function of distance is around 10 to 20%. In general, if the impactor is resolved by at least 20 cells across its radius, the underestimation of peak shock pressure due to spatial resolution is less than 10%. In addition to the benchmark tests, three validation tests were performed to examine the ability of the codes to reproduce the time evolution of crater radius and depth observed in vertical laboratory impacts in water and two well-characterized aluminum alloys. Results from these calculations are in good agreement with experiments. There appears to be a general tendency of shock physics codes to underestimate the radius of the forming crater. Overall, the discrepancy between the model and experiment results is between 10 and 20%, similar to the inter-code variability.
Generalized type II hybrid ARQ scheme using punctured convolutional coding
NASA Astrophysics Data System (ADS)
Kallel, Samir; Haccoun, David
1990-11-01
A method is presented for constructing rate-compatible convolutional (RCC) codes from known high-rate punctured convolutional codes obtained from the best rate-1/2 codes. The construction method is simple and straightforward, yet still yields good codes; moreover, low-rate codes can be obtained without any limit on the lowest achievable code rate. Based on the RCC codes, a generalized type-II hybrid ARQ scheme that combines the benefits of the modified type-II hybrid ARQ strategy of Hagenauer (1988) with the code-combining ARQ strategy of Chase (1985) is proposed and analyzed. With the proposed generalized type-II hybrid ARQ strategy, the throughput increases as the starting coding rate increases; as the channel degrades, the throughput tends to merge with that of rate-1/2 type-II hybrid ARQ schemes with code combining, allowing the system to be flexible and adaptive to channel conditions even under wide noise variations and severe degradation.
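A sketch of the puncturing mechanics that make rate compatibility possible: every bit kept by a higher-rate pattern must also be kept by all lower-rate patterns, so ARQ retransmissions only ever add bits. The patterns below are illustrative, not those of the paper, and the flattened output ordering is simplified.

```python
import numpy as np

# Period-3 puncturing maps over the two output streams of a rate-1/2
# mother code (1 = transmit, 0 = puncture).
P_3_4 = np.array([[1, 1, 0],
                  [1, 0, 1]])   # keeps 4 of 6 mother bits -> rate 3/4
P_3_5 = np.array([[1, 1, 0],
                  [1, 1, 1]])   # keeps 5 of 6 -> rate 3/5; note it keeps
                                # every bit that P_3_4 keeps (compatibility)

def puncture(coded, pattern):
    """coded: shape (2, N) outputs of a rate-1/2 mother encoder.
    Returns the surviving bits (transmit ordering simplified)."""
    period = pattern.shape[1]
    n = coded.shape[1] - coded.shape[1] % period
    mask = np.tile(pattern, n // period).astype(bool)
    return coded[:, :n][mask]

bits = np.random.default_rng(4).integers(0, 2, size=(2, 12))
tx_3_4 = puncture(bits, P_3_4)   # 16 of 24 bits survive -> rate 3/4
```

Under ARQ, a first transmission can use P_3_4; on failure, only the extra bits kept by P_3_5 (and eventually the full rate-1/2 stream) are sent and code-combined at the receiver.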
NASA Astrophysics Data System (ADS)
Davis, S.
2004-05-01
A principal means of preventing poor exterior lighting practice is a lighting control ordinance: an enforceable legal restriction on specific lighting practices deemed unacceptable by the government body having jurisdiction. Outdoor lighting codes have proven effective at reducing polluting and trespassing light. A well-written exterior lighting code will permit all forms of necessary illumination at reasonable intensities but will demand shielding and other measures to prevent trespass and light pollution. A good code will also apply to all forms of outdoor lighting, including streets, highways, and exterior signs, as well as the lighting on dwellings, commercial and industrial buildings, and building sites, and it can make exceptions for special uses provided they comply with an effective standard. The IDA Model Lighting Ordinance is a response to these needs. It is intended as an aid to communities that are seeking to take control of their outdoor lighting, to "take back the night" that is being lost to careless and excessive use of night lighting.
Employing multi-GPU power for molecular dynamics simulation: an extension of GALAMOST
NASA Astrophysics Data System (ADS)
Zhu, You-Liang; Pan, Deng; Li, Zhan-Wei; Liu, Hong; Qian, Hu-Jun; Zhao, Yang; Lu, Zhong-Yuan; Sun, Zhao-Yan
2018-04-01
We describe an algorithm for employing multi-GPU power on the basis of Message Passing Interface (MPI) domain decomposition in the molecular dynamics code GALAMOST, which is designed for coarse-grained simulation of soft matter. The multi-GPU version is developed from our previous single-GPU version. In multi-GPU runs, each GPU takes charge of one domain and runs the single-GPU code path. Communication between neighbouring domains follows an algorithm similar to that of the CPU-based code LAMMPS, but is optimised specifically for GPUs. We employ a memory-saving design that enlarges the maximum system size attainable on the same hardware, and an optimisation algorithm that prolongs the update period of the neighbour list. We demonstrate good performance of multi-GPU runs on workstation simulations of a Lennard-Jones liquid, a dissipative particle dynamics liquid, a polymer-nanoparticle composite, and two-patch particles, and good scaling across many cluster nodes for two-patch particles.
The "Good Housekeeping" Seal of Approval: An Historical Analysis 1909-1975.
ERIC Educational Resources Information Center
Oliver, Lauren
Examining the evolution of the "Good Housekeeping" Seal of Approval--one of the first codes to set standards for the products advertised in a periodical, a study analyzed issues of "Good Housekeeping" magazine from 1909 to 1975 (with the exception of issues from July 1929 to December 1938). The study also examined elements that…
Combinatorial neural codes from a mathematical coding theory perspective.
Curto, Carina; Itskov, Vladimir; Morrison, Katherine; Roth, Zachary; Walker, Judy L
2013-07-01
Shannon's seminal 1948 work gave rise to two distinct areas of research: information theory and mathematical coding theory. While information theory has had a strong influence on theoretical neuroscience, ideas from mathematical coding theory have received considerably less attention. Here we take a new look at combinatorial neural codes from a mathematical coding theory perspective, examining the error correction capabilities of familiar receptive field codes (RF codes). We find, perhaps surprisingly, that the high levels of redundancy present in these codes do not support accurate error correction, although the error-correcting performance of receptive field codes catches up to that of random comparison codes when a small tolerance to error is introduced. However, receptive field codes are good at reflecting distances between represented stimuli, while the random comparison codes are not. We suggest that a compromise in error-correcting capability may be a necessary price to pay for a neural code whose structure serves not only error correction, but must also reflect relationships between stimuli.
The use of a panel code on high lift configurations of a swept forward wing
NASA Technical Reports Server (NTRS)
Scheib, J. S.; Sandlin, D. R.
1985-01-01
A study was done on high lift configurations of a generic swept forward wing using a panel code prediction method. A survey was done of existing codes available at Ames, from which the program VSAERO was chosen. The results of VSAERO were compared with data obtained from the Ames 7- by 10-foot wind tunnel. The comparison in lift was good (within 3.5%), and the comparison of the pressure coefficients was also good. The pitching moment coefficients obtained by VSAERO were not in good agreement with experiment. VSAERO's ability to predict drag is questionable and cannot be counted on for accurate trends. Further studies were done on the effects of a leading edge glove, canards, leading edge sweeps, and various wing twists on spanwise loading and trim lift, with encouraging results. An unsuccessful attempt was made to model spanwise blowing and boundary layer control on the trailing edge flap. The potential-flow results of VSAERO were compared with experimental data for flap deflections with boundary layer control to check the first order effects.
Rattanaumpawan, Pinyo; Wongkamhla, Thanyarak; Thamlikitkul, Visanu
2016-04-01
To determine the accuracy of the International Statistical Classification of Diseases and Related Health Problems, 10th Revision (ICD-10) coding system in identifying comorbidities and infectious conditions using data from a Thai university hospital administrative database. A retrospective cross-sectional study was conducted among patients hospitalized in six general medicine wards at Siriraj Hospital. ICD-10 code data were identified and retrieved directly from the hospital administrative database. Patient comorbidities were captured using the ICD-10 coding algorithm for the Charlson comorbidity index. Infectious conditions were captured using groups of ICD-10 diagnostic codes carefully prepared by two independent infectious disease specialists. The accuracy of ICD-10 codes combined with microbiological data for diagnosis of urinary tract infection (UTI) and bloodstream infection (BSI) was evaluated. Clinical data gathered from chart review were considered the gold standard in this study. Between February 1 and May 31, 2013, a chart review of 546 hospitalization records was conducted. The mean age of hospitalized patients was 62.8 ± 17.8 years and 65.9% of patients were female. Median length of stay [range] was 10.0 [1.0-353.0] days and hospital mortality was 21.8%. Conditions with ICD-10 codes that had good sensitivity (90% or higher) were diabetes mellitus and HIV infection. Conditions with ICD-10 codes that had good specificity (90% or higher) were cerebrovascular disease, chronic lung disease, diabetes mellitus, cancer, HIV infection, and all infectious conditions. By combining ICD-10 codes with microbiological results, sensitivity increased from 49.5% to 66% for UTI and from 78.3% to 92.8% for BSI. The ICD-10 coding algorithm is reliable only for some selected conditions, including underlying diabetes mellitus and HIV infection. Combining microbiological results with ICD-10 codes increased the sensitivity of ICD-10 codes for identifying BSI. Future research is needed to improve the accuracy of the hospital administrative coding system in Thailand.
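For reference, the accuracy metrics quoted above reduce to simple counts against the chart-review gold standard. A minimal sketch, with invented example vectors rather than the study's data:

```python
# Minimal sketch of the accuracy metrics used above: sensitivity and
# specificity of ICD-10 coding against a chart-review gold standard.
# The example vectors are invented, not the study's data.
def sensitivity_specificity(coded, gold):
    tp = sum(c and g for c, g in zip(coded, gold))          # true positives
    tn = sum(not c and not g for c, g in zip(coded, gold))  # true negatives
    fp = sum(c and not g for c, g in zip(coded, gold))      # false positives
    fn = sum(not c and g for c, g in zip(coded, gold))      # false negatives
    return tp / (tp + fn), tn / (tn + fp)

# e.g., ICD-10 flags for one condition per hospitalization vs. chart review
coded = [True, False, True, True, False, False]
gold  = [True, False, False, True, True, False]
sens, spec = sensitivity_specificity(coded, gold)
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}")
```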
Ocean Color and Evidence of Chlorophyll Signature in the TOMS Minimum Reflectivity Data
NASA Technical Reports Server (NTRS)
Ahmad, Z.; Herman, J. R.; Bhartia, P. K.
2003-01-01
Analysis of the TOMS minimum reflectivity data for the 380 nm channel (R380) shows regions of high reflectivity values (approx. 7 to 8%) over the Sargasso Sea in the Northern Atlantic, the anti-cyclonic region in the Southern Atlantic, and a large part of the ocean in the Southern Pacific, and low values (approx. 5 to 6%) over the rest of the open ocean. Through radiative transfer simulations we show that these features are highly correlated with the distribution of chlorophyll in the ocean. Theoretical minimum reflectivity values, derived with CZCS chlorophyll concentration data as input to a vector ocean-atmosphere radiative transfer code developed by Ahmad and Fraser, show very good agreement with TOMS minimum reflectivity data for the winter season of 1980. For the summer season of 1980, good qualitative agreement is observed in the equatorial region and the northern hemisphere, but not as good in the southern hemisphere. Also, for cloud-free conditions, we find a very strong correlation between R340 minus R380 values and the chlorophyll concentration in the ocean. Results on the possible effects of absorbing and non-absorbing aerosols on the TOMS minimum reflectivity will also be presented. The results also imply that ocean color will affect aerosol retrieval over oceans unless corrected.
Code of Federal Regulations, 2010 CFR
2010-04-01
... TISSUE-BASED PRODUCTS Current Good Tissue Practice § 1271.290 Tracking. (a) General. If you perform any... designed to facilitate effective tracking, using the distinct identification code, from the donor to the... for recording the distinct identification code and type of each HCT/P distributed to a consignee to...
76 FR 44977 - Shipping Coordinating Committee; Notice of Committee Meeting
Federal Register 2010, 2011, 2012, 2013, 2014
2011-07-27
... packing of cargo transport units. --Consideration for the efficacy of Container Inspection Programme... Dangerous Goods, Solid Cargoes and Containers (DSC 16) to be held at IMO Headquarters, London, United... Solid Bulk Cargoes Code (IMSBC Code) including evaluation of properties of solid bulk cargos. --Casualty...
Truncation Depth Rule-of-Thumb for Convolutional Codes
NASA Technical Reports Server (NTRS)
Moision, Bruce
2009-01-01
In this innovation, it is shown that a commonly used rule of thumb (that the truncation depth of a convolutional code should be five times the memory length, m, of the code) is accurate only for rate 1/2 codes. In fact, the truncation depth should be 2.5 m/(1 - r), where r is the code rate. The accuracy of this new rule is demonstrated by tabulating the distance properties of a large set of known codes. This new rule was derived by bounding the losses due to truncation as a function of the code rate. With regard to particular codes, a good indicator of the required truncation depth is the path length at which all paths that diverge from a particular path have accumulated the minimum distance of the code. It is shown that the new rule of thumb provides an accurate prediction of this depth for codes of varying rates.
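The new rule of thumb can be applied directly. A minimal sketch (the memory and rate values below are arbitrary examples, not from the tabulated code set):

```python
# Direct transcription of the rule of thumb above: truncation depth
# 2.5*m/(1 - r), which reduces to the classic 5*m at rate r = 1/2.
def truncation_depth(m, r):
    """Suggested truncation depth for a rate-r convolutional code of memory m."""
    return 2.5 * m / (1.0 - r)

print(truncation_depth(6, 1/2))   # 30.0 -> the familiar "5x memory" value
print(truncation_depth(6, 3/4))   # 60.0 -> high-rate codes need deeper truncation
```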
Neutrino-induced reactions on nuclei
NASA Astrophysics Data System (ADS)
Gallmeister, K.; Mosel, U.; Weil, J.
2016-09-01
Background: Long-baseline experiments such as the planned Deep Underground Neutrino Experiment (DUNE) require theoretical descriptions of the complete event in a neutrino-nucleus reaction. Since nuclear targets are used, this requires a good understanding of neutrino-nucleus interactions. Purpose: Develop a consistent theory and code framework for the description of lepton-nucleus interactions that can be used to describe not only inclusive cross sections, but also the complete final state of the reaction. Methods: The Giessen-Boltzmann-Uehling-Uhlenbeck (GiBUU) implementation of quantum-kinetic transport theory is used, with improvements in its treatment of the nuclear ground state and of 2p2h interactions. For the latter, an empirical structure function from electron scattering data is used as a basis. Results: Results for electron-induced inclusive cross sections are given as a necessary check for the overall quality of this approach. The calculated neutrino-induced inclusive double-differential cross sections show good agreement with data from neutrino and antineutrino reactions for different neutrino flavors at MiniBooNE and T2K. Inclusive double-differential cross sections for MicroBooNE, NOvA, MINERvA, and LBNF/DUNE are given. Conclusions: Based on the GiBUU model of lepton-nucleus interactions, a good theoretical description of inclusive electron-, neutrino-, and antineutrino-nucleus data over a wide range of energies, different neutrino flavors, and different experiments is now possible. Since no tuning is involved, this theory and code should be reliable also in new energy regimes and for new target masses.
Microdosimetric investigation of the spectra from YAYOI by use of the Monte Carlo code PHITS.
Nakao, Minoru; Baba, Hiromi; Oishi, Ayumu; Onizuka, Yoshihiko
2010-07-01
The purpose of this study was to obtain the neutron energy spectrum at the surface of the moderator of the Tokyo University reactor YAYOI and to investigate the origins of peaks observed in that spectrum, by use of the Monte Carlo code PHITS, for the evaluation of biological studies. The moderator system was modeled using details from an article that reported calculated and measured neutron spectra at the surface of the moderator of the reactor. Our PHITS results were compared to those obtained with the discrete ordinates code ANISN described in the article. In addition, the changes in the neutron spectrum at material boundaries in the moderator system were examined with PHITS. Also, microdosimetric energy distributions of secondary charged particles from neutron recoil or reactions were calculated with PHITS and compared with a microdosimetric experiment. Our calculations of the neutron energy spectrum with PHITS showed good agreement with the results of ANISN in terms of the energy and structure of the peaks. However, the microdosimetric dose distribution spectrum from PHITS showed a remarkable discrepancy with the experimental one: the experimental spectrum could not be explained by PHITS when we used two mono-energetic neutron beams.
Reactivity Coefficient Calculation for AP1000 Reactor Using the NODAL3 Code
NASA Astrophysics Data System (ADS)
Pinem, Surian; Malem Sembiring, Tagor; Tukiran; Deswandri; Sunaryo, Geni Rina
2018-02-01
The reactivity coefficient is a very important parameter for the inherent safety and stability of nuclear reactor operation. To support the safety analysis of the reactor, calculation of the reactivity changes caused by temperature is necessary because they are tied to reactor operation. In this paper, the fuel and moderator temperature reactivity coefficients of the AP1000 core are calculated, as well as the moderator density and boron concentration coefficients. All of these coefficients are calculated at the hot full power (HFP) condition. All neutron diffusion constants as functions of temperature, water density, and boron concentration were generated by the SRAC2006 code. The core calculations for determination of the reactivity coefficient parameters are done using the NODAL3 code. The calculation results show that the fuel temperature, moderator temperature, and boron reactivity coefficients lie in the ranges of -2.613 pcm/°C to -4.657 pcm/°C, -1.00518 pcm/°C to 1.00649 pcm/°C, and -9.11361 pcm/ppm to -8.0751 pcm/ppm, respectively. For the water density reactivity coefficient, positive reactivity occurs at water temperatures below 190 °C. The calculated reactivity coefficients are judged accurate because they are in very good agreement with the design values.
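As a worked illustration of the quantity being tabulated, a temperature reactivity coefficient can be estimated by finite difference between two core calculations, using the standard relation rho = (k-1)/k. The k-eff values below are invented for the example, not taken from the NODAL3 results:

```python
# Finite-difference estimate of a temperature reactivity coefficient.
# The k-eff values are illustrative placeholders only.
def reactivity_pcm(keff):
    return (keff - 1.0) / keff * 1e5   # rho = (k-1)/k, expressed in pcm

def temp_coefficient(k1, T1, k2, T2):
    """pcm per degC between two states (e.g., two fuel temperatures)."""
    return (reactivity_pcm(k2) - reactivity_pcm(k1)) / (T2 - T1)

# e.g., raising the fuel temperature 50 degC lowers k-eff slightly:
print(temp_coefficient(1.00250, 300.0, 1.00100, 350.0))  # about -3 pcm/degC
```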
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vukovic, M.; Harper, M.; Breun, R.
1995-12-31
Current drive experiments on the Phaedrus-T tokamak performed with a low field side two-strap fast wave antenna at frequencies below {omega}{sub cH} show loop voltage drops of up to 30% with strap phasing (0, {pi}/2). RF-induced density fluctuations in the plasma core have also been observed with a microwave reflectometer. It is believed that they are caused by kinetic Alfven waves generated by mode conversion of fast waves at the Alfven resonance. Correlation of the observed density fluctuations with the magnitude of the {Delta}V{sub loop} suggests that the {Delta}V{sub loop} is attributable to current drive/heating due to mode converted kinetic Alfven waves. The toroidal cold plasma wave code LION is used to model the Alfven resonance mode conversion surfaces in the experiments, while the cylindrical hot plasma kinetic wave code ISMENE is used to model the behavior of kinetic Alfven waves at the Alfven resonance location. Initial results obtained from limited density, magnetic field, antenna phase, and impurity scans show good agreement between the RF-induced density fluctuations and the predicted behavior of the kinetic Alfven waves. Detailed comparisons between the density fluctuations and the code predictions are presented.
Multidimensional incremental parsing for universal source coding.
Bae, Soo Hyun; Juang, Biing-Hwang
2008-10-01
A multidimensional incremental parsing algorithm (MDIP) for multidimensional discrete sources, as a generalization of the Lempel-Ziv coding algorithm, is investigated. It consists of three essential component schemes: maximum decimation matching, a hierarchical structure for multidimensional source coding, and dictionary augmentation. As a counterpart of the longest-match search in the Lempel-Ziv algorithm, two classes of maximum decimation matching are studied. Also, the underlying behavior of the dictionary augmentation scheme for estimating the source statistics is examined. For an m-dimensional source, m augmentative patches are appended to the dictionary at each coding epoch, which would require the transmission of a substantial amount of information to the decoder. The hierarchical structure of the source coding algorithm resolves this issue by successively incorporating lower dimensional coding procedures in the scheme. With regard to universal lossy source coders, we propose two distortion functions: the local average distortion and the local minimax distortion with a set of threshold levels for each source symbol. For performance evaluation, we implemented three image compression algorithms based upon the MDIP: one lossless and two lossy. The lossless image compression algorithm does not perform better than Lempel-Ziv-Welch coding, but experimentally shows efficiency in capturing the source structure. The two lossy image compression algorithms are implemented using the two distortion functions, respectively. The algorithm based on the local average distortion is efficient at minimizing the signal distortion, while the images produced with the local minimax distortion show good perceptual fidelity relative to other compression algorithms. Our insights inspire future research on feature extraction of multidimensional discrete sources.
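For orientation, the sketch below shows the one-dimensional ancestor that MDIP generalizes: Lempel-Ziv (LZ78) incremental parsing, in which each new phrase extends the longest dictionary match by one symbol. The multidimensional matching and patch augmentation of MDIP itself are not reproduced here.

```python
# LZ78 incremental parsing: the 1D base case of the dictionary-growth
# idea that MDIP generalizes to m-dimensional patches.
def lz78_parse(data):
    dictionary = {"": 0}            # phrase -> index
    phrases, current = [], ""
    for symbol in data:
        if current + symbol in dictionary:
            current += symbol       # keep extending the match
        else:
            # emit (index of longest match, innovation symbol), then augment
            phrases.append((dictionary[current], symbol))
            dictionary[current + symbol] = len(dictionary)
            current = ""
    if current:                     # flush a trailing partial phrase
        phrases.append((dictionary[current[:-1]], current[-1]))
    return phrases

print(lz78_parse("abababcab"))      # [(0,'a'), (0,'b'), (1,'b'), (3,'c'), (1,'b')]
```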
Fluid Transient Analysis during Priming of Evacuated Line
NASA Technical Reports Server (NTRS)
Bandyopadhyay, Alak; Majumdar, Alok K.; Holt, Kimberley
2017-01-01
Water hammer analysis in pipe lines, particularly during priming of evacuated lines, is important for the design of spacecraft and other in-space applications. In the current study, a finite volume network flow analysis code is used to model three different geometrical configurations: the first two are straight pipes, one with atmospheric air and the other evacuated, and the third is a representation of a complex flow network system. The numerical results show very good qualitative and quantitative agreement with measured data available in the literature. The peak pressure and impact time for straight-pipe priming into an evacuated line show excellent agreement.
Aeroelastic Calculations Using CFD for a Typical Business Jet Model
NASA Technical Reports Server (NTRS)
Gibbons, Michael D.
1996-01-01
Two time-accurate Computational Fluid Dynamics (CFD) codes were used to compute several flutter points for a typical business jet model. The model consisted of a rigid fuselage with a flexible semispan wing and was tested in the Transonic Dynamics Tunnel at NASA Langley Research Center, where experimental flutter data were obtained from M(sub infinity) = 0.628 to M(sub infinity) = 0.888. The computational results were computed using CFD codes based on the inviscid TSD equation (CAP-TSD) and the Euler/Navier-Stokes equations (CFL3D-AE). Comparisons are made between the analytical results and with experiment where appropriate. The results presented here show that the Navier-Stokes method is required near the transonic dip, due to the strong viscous effects, while the TSD and Euler methods provide good results at the lower Mach numbers.
Coupling LAMMPS with Lattice Boltzmann fluid solver: theory, implementation, and applications
NASA Astrophysics Data System (ADS)
Tan, Jifu; Sinno, Talid; Diamond, Scott
2016-11-01
The study of fluid flow coupled with solids has many applications in biological and engineering problems, e.g., blood cell transport, particulate flow, and drug delivery. We present a partitioned approach to solve this coupled multiphysics problem. The fluid motion is solved by the lattice Boltzmann method, while the solid displacement and deformation are simulated by the Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS). The coupling is achieved through the immersed boundary method, so that the expensive remeshing step is eliminated. The code can model both rigid and deformable solids, and shows very good scaling results. It was validated with classic problems such as the migration of rigid particles and an ellipsoidal particle's orbit in shear flow. Examples of applications to blood flow, drug delivery, and platelet adhesion and rupture are also given in the paper. Supported by NIH.
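A minimal one-dimensional sketch of the immersed boundary coupling mentioned above: fluid velocity is interpolated to a solid marker, and the marker force is spread back to the grid, through the same discrete delta kernel. The grid size, kernel choice, and data are assumptions for illustration, not the coupled code itself.

```python
# 1D immersed boundary operations with Peskin's 4-point discrete delta.
import numpy as np

h = 1.0  # grid spacing, grid nodes at i*h

def delta4(r):
    """Peskin's 4-point discrete delta function (support |r| < 2)."""
    r = abs(r)
    if r < 1:
        return (3 - 2*r + np.sqrt(1 + 4*r - 4*r**2)) / 8
    if r < 2:
        return (5 - 2*r - np.sqrt(-7 + 12*r - 4*r**2)) / 8
    return 0.0

def interpolate(u, X):
    """Fluid velocity at marker position X, from nearby grid nodes."""
    i0 = int(np.floor(X / h))
    return sum(u[i] * delta4((X - i*h) / h)
               for i in range(i0 - 2, i0 + 3) if 0 <= i < len(u))

def spread(f, X, F):
    """Spread marker force F back onto the grid force array f (in place)."""
    i0 = int(np.floor(X / h))
    for i in range(i0 - 2, i0 + 3):
        if 0 <= i < len(f):
            f[i] += F * delta4((X - i*h) / h) / h

u = np.linspace(0.0, 1.0, 16)       # toy velocity field
print(interpolate(u, X=5.3))        # marker sits between nodes 5 and 6
```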
Social Work and End-of-Life Decisions: Self-Determination and the Common Good.
ERIC Educational Resources Information Center
Wesley, Carol A.
1996-01-01
Self-determination and the common good must be respected in social work practice and policy regarding end-of-life decisions. This article discusses self-determination in end-of-life decision making, ethical decision making and the NASW Code of Ethics, and professional ethics based on a balanced view of both self-determination and the common good.…
Code of Federal Regulations, 2013 CFR
2013-10-01
... the Safe Transport of Dangerous Goods by Air and the International Maritime Dangerous Goods Code). Any... responsible, under its national law, for the control or regulation of some aspect of hazardous materials... used in the International Civil Aviation Organization's (ICAO) Technical Instructions for the Safe...
Code of Federal Regulations, 2011 CFR
2011-10-01
... the Safe Transport of Dangerous Goods by Air and the International Maritime Dangerous Goods Code). Any... responsible, under its national law, for the control or regulation of some aspect of hazardous materials... used in the International Civil Aviation Organization's (ICAO) Technical Instructions for the Safe...
Code of Federal Regulations, 2014 CFR
2014-10-01
... the Safe Transport of Dangerous Goods by Air and the International Maritime Dangerous Goods Code). Any... responsible, under its national law, for the control or regulation of some aspect of hazardous materials... used in the International Civil Aviation Organization's (ICAO) Technical Instructions for the Safe...
High performance Python for direct numerical simulations of turbulent flows
NASA Astrophysics Data System (ADS)
Mortensen, Mikael; Langtangen, Hans Petter
2016-06-01
Direct Numerical Simulation (DNS) of the Navier-Stokes equations is an invaluable research tool in fluid dynamics. Still, there are few publicly available research codes and, due to the heavy number crunching implied, available codes are usually written in low-level languages such as C/C++ or Fortran. In this paper we describe a pure scientific Python pseudo-spectral DNS code that nearly matches the performance of C++ for thousands of processors and billions of unknowns. We also describe a version optimized through Cython that is found to match the speed of C++. The solvers are written from scratch in Python: the mesh, the MPI domain decomposition, and the temporal integrators. The solvers have been verified and benchmarked on the Shaheen supercomputer at the KAUST supercomputing laboratory, and we are able to show very good scaling up to several thousand cores. A very important part of the implementation is the mesh decomposition (we implement both slab and pencil decompositions) and the 3D parallel Fast Fourier Transforms (FFT). The mesh decomposition and FFT routines have been implemented in Python using serial FFT routines (either NumPy, pyFFTW, or any other serial FFT module), NumPy array manipulations, and MPI communications handled by MPI for Python (mpi4py). We show how we are able to execute a 3D parallel FFT in Python for a slab mesh decomposition using 4 lines of compact Python code, for which the parallel performance on Shaheen is found to be slightly better than that of similar routines provided by the FFTW library. For a pencil mesh decomposition, 7 lines of code are required to execute a transform.
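A sketch of the slab-decomposed parallel FFT idea described above: 2D FFTs on the local slab, an MPI all-to-all transpose, then the FFT along the remaining axis. This is an illustration under the assumption that N is divisible by the number of ranks; it is not the paper's exact 4-line routine.

```python
# Slab-decomposed 3D FFT: each rank owns an (N/P, N, N) slab of an N^3 mesh.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
P = comm.Get_size()
N = 64                                    # global mesh is N^3 (N divisible by P)
Np = N // P                               # slab thickness per rank

u = np.random.rand(Np, N, N)              # local slab of the field

def fftn_mpi(u):
    """Forward 3D FFT of a slab-decomposed array of shape (Np, N, N)."""
    u_hat = np.fft.fft2(u, axes=(1, 2))            # FFTs in local planes
    # Pack P chunks of the y-axis and exchange them, so each rank ends up
    # holding the full x-extent for its own share of the y-axis.
    buf = np.stack(np.split(u_hat, P, axis=1))     # (P, Np, N//P, N)
    comm.Alltoall(MPI.IN_PLACE, buf)
    u_hat = buf.reshape(N, N // P, N)              # x-axis now complete
    return np.fft.fft(u_hat, axis=0)               # finish along x

u_hat = fftn_mpi(u)
```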
Lim, Robyn R
2007-08-01
This article describes some work from the Therapeutic Products Directorate of Health Canada regarding Good Review Practices (GRP). Background information is provided on the Therapeutic Products Directorate (TPD) and its regulatory activities regarding drug and medical device assessment in both the pre- and post-market setting. The TPD Good Review Guiding Principles (GRGP) are described, which include a Definition of a Good Therapeutic Product Regulatory Review, Ten Hallmarks of a Good Therapeutic Product Regulatory Review, and Ten Precepts. The analysis of the guiding principles discusses possible linkages between the guiding principles and intellectual virtues. Through this analysis, a hypothesis is developed that the guiding principles outline a code of intellectual conduct for Health Canada's reviewers of evidence for efficacy, safety, manufacturing quality, and benefit-risk regarding therapeutic products. Opportunities to advance therapeutic product regulatory review as a scientific discipline in its own right, and to acknowledge that these reviewers constitute a specific community of practice, are discussed. Integration of intellectual and ethical approaches across therapeutic product review sectors is also suggested.
NASA Astrophysics Data System (ADS)
Duc-Toan, Nguyen; Tien-Long, Banh; Young-Suk, Kim; Dong-Won, Jung
2011-08-01
In this study, a modified Johnson-Cook (J-C) model and a new method for determining the J-C material parameters are proposed to predict more accurately the stress-strain curves of tensile tests at elevated temperatures. A MATLAB tool is used to determine material parameters by fitting a curve following Ludwick's hardening law at various elevated temperatures. Those hardening law parameters are then utilized to determine the modified J-C model material parameters. The modified J-C model yields better predictions than the conventional one. As a first verification, an FEM tensile test simulation based on the isotropic hardening model for boron sheet steel at elevated temperatures was carried out via a user-material subroutine, using an explicit finite element code, and compared with the measurements. The temperature decrease of all elements due to the air cooling process was then calculated with the modified J-C model and coded in a VUMAT subroutine for tensile test simulation of the cooling process. The modified J-C model showed good agreement between the simulation results and the corresponding experiments. The second investigation applied the model to V-bending spring-back prediction for magnesium alloy sheets at elevated temperatures. Here, the proposed J-C model, combined with a modified hardening law accounting for the unusual plastic behaviour of magnesium alloy sheet, was adopted for FEM simulation of V-bending spring-back prediction and showed good agreement with the corresponding experiments.
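For reference, the conventional Johnson-Cook flow stress that the study modifies has the familiar multiplicative form. The sketch below uses illustrative parameter values; the paper's modification and fitted constants are not reproduced here.

```python
# Conventional Johnson-Cook flow stress. All parameter values below are
# illustrative placeholders, not the paper's fitted constants.
import math

def johnson_cook(strain, strain_rate, T,
                 A=300.0, B=450.0, n=0.35, C=0.02, m=1.0,
                 ref_rate=1.0, T_room=293.0, T_melt=1800.0):
    """Flow stress (MPa): (A + B*eps^n)(1 + C*ln(rate ratio))(1 - T*^m)."""
    T_star = (T - T_room) / (T_melt - T_room)   # homologous temperature
    return ((A + B * strain**n)
            * (1.0 + C * math.log(strain_rate / ref_rate))
            * (1.0 - T_star**m))

print(johnson_cook(0.10, 1.0, 293.0))   # room temperature
print(johnson_cook(0.10, 1.0, 700.0))   # thermal softening at 700 K
```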
Computer modeling of pulsed CO2 lasers for lidar applications
NASA Technical Reports Server (NTRS)
Spiers, Gary D.
1993-01-01
The objective of this effort is to develop code enabling accurate prediction of the performance of pulsed transversely excited (TE) CO2 lasers prior to their construction. This is of particular benefit to the NASA Laser Atmospheric Wind Sounder (LAWS) project. A benefit of the completed code is that, although developed specifically for the pulsed CO2 laser, much of it can be modified to model other laser systems of interest to the lidar community. A Boltzmann equation solver has been developed which enables the electron excitation rates for the vibrational levels of CO2 and N2, together with the electron ionization and attachment coefficients, to be determined for any CO2 laser gas mixture consisting of a combination of CO2, N2, CO, and He. The validity of the model has been verified by comparison with published material. The results from the Boltzmann equation solver have been used as input to the laser kinetics code, which is currently under development. A numerical code to model the laser-induced medium perturbation (LIMP) arising from relaxation of the lower laser level has been developed and used to determine the effect of LIMP on the frequency spectrum of the LAWS laser output pulse. The enclosed figures show representative results for a laser operating at 0.5 atm with a discharge cross-section of 4.5 cm, producing a 20 J pulse with a FWHM of 3.1 microns. The first four plots show the temporal evolution of the laser pulse power, the energy evolution, the LIMP frequency chirp, and the electric field magnitude. The electric field magnitude is obtained by beating the calculated complex electric field with a local oscillator signal. The remaining two figures show the power spectrum and the energy distribution in the pulse as a function of the varying pulse frequency. The LIMP theory has been compared with experimental data from the NOAA Windvan lidar and found to be in good agreement.
NASA Astrophysics Data System (ADS)
Boss, Alan P.
2009-03-01
The disk instability mechanism for giant planet formation is based on the formation of clumps in a marginally gravitationally unstable protoplanetary disk, which must lose thermal energy through a combination of convection and radiative cooling if they are to survive and contract to become giant protoplanets. While there is good observational support for forming at least some giant planets by disk instability, the mechanism has become theoretically contentious, with different three-dimensional radiative hydrodynamics codes often yielding different results. Rigorous code testing is required to make further progress. Here we present two new analytical solutions for radiative transfer in spherical coordinates, suitable for testing the code employed in all of the Boss disk instability calculations. The testing shows that the Boss code radiative transfer routines do an excellent job of relaxing to and maintaining the analytical results for the radial temperature and radiative flux profiles for a spherical cloud with high or moderate optical depths, including the transition from optically thick to optically thin regions. These radial test results are independent of whether the Eddington approximation, diffusion approximation, or flux-limited diffusion approximation routines are employed. The Boss code does an equally excellent job of relaxing to and maintaining the analytical results for the vertical (θ) temperature and radiative flux profiles for a disk with a height proportional to the radial distance. These tests strongly support the disk instability mechanism for forming giant planets.
Simulation of prompt gamma-ray emission during proton radiotherapy.
Verburg, Joost M; Shih, Helen A; Seco, Joao
2012-09-07
The measurement of prompt gamma rays emitted from proton-induced nuclear reactions has been proposed as a method to verify in vivo the range of a clinical proton radiotherapy beam. A good understanding of prompt gamma-ray emission during proton therapy is key to developing a clinically feasible technique, as it can facilitate accurate simulations and uncertainty analysis of gamma detector designs. The gamma production cross-sections may also be incorporated as prior knowledge in the reconstruction of the proton range from the measurements. In this work, we performed simulations of proton-induced nuclear reactions with the main elements of human tissue, carbon-12, oxygen-16, and nitrogen-14, using the nuclear reaction models of the GEANT4 and MCNP6 Monte Carlo codes and the dedicated nuclear reaction codes TALYS and EMPIRE. For each code, we made an effort to optimize the input parameters and model selection. The results of the models were compared to available experimental data on discrete gamma line cross-sections. Overall, the dedicated nuclear reaction codes reproduced the experimental data more consistently, while the Monte Carlo codes showed larger discrepancies for a number of gamma lines. The model differences lead to a variation in the total gamma production near the end of the proton range by a factor of about 2. These results indicate a need for additional theoretical and experimental study of proton-induced gamma emission in human tissue.
Topics in quantum cryptography, quantum error correction, and channel simulation
NASA Astrophysics Data System (ADS)
Luo, Zhicheng
In this thesis, we mainly investigate four different topics: efficiently implementable codes for quantum key expansion [51], quantum error-correcting codes based on privacy amplification [48], private classical capacity of quantum channels [44], and classical channel simulation with quantum side information [49, 50]. For the first topic, we propose an efficiently implementable quantum key expansion protocol, capable of increasing the size of a pre-shared secret key by a constant factor. Previously, the Shor-Preskill proof [64] of the security of the Bennett-Brassard 1984 (BB84) [6] quantum key distribution protocol relied on the theoretical existence of good classical error-correcting codes with the "dual-containing" property, but no explicit and efficiently decodable construction of such codes is known. We show that the dual-containing constraint can be lifted by employing non-dual-containing codes with excellent performance and efficient decoding algorithms. For the second topic, we propose a construction of Calderbank-Shor-Steane (CSS) [19, 68] quantum error-correcting codes, originally based on pairs of mutually dual-containing classical codes, by combining a classical code with a two-universal hash function. We show, using the results of Renner and Koenig [57], that the communication rates of such codes approach the hashing bound on tensor powers of Pauli channels in the limit of large block length. For the third topic, we prove a regularized formula for the secret-key-assisted capacity region of a quantum channel for transmitting private classical information. This result parallels the work of Devetak on entanglement-assisted quantum communication capacity. The formula yields a new family protocol, the private father protocol, within the resource inequality framework, which includes private classical communication without assisted secret keys as a child protocol. For the fourth topic, we study and solve the problem of classical channel simulation with quantum side information at the receiver. Our main theorem has two important corollaries: rate-distortion theory with quantum side information, and common randomness distillation. Simple proofs of achievability for classical multi-terminal source coding problems can be given via a unified approach using the channel simulation theorem as a building block. The fully quantum generalization of the problem is also conjectured, with outer and inner bounds on the achievable rate pairs.
Optical network security using unipolar Walsh code
NASA Astrophysics Data System (ADS)
Sikder, Somali; Sarkar, Madhumita; Ghosh, Shila
2018-04-01
Optical code-division multiple-access (OCDMA) is considered a good technique for providing optical-layer security. Many research works have been published that enhance optical network security using optical signal processing. This paper demonstrates the design of an AWG (arrayed waveguide grating) router-based optical network for spectral-amplitude-coding (SAC) OCDMA with Walsh codes, forming a reconfigurable network codec that changes signature codes to guard against eavesdropping. We propose a code reconfiguration scheme that improves network access confidentiality by changing the signature codes through cyclic rotations. Each OCDMA network user is assigned a unique signature code to transmit information, and at the receiving end each receiver correlates its own signature pattern a(n) with the received pattern s(n); a signal arriving at its proper destination satisfies s(n)=a(n).
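A small sketch of the signature-and-correlation idea: unipolar Walsh codes built from a Sylvester Hadamard matrix, with cyclic rotation standing in for the reconfiguration step. This is illustrative only; the AWG router architecture and the optical layer itself are not modeled.

```python
# Unipolar Walsh signatures with cyclic-rotation reconfiguration.
import numpy as np

def walsh_unipolar(order):
    """Unipolar Walsh codes of length 2**order (rows of the matrix)."""
    H = np.array([[1]])
    for _ in range(order):
        H = np.block([[H, H], [H, -H]])   # Sylvester construction
    return ((H + 1) // 2).astype(int)     # map {+1, -1} -> {1, 0}

codes = walsh_unipolar(3)                 # 8 codes of length 8
user_code = codes[3]

# Reconfiguration: rotate the signature cyclically
rotated = np.roll(user_code, 2)

# Receiver correlates its signature a(n) with the received pattern s(n)
received = rotated                        # ideal channel, intended user
print(int(np.dot(rotated, received)))     # own signal: high correlation (4)
print(int(np.dot(np.roll(user_code, 5), received)))  # wrong rotation: lower (2)
```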
A Review on Spectral Amplitude Coding Optical Code Division Multiple Access
NASA Astrophysics Data System (ADS)
Kaur, Navpreet; Goyal, Rakesh; Rani, Monika
2017-06-01
This manuscript deals with the analysis of Spectral Amplitude Coding Optical Code Division Multiple Access (SAC-OCDMA) systems. The major noise source in optical CDMA is co-channel interference from other users, known as multiple access interference (MAI). System performance in terms of bit error rate (BER) degrades as MAI increases. The number of users and the type of codes used directly decide the performance of the system. MAI can be restricted by efficient design of optical codes, implemented with a unique architecture to accommodate a larger number of users. Hence, there is a need for a technique such as spectral direct detection (SDD) with a modified double-weight code, which can provide better cardinality and good correlation properties.
33 CFR 401.68 - Explosives permission letter.
Code of Federal Regulations, 2014 CFR
2014-07-01
..., DEPARTMENT OF TRANSPORTATION SEAWAY REGULATIONS AND RULES Regulations Dangerous Cargo § 401.68 Explosives... Dangerous Goods Regulations (Canada), the United States regulations under the Dangerous Cargo Act and the International Maritime Dangerous Goods Code, may be made to the St. Lawrence Seaway Management Corporation, 202...
33 CFR 401.68 - Explosives Permission Letter.
Code of Federal Regulations, 2011 CFR
2011-07-01
..., DEPARTMENT OF TRANSPORTATION SEAWAY REGULATIONS AND RULES Regulations Dangerous Cargo § 401.68 Explosives... Regulations respecting the Carriage of Dangerous Goods, the United States Regulations under the Dangerous Cargo Act and the International Maritime Dangerous Goods Code may be made to the Saint Lawrence Seaway...
33 CFR 401.68 - Explosives Permission Letter.
Code of Federal Regulations, 2012 CFR
2012-07-01
..., DEPARTMENT OF TRANSPORTATION SEAWAY REGULATIONS AND RULES Regulations Dangerous Cargo § 401.68 Explosives... Regulations respecting the Carriage of Dangerous Goods, the United States Regulations under the Dangerous Cargo Act and the International Maritime Dangerous Goods Code may be made to the Saint Lawrence Seaway...
Code of Federal Regulations, 2013 CFR
2013-10-01
... the Safe Transport of Dangerous Goods by Air and the International Maritime Dangerous Goods Code). Any... is responsible, under its national law, for the control or regulation of some aspect of hazardous...” which is used in the International Civil Aviation Organization's (ICAO) Technical Instructions for the...
Code of Federal Regulations, 2012 CFR
2012-10-01
... the Safe Transport of Dangerous Goods by Air and the International Maritime Dangerous Goods Code). Any... is responsible, under its national law, for the control or regulation of some aspect of hazardous...” which is used in the International Civil Aviation Organization's (ICAO) Technical Instructions for the...
Code of Federal Regulations, 2014 CFR
2014-10-01
... the Safe Transport of Dangerous Goods by Air and the International Maritime Dangerous Goods Code). Any... is responsible, under its national law, for the control or regulation of some aspect of hazardous...” which is used in the International Civil Aviation Organization's (ICAO) Technical Instructions for the...
Code of Federal Regulations, 2011 CFR
2011-10-01
... the Safe Transport of Dangerous Goods by Air and the International Maritime Dangerous Goods Code). Any... is responsible, under its national law, for the control or regulation of some aspect of hazardous...” which is used in the International Civil Aviation Organization's (ICAO) Technical Instructions for the...
Design optimization of beta- and photovoltaic conversion devices
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wichner, R.; Blum, A.; Fischer-Colbrie, E.
1976-01-08
This report presents the theoretical and experimental results of an LLL Electronics Engineering research program aimed at optimizing the design and electronic-material parameters of beta- and photovoltaic p-n junction conversion devices. To meet this objective, a comprehensive computer code has been developed that can handle a broad range of practical conditions. The physical model upon which the code is based is described first. Then, an example is given of a set of optimization calculations, along with the resulting optimized efficiencies for silicon (Si) and gallium-arsenide (GaAs) devices. The model we have developed, however, is not limited to these materials. It can handle any appropriate material--single or polycrystalline--provided energy absorption and electron-transport data are available. To check code validity, the performance of experimental silicon p-n junction devices (produced in-house) was measured under various light intensities and spectra as well as under tritium beta irradiation. The results of these tests were then compared with predicted results based on the known or best-estimated device parameters. The comparison showed very good agreement between the calculated and the measured results.
Measurement and prediction of model-rotor flow fields
NASA Technical Reports Server (NTRS)
Owen, F. K.; Tauber, M. E.
1985-01-01
This paper shows that a laser velocimeter can be used to measure accurately the three-component velocities induced by a model rotor at transonic tip speeds. The measurements, which were made at Mach numbers from 0.85 to 0.95 and at zero advance ratio, yielded high-resolution, orthogonal velocity values. The measured velocities were used to check the ability of the ROT22 full-potential rotor code to predict accurately the transonic flow field in the crucial region around and beyond the tip of a high-speed rotor blade. The good agreement between the calculated and measured velocities established the code's ability to predict the off-blade flow field at transonic tip speeds. This supplements previous comparisons in which surface pressures were shown to be well predicted on two different tips at advance ratios up to 0.45, especially at the critical 90 deg azimuthal blade position. These results demonstrate that the ROT22 code can be used with confidence to predict the important tip-region flow field, including the occurrence, strength, and location of shock waves causing high drag and noise.
High Resolution Aerospace Applications using the NASA Columbia Supercomputer
NASA Technical Reports Server (NTRS)
Mavriplis, Dimitri J.; Aftosmis, Michael J.; Berger, Marsha
2005-01-01
This paper focuses on the parallel performance of two high-performance aerodynamic simulation packages on the newly installed NASA Columbia supercomputer. These packages include both a high-fidelity, unstructured, Reynolds-averaged Navier-Stokes solver and a fully-automated inviscid flow package for cut-cell Cartesian grids. The complementary combination of these two simulation codes enables high-fidelity characterization of aerospace vehicle design performance over the entire flight envelope through extensive parametric analysis and detailed simulation of critical regions of the flight envelope. Both packages are industrial-level codes designed for complex geometry and incorporate customized multigrid solution algorithms. The performance of these codes on Columbia is examined using both MPI and OpenMP and using both the NUMAlink and InfiniBand interconnect fabrics. Numerical results demonstrate good scalability on up to 2016 CPUs using the NUMAlink4 interconnect, with measured computational rates in the vicinity of 3 TFLOP/s, while InfiniBand showed some performance degradation at high CPU counts, particularly with multigrid. Nonetheless, the results are encouraging enough to indicate that larger test cases using combined MPI/OpenMP communication should scale well on even more processors.
Features of Discontinuous Galerkin Algorithms in Gkeyll, and Exponentially-Weighted Basis Functions
NASA Astrophysics Data System (ADS)
Hammett, G. W.; Hakim, A.; Shi, E. L.
2016-10-01
There are various versions of discontinuous Galerkin (DG) algorithms with interesting features that could help with challenging higher-dimensional kinetic problems (such as edge turbulence in tokamaks and stellarators). We are developing the gyrokinetic code Gkeyll based on DG methods. Higher-order methods do more FLOPS to extract more information per byte, thus reducing memory and communication costs (which are a bottleneck for exascale computing). The inner product norm can be chosen to preserve energy conservation with non-polynomial basis functions (such as Maxwellian-weighted bases), which alternatively can be viewed as a Petrov-Galerkin method. This allows a full-F code to benefit from Gaussian quadrature similar to that employed in popular δf continuum gyrokinetic codes. We show some tests for a 1D Spitzer-Härm heat flux problem, which requires good resolution of the tail. For two velocity dimensions, this approach could lead to a factor of 10 or more speedup. Supported by the Max-Planck/Princeton Center for Plasma Physics, the SciDAC Center for the Study of Plasma Microturbulence, and DOE Contract DE-AC02-09CH11466.
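A minimal sketch of the weighted-basis idea: expand a distribution in a Maxwellian-weighted polynomial basis v^k exp(-v^2), with an inner product weighted by exp(+v^2) so that every integral reduces to Gauss-Hermite quadrature. This illustrates the principle only and is not Gkeyll code; the basis, weight, and target function are assumptions for the example.

```python
# Projection onto a Maxwellian-weighted basis via Gauss-Hermite quadrature.
import numpy as np

nodes, weights = np.polynomial.hermite.hermgauss(20)  # exact for exp(-v^2) moments
K = 4                                                  # basis size

def gh_integrate(g):
    """Approximate integral of g(v) * exp(-v**2) over the real line."""
    return np.dot(weights, g(nodes))

# Gram matrix M[j,k] = <phi_j, phi_k> with phi_k = v^k exp(-v^2) under an
# exp(+v^2)-weighted inner product: reduces to moments of exp(-v^2).
M = np.array([[gh_integrate(lambda v, j=j, k=k: v**(j + k))
               for k in range(K)] for j in range(K)])

# Target distribution: a slightly distorted Maxwellian, f = g(v) exp(-v^2)
g = lambda v: 1.0 + 0.3 * v + 0.1 * v**2
b = np.array([gh_integrate(lambda v, k=k: g(v) * v**k) for k in range(K)])

c = np.linalg.solve(M, b)
print(c)   # recovers ~[1.0, 0.3, 0.1, 0.0], exact within quadrature error
```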
Self-recovery fragile watermarking algorithm based on SPIHT
NASA Astrophysics Data System (ADS)
Xin, Li Ping
2015-12-01
A fragile watermarking algorithm based on SPIHT coding is proposed that can recover the primary image itself. The novelty of the algorithm is that it performs both tamper localization and self-restoration, with very good recovery results. First, utilizing the zero-tree structure, the algorithm compresses and encodes the image itself to obtain self-correlated watermark data, greatly reducing the quantity of watermark data to embed. The watermark data are then encoded with an error-correcting code, and the check bits and watermark bits are scrambled and embedded to enhance the recovery ability. At the same time, by embedding the watermark into the two least significant bit-planes of the gray-level image, the watermarked image retains a good visual appearance. The experimental results show that the proposed algorithm can not only detect various processing operations such as noise addition, cropping, and filtering, but can also recover tampered images and realize blind detection. Peak signal-to-noise ratios of the watermarked images were higher than those of other similar algorithms, and the algorithm's resistance to attacks is enhanced.
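A minimal sketch of the embedding step described above: writing watermark bits into the two least significant bit-planes of an 8-bit gray-level image. The SPIHT compression, error-correction coding, and scrambling stages of the algorithm are not reproduced here; the example image and bits are invented.

```python
# Two-LSB watermark embedding in an 8-bit gray-level image.
import numpy as np

def embed_two_lsb(image, bits):
    """Embed a flat bit array (2 bits per pixel) into the two LSBs."""
    flat = image.flatten()
    pairs = bits.reshape(-1, 2)
    payload = pairs[:, 0] * 2 + pairs[:, 1]       # 2-bit value per pixel
    stego = (flat & 0b11111100) | payload         # clear then set two LSBs
    return stego.reshape(image.shape)

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(4, 4), dtype=np.uint8)
wm = rng.integers(0, 2, size=img.size * 2, dtype=np.uint8)
stego = embed_two_lsb(img, wm)
print(np.max(np.abs(stego.astype(int) - img.astype(int))))   # at most 3 gray levels
```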
Recent progress in the analysis of iced airfoils and wings
NASA Technical Reports Server (NTRS)
Cebeci, Tuncer; Chen, Hsun H.; Kaups, Kalle; Schimke, Sue
1992-01-01
Recent work on the analysis of iced airfoils and wings is described. Ice shapes for multielement airfoils and wings are computed using an extension of the LEWICE code that was developed for single airfoils. The aerodynamic properties of the iced wing are determined with an interactive scheme in which the solutions of the inviscid flow equations are obtained from a panel method and the solutions of the viscous flow equations are obtained from an inverse three-dimensional finite-difference boundary-layer method. A new interaction law is used to couple the inviscid and viscous flow solutions. The newly developed LEWICE multielement code is applied to a high-lift configuration to calculate the ice shapes on the slat and on the main airfoil, and to a four-element airfoil. The application of the LEWICE wing code to the calculation of ice shapes on an MS-317 swept wing shows good agreement with measurements. The interactive boundary-layer method is applied to a tapered iced wing in order to study the effect of icing on the aerodynamic properties of the wing at several angles of attack.
Full wave simulations of helicon wave losses in the scrape-off-layer of the DIII-D tokamak
NASA Astrophysics Data System (ADS)
Lau, Cornwall; Jaeger, Fred; Berry, Lee; Bertelli, Nicola; Pinsker, Robert
2017-10-01
Helicon waves have recently been proposed as an off-axis current drive actuator for DIII-D. Previous modeling using the hot plasma full wave code AORSA has shown good agreement with the ray tracing code GENRAY for helicon wave propagation and absorption in the core plasma. AORSA and a new, reduced finite-element model show that discrepancies between ray tracing and full wave calculations occur in the scrape-off-layer (SOL), especially at high densities. The reduced model is much faster than AORSA, reproduces most of the important features of the AORSA model, and allows for larger parametric scans and the easy use of arbitrary tokamak geometry. Results of the full wave codes AORSA and COMSOL are shown for helicon wave losses in the SOL over a large range of parameters, such as SOL density profiles, n||, radial and vertical locations of the antenna, and different tokamak vessel geometries. This work was supported by DE-AC05-00OR22725, DE-AC02-09CH11466, and DE-FC02-04ER54698.
NASA Astrophysics Data System (ADS)
Giovannoli, E.; Buat, V.
2013-03-01
We use the code CIGALE (Code Investigating Galaxies Emission; Burgarella et al. 2005; Noll et al. 2009), which provides physical information about galaxies by fitting their UV (ultraviolet)-to-IR (infrared) spectral energy distribution (SED). CIGALE is based on the use of a UV-optical stellar SED plus a dust IR-emitting component. We study a sample of 136 Luminous Infrared Galaxies (LIRGs) at z~0.7 in the ECDF-S previously studied in Giovannoli et al. (2011). We focus on how well the empirical Dale & Helou (2002) templates reproduce the observed SEDs of the LIRGs. Fig. 1 shows the total infrared luminosity (L_IR) provided by CIGALE using the 64 templates (x axis) and using 2 templates (y axis) representative of the whole sample. Despite the larger dispersion when only 1 or 2 Herschel data points are available, the agreement between both values is good, with Delta log L_IR = 0.0013 ± 0.045 dex. We conclude that 2 IR SEDs alone can be used to determine the L_IR of LIRGs at z~0.7 in an SED-fitting procedure.
New beam monitoring tool for radiobiology experiments at the cyclotron ARRONAX.
Schwob, L; Koumeir, C; Servagent, N; Cherel, M; Guertin, A; Haddad, F; Métivier, V; Michel, N; Poirier, F; Rahmani, A; Varmenot, N
2015-09-01
The ARRONAX cyclotron is able to deliver alpha particles at 68 MeV. In the frame of radiobiological research, a new method is studied to infer in situ the deposited dose: it is based on online measurement of the bremsstrahlung (>1 keV) produced by the interaction of the incident particles with the medium. Experiments are made by bombarding water-equivalent poly(methyl methacrylate) (PMMA) targets in order to characterise this continuous X-ray spectrum. The intensity of the bremsstrahlung spectrum allows for beam monitoring. A simulation code for the bremsstrahlung has been built, and good agreement is found with the experimental spectra. With this simulation, it is possible to predict the sensitivity of the method: it varies with the target thickness, showing good sensitivity for thin targets (<1000 µm) and saturation for thicker ones. The bremsstrahlung spectrum also shows sensitivity to the target's chemical composition.
Pressure wave propagation studies for oscillating cascades
NASA Technical Reports Server (NTRS)
Huff, Dennis L.
1992-01-01
The unsteady flow field around an oscillating cascade of flat plates is studied using a time marching Euler code. Exact solutions based on linear theory serve as model problems to study pressure wave propagation in the numerical solution. The importance of using proper unsteady boundary conditions, grid resolution, and time step is demonstrated. Results show that an approximate non-reflecting boundary condition based on linear theory does a good job of minimizing reflections from the inflow and outflow boundaries and allows the placement of the boundaries to be closer than cases using reflective boundary conditions. Stretching the boundary to dampen the unsteady waves is another way to minimize reflections. Grid clustering near the plates does a better job of capturing the unsteady flow field than cases using uniform grids as long as the CFL number is less than one for a sufficient portion of the grid. Results for various stagger angles and oscillation frequencies show good agreement with linear theory as long as the grid is properly resolved.
NASA Technical Reports Server (NTRS)
Gardner, Kevin D.; Liu, Jong-Shang; Murthy, Durbha V.; Kruse, Marlin J.; James, Darrell
1999-01-01
AlliedSignal Engines, in cooperation with NASA GRC (National Aeronautics and Space Administration Glenn Research Center), completed an evaluation of recently-developed aeroelastic computer codes using test cases from the AlliedSignal Engines fan blisk and turbine databases. Test data included strain gage, performance, and steady-state pressure information obtained for conditions where synchronous or flutter vibratory conditions were found to occur. Aeroelastic codes evaluated included quasi 3-D UNSFLO (MIT Developed/AE Modified, Quasi 3-D Aeroelastic Computer Code), 2-D FREPS (NASA-Developed Forced Response Prediction System Aeroelastic Computer Code), and 3-D TURBO-AE (NASA/Mississippi State University Developed 3-D Aeroelastic Computer Code). Unsteady pressure predictions for the turbine test case were used to evaluate the forced response prediction capabilities of each of the three aeroelastic codes. Additionally, one of the fan flutter cases was evaluated using TURBO-AE. The UNSFLO and FREPS evaluation predictions showed good agreement with the experimental test data trends, but quantitative improvements are needed. UNSFLO over-predicted turbine blade response reductions, while FREPS under-predicted them. The inviscid TURBO-AE turbine analysis predicted no discernible blade response reduction, indicating the necessity of including viscous effects for this test case. For the TURBO-AE fan blisk test case, significant effort was expended getting the viscous version of the code to give converged steady flow solutions for the transonic flow conditions. Once converged, the steady solutions provided an excellent match with test data and the calibrated DAWES (AlliedSignal 3-D Viscous Steady Flow CFD Solver). However, efforts expended establishing quality steady-state solutions prevented exercising the unsteady portion of the TURBO-AE code during the present program. AlliedSignal recommends that unsteady pressure measurement data be obtained for both test cases examined for use in aeroelastic code validation.
NASA Technical Reports Server (NTRS)
Rhode, M. N.; Engelund, Walter C.; Mendenhall, Michael R.
1995-01-01
Experimental longitudinal and lateral-directional aerodynamic characteristics were obtained for the Pegasus and Pegasus XL configurations over a Mach number range from 1.6 to 6 and angles of attack from -4 to +24 degrees. Angle of sideslip was varied from -6 to +6 degrees, and control surfaces were deflected to obtain elevon, aileron, and rudder effectiveness. Experimental data for the Pegasus configuration are compared with engineering code predictions performed by Nielsen Engineering & Research, Inc. (NEAR) in the aerodynamic design of the Pegasus vehicle, and with results from the Aerodynamic Preliminary Analysis System (APAS) code. Comparisons of experimental results are also made with longitudinal flight data from Flight #2 of the Pegasus vehicle. Results show that the longitudinal aerodynamic characteristics of the Pegasus and Pegasus XL configurations are similar, having the same lift-curve slope and drag levels across the Mach number range. Both configurations are longitudinally stable, with stability decreasing towards neutral levels as Mach number increases. Directional stability is negative at moderate to high angles of attack due to separated flow over the vertical tail. Dihedral effect is positive for both configurations, but is reduced 30-50 percent for the Pegasus XL configuration because of the horizontal tail anhedral. Predicted longitudinal characteristics and both longitudinal and lateral-directional control effectiveness are generally in good agreement with experiment. Due to the complex leeside flowfield, lateral-directional characteristics are not as well predicted by the engineering codes. Experiment and flight data are in good agreement across the Mach number range.
Decisions: "Carltona" and the CUC Code
ERIC Educational Resources Information Center
Evans, G. R.
2006-01-01
The Committee of University Chairmen publishes a code of good practice designed, among other things, to ensure clarity about the authority by which decisions are taken on behalf of universities, subordinate domestic legislation is created, and the exercise of discretion is regulated. In Carltona Ltd. v. Commissioners of Works [1943] 2 All ER 560 AC the…
Implementation of algebraic stress models in a general 3-D Navier-Stokes method (PAB3D)
NASA Technical Reports Server (NTRS)
Abdol-Hamid, Khaled S.
1995-01-01
A three-dimensional multiblock Navier-Stokes code, PAB3D, which was developed for propulsion integration and general aerodynamic analysis, has been used extensively by NASA Langley and other organizations to perform both internal (exhaust) and external flow analysis of complex aircraft configurations. The code was designed to solve the simplified Reynolds-averaged Navier-Stokes equations. A two-equation k-epsilon turbulence model has been used with considerable success, especially for attached flows. Accurately predicting transonic shock wave location and pressure recovery in separated flow regions has been more difficult. Two algebraic Reynolds stress models (ASMs) have recently been implemented in the code that greatly improve its ability to predict these difficult flow conditions. Good agreement with Direct Numerical Simulation (DNS) for a subsonic flat plate was achieved with the ASMs developed by Shih, Zhu, and Lumley and by Gatski and Speziale. Good predictions were also achieved at subsonic and transonic Mach numbers for shock location and trailing-edge boattail pressure recovery on a single-engine afterbody/nozzle model.
A simple code for use in shielding and radiation dosage analyses
NASA Technical Reports Server (NTRS)
Wan, C. C.
1972-01-01
A simple code for use in analyses of gamma radiation effects in laminated materials is described. Simple, good geometry is assumed, so that all multiple-collision and scattering events are excluded from consideration. The code is capable of handling laminates of up to six layers; for laminates of more than six layers, the same code may be used to incorporate two additional layers at a time, making use of punched-tape outputs from previous computations on all preceding layers. Spectra of the attenuated radiation are obtained as printed output and, if desired, as punched-tape output.
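Under the good-geometry assumption stated above, the uncollided flux through a laminate is just a product of per-layer exponential attenuation factors. A minimal sketch, with made-up attenuation coefficients rather than values from the report:

```python
# Good-geometry (narrow-beam) gamma attenuation through a laminate:
# transmitted fraction = exp(-sum of mu_i * t_i over the layers).
import math

def uncollided_fraction(layers):
    """layers: list of (mu, t) pairs: attenuation coefficient (1/cm), thickness (cm)."""
    return math.exp(-sum(mu * t for mu, t in layers))

# e.g., a three-layer laminate at some gamma energy (illustrative mu values)
laminate = [(0.06, 2.0),    # layer 1
            (0.45, 0.5),    # layer 2
            (0.11, 1.0)]    # layer 3
print(uncollided_fraction(laminate))   # fraction transmitted unscattered
```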
A hardware-oriented concurrent TZ search algorithm for High-Efficiency Video Coding
NASA Astrophysics Data System (ADS)
Doan, Nghia; Kim, Tae Sung; Rhee, Chae Eun; Lee, Hyuk-Jae
2017-12-01
High-Efficiency Video Coding (HEVC) is the latest video coding standard, in which the compression performance is double that of its predecessor, the H.264/AVC standard, while the video quality remains unchanged. In HEVC, the test zone (TZ) search algorithm is widely used for integer motion estimation because it finds a good-quality motion vector with a relatively small amount of computation. However, the complex computation structure of the TZ search algorithm makes it difficult to implement in hardware. This paper proposes a new integer motion estimation algorithm designed for hardware execution, obtained by modifying the conventional TZ search to allow parallel motion estimation of all prediction unit (PU) partitions. The algorithm consists of three phases: zonal, raster, and refinement searches. At the beginning of each phase, the algorithm obtains the search points required by the original TZ search for all PU partitions in a coding unit (CU). All redundant search points are then removed prior to the estimation of the motion costs, and the best search points are selected for all PUs. Compared to the conventional TZ search algorithm, experimental results show that the proposed algorithm changes the Bjøntegaard Delta bitrate (BD-BR) by only 0.84% while reducing the computational complexity by 54.54%.
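A small sketch of the deduplication idea at the heart of the proposed algorithm: gather the search points requested by every PU partition in a CU, drop duplicates so each motion cost is evaluated once, then let each PU select its own best point. The cost function and point patterns below are placeholders, not the hardware design.

```python
# Shared search-point evaluation across PU partitions in one phase.
def concurrent_phase(pu_points, cost):
    """pu_points: {pu_id: [(dx, dy), ...]} candidate points per partition."""
    unique = {p for pts in pu_points.values() for p in pts}
    cost_of = {p: cost(p) for p in unique}        # each point costed only once
    return {pu: min(pts, key=cost_of.get) for pu, pts in pu_points.items()}

# toy example: two PUs share most of their diamond-pattern points
pts = [(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)]
pu_points = {"2NxN_0": pts, "2NxN_1": pts + [(2, 0)]}
best = concurrent_phase(pu_points, cost=lambda p: abs(p[0] - 2) + abs(p[1]))
print(best)   # {'2NxN_0': (1, 0), '2NxN_1': (2, 0)}
```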
Positron follow-up in liquid water: I. A new Monte Carlo track-structure code.
Champion, C; Le Loirec, C
2006-04-07
When biological matter is irradiated by charged particles, a wide variety of interactions occur, leading to a deep modification of the cellular environment. To understand the fine structure of the microscopic distribution of energy deposits, Monte Carlo event-by-event simulations are particularly suitable. However, the development of these track-structure codes requires accurate interaction cross sections for all the electronic processes: ionization, excitation, positronium formation and even elastic scattering. Under these conditions, we have recently developed a Monte Carlo code for positrons in water, the latter being commonly used to simulate the biological medium. All the processes are studied in detail via theoretical differential and total cross-section calculations performed using partial wave methods. Comparisons with existing theoretical and experimental data in terms of stopping powers, mean energy transfers and ranges show very good agreement. Moreover, thanks to the theoretical description of positronium formation, we have access, for the first time, to the complete kinematics of the electron capture process. The present Monte Carlo code is thus able to describe the detailed positronium history, which will provide useful information for medical imaging (such as positron emission tomography), where improvements are needed to define tumoural volumes as accurately as possible.
Hybrid parallel code acceleration methods in full-core reactor physics calculations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Courau, T.; Plagne, L.; Ponicot, A.
2012-07-01
When dealing with nuclear reactor calculation schemes, the need for three-dimensional (3D) transport-based reference solutions is essential for both validation and optimization purposes. Considering a benchmark problem, this work investigates the potential of discrete ordinates (Sn) transport methods applied to 3D pressurized water reactor (PWR) full-core calculations. First, the benchmark problem is described. It involves a pin-by-pin description of a 3D PWR first core, and uses an 8-group cross-section library prepared with the DRAGON cell code. Then, a convergence analysis is performed using the PENTRAN parallel Sn Cartesian code. It discusses the spatial refinement and the associated angular quadrature required to properly describe the problem physics. It also shows that initializing the Sn solution with the EDF SPN solver COCAGNE reduces the number of iterations required to converge by nearly a factor of 6. Using a best-estimate model, PENTRAN results are then compared to multigroup Monte Carlo results obtained with the MCNP5 code. Good consistency is observed between the two methods (Sn and Monte Carlo), with discrepancies of less than 25 pcm for k-eff, and less than 2.1% and 1.6% for the flux at the pin-cell level and for the pin-power distribution, respectively. (authors)
NASA Astrophysics Data System (ADS)
Mielikainen, Jarno; Huang, Bormin; Huang, Allen H.
2014-10-01
The Purdue-Lin scheme is a relatively sophisticated microphysics scheme in the Weather Research and Forecasting (WRF) model. The scheme includes six classes of hydrometeors: water vapor, cloud water, rain, cloud ice, snow and graupel. The scheme is very suitable for massively parallel computation as there are no interactions among horizontal grid points. In this paper, we accelerate the Purdue-Lin scheme using Intel Many Integrated Core (MIC) architecture hardware. The Intel Xeon Phi is a high-performance coprocessor with up to 61 cores, connected to a CPU via the PCI Express (PCIe) bus. We discuss in detail the code optimization issues encountered while tuning the Purdue-Lin microphysics Fortran code for the Xeon Phi. In particular, achieving good performance required utilizing multiple cores, exploiting the wide vector operations, and making efficient use of memory. The results show that the optimizations improved performance of the original code on a Xeon Phi 5110P by a factor of 4.2x. Furthermore, the same optimizations improved performance on an Intel Xeon E5-2603 CPU by a factor of 1.2x compared to the original code.
Structural Effects on the Slamming Pressures of High-Speed Planing Craft
NASA Astrophysics Data System (ADS)
Ikeda, Christine; Taravella, Brandon; Judge, Carolyn
2015-11-01
High-speed planing craft are subjected to repeated slamming events in waves that can be very extreme depending on the wave topography, impact angle of the ship, forward speed of the ship, encounter angle, and height out of the water. The current work examines this fluid-structure interaction problem through the use of wedge drop experiments and a CFD code. In the first set of experiments, a rigid 20-degree deadrise angle wedge was dropped from a range of heights (0 ≤ H ≤ 0.6 m) while pressures and accelerations of the slam event were measured. The second set of experiments involved a flexible-bottom 15-degree deadrise angle wedge dropped from the same range of heights; here the pressures, accelerations, and strain field were measured. Both experiments are compared with a non-linear boundary value flat cylinder theory code in order to compare the pressure loading. The code assumes a rigid structure; accordingly, the code and the first experiment are in good agreement. The second experiment shows pressure magnitudes lower than the predictions due to the energy required to deform the structure. Funding from University of New Orleans Office of Research and Sponsored Programs and the Office of Naval Research.
Distributed Coding/Decoding Complexity in Video Sensor Networks
Cordeiro, Paulo J.; Assunção, Pedro
2012-01-01
Video Sensor Networks (VSNs) are recent communication infrastructures used to capture and transmit dense visual information from an application context. In such large scale environments which include video coding, transmission and display/storage, there are several open problems to overcome in practical implementations. This paper addresses the most relevant challenges posed by VSNs, namely stringent bandwidth usage and processing time/power constraints. In particular, the paper proposes a novel VSN architecture where large sets of visual sensors with embedded processors are used for compression and transmission of coded streams to gateways, which in turn transrate the incoming streams and adapt them to the variable complexity requirements of both the sensor encoders and end-user decoder terminals. Such gateways provide real-time transcoding functionalities for bandwidth adaptation and coding/decoding complexity distribution by transferring the most complex video encoding/decoding tasks to the transcoding gateway at the expense of a limited increase in bit rate. Then, a method to reduce the decoding complexity, suitable for system-on-chip implementation, is proposed to operate at the transcoding gateway whenever decoders with constrained resources are targeted. The results show that the proposed method achieves good performance and its inclusion into the VSN infrastructure provides an additional level of complexity control functionality. PMID:22736972
A local-circulation model for Darrieus vertical-axis wind turbines
NASA Astrophysics Data System (ADS)
Masse, B.
1986-04-01
A new computational model for the aerodynamics of the vertical-axis wind turbine is presented. Based on the local-circulation method generalized for curved blades, combined with a wake model for the vertical-axis wind turbine, it differs markedly from current models based on variations in the streamtube momentum and vortex models using the lifting-line theory. A computer code has been developed to calculate the loads and performance of the Darrieus vertical-axis wind turbine. The results show good agreement with experimental data and compare well with other methods.
International business communications via Intelsat K-band transponders
NASA Astrophysics Data System (ADS)
Hagmann, W.; Rhodes, S.; Fang, R.
This paper discusses how the transponder throughput and the required earth station HPA power in the Intelsat Business Services Network vary as a function of coding rate and required fade margin. The results indicate that transponder throughputs of 40 to 50 Mbit/s are achievable. A comparison of time domain simulation results with results based on a straightforward link analysis shows that the link analysis results may be fairly optimistic if the satellite traveling wave tube amplifier (TWTA) is operated near saturation; however, there is good agreement for large backoffs.
Normative lessons: codes of conduct, self-regulation and the law.
Parker, Malcolm H
2010-06-07
Good medical practice: a code of conduct for doctors in Australia provides uniform standards to be applied in relation to complaints about doctors to the new Medical Board of Australia. The draft Code was criticised for being prescriptive. The final Code employs apparently less authoritative wording than the draft Code, but the implicit obligations it contains are no less prescriptive. Although the draft Code was thought to potentially undermine trust in doctors, and stifle professional judgement in relation to individual patients, its general obligations always allowed for flexibility of application, depending on the circumstances of individual patients. Professional codes may contain some aspirational statements, but they always contain authoritative ones, and they share this feature with legal codes. In successfully diluting the apparent prescriptivity of the draft Code, the profession has lost an opportunity to demonstrate its commitment to the raison d'etre of self-regulation - the protection of patients. Professional codes are not opportunities for reflection, consideration and debate, but are outcomes of these activities.
What Good Are Warfare Models?
Anger, Thomas E.
1981-05-01
Professional Paper 306, Center for Naval Analyses. ... at least flows from a life-or-death incentive to make good guesses when choosing weapons, forces, or strategies. It is not surprising, however, that ...
Comparing TCV experimental VDE responses with DINA code simulations
NASA Astrophysics Data System (ADS)
Favez, J.-Y.; Khayrutdinov, R. R.; Lister, J. B.; Lukash, V. E.
2002-02-01
The DINA free-boundary equilibrium simulation code has been implemented for TCV, including the full TCV feedback and diagnostic systems. First results showed good agreement with control coil perturbation experiments and correctly reproduced certain non-linear features in the experimental measurements. The latest DINA code simulations, presented in this paper, exploit discharges with different cross-sectional shapes and different vertical instability growth rates which were subjected to controlled vertical displacement events (VDEs), extending previous work with the DINA code on the DIII-D tokamak. The height of the TCV vessel allows observation of the non-linear evolution of the VDE growth rate as regions of different vertical field decay index are crossed. The vertical movement of the plasma is found to be well modelled. For most experiments, DINA reproduces the S-shape of the vertical displacement in TCV with excellent precision. This behaviour cannot be modelled using linear time-independent models because of the predominantly exponential shape imposed by the unstable pole of any such model. The other most common equilibrium parameters, namely the plasma current Ip, the elongation κ, the triangularity δ, the safety factor q, the poloidal beta βp (the ratio between the average plasma kinetic pressure and the pressure of the poloidal magnetic field at the plasma edge), and the internal self-inductance li, also show acceptable agreement. The evolution of the growth rate γ is estimated and compared with the evolution of the closed-loop growth rate calculated with the RZIP linear model, confirming the origin of the observed behaviour.
77 FR 32901 - State Enforcement of Household Goods Consumer Protection
Federal Register 2010, 2011, 2012, 2013, 2014
2012-06-04
... enforce certain consumer protection provisions of Title 49 of the United States Code (U.S.C.) and related... bring civil actions in the U.S. district courts to enforce the consumer protection provisions that apply..., 386, and 387 State Enforcement of Household Goods Consumer Protection AGENCY: Federal Motor Carrier...
What Good Predictors of Marijuana Use Are Good For: A Synthesis of Research.
ERIC Educational Resources Information Center
Derzon, James H.; Lipsey, Mark W.
1999-01-01
Analyzes correlates of marijuana use based on 3,690 effect sizes coded from 86 prospective longitudinal studies. Summarizes findings on strength of relationships for categorizing predictor variables, and implications of these relationships. Findings are relevant for intervention programmers and policymakers since they identify characteristics of…
Visual and Auditory Memory: Relationships to Reading Achievement.
ERIC Educational Resources Information Center
Bruning, Roger H.; And Others
1978-01-01
Good and poor readers' visual and auditory memory were tested. No group differences existed for single mode presentation in recognition frequency or latency. With multimodal presentation, good readers had faster latencies. Dual coding and self-terminating memory search hypotheses were supported. Implications for the reading process and reading…
28 CFR 2.6 - Withheld and forfeited good time.
Code of Federal Regulations, 2011 CFR
2011-07-01
Title 28, Judicial Administration, Vol. 1 (revised as of 2011-07-01): § 2.6 Withheld and forfeited good time. DEPARTMENT OF JUSTICE, PAROLE, RELEASE, SUPERVISION AND RECOMMITMENT OF PRISONERS, YOUTH OFFENDERS, AND JUVENILE DELINQUENTS; United States Code Prisoners and Parolees, § 2.6.
28 CFR 2.6 - Withheld and forfeited good time.
Code of Federal Regulations, 2010 CFR
2010-07-01
Title 28, Judicial Administration, Vol. 1 (revised as of 2010-07-01): § 2.6 Withheld and forfeited good time. DEPARTMENT OF JUSTICE, PAROLE, RELEASE, SUPERVISION AND RECOMMITMENT OF PRISONERS, YOUTH OFFENDERS, AND JUVENILE DELINQUENTS; United States Code Prisoners and Parolees, § 2.6.
NASA Technical Reports Server (NTRS)
Muffoletto, A. J.
1982-01-01
An aerodynamic computer code, capable of predicting unsteady normal-force and pitching-moment (C sub N and C sub m) values for an airfoil undergoing dynamic stall, is used to predict the amplitudes and frequencies of a wing undergoing torsional stall flutter. The code, developed at United Technologies Research Corporation (UTRC), is an empirical prediction method designed to yield unsteady values of normal force and moment, given the airfoil's static coefficient characteristics and the unsteady aerodynamic values alpha, A and B. In this experiment, conducted in the PSU 4' x 5' subsonic wind tunnel, the wing's elastic axis, torsional spring constant and initial angle of attack were varied, and the oscillation amplitudes and frequencies of the wing undergoing torsional stall flutter were recorded. These experimental values show only fair agreement with the predicted responses. Predictions tend to be good at low velocities and rather poor at higher velocities.
Wolf-Rayet stars, black holes and the first detected gravitational wave source
NASA Astrophysics Data System (ADS)
Bogomazov, A. I.; Cherepashchuk, A. M.; Lipunov, V. M.; Tutukov, A. V.
2018-01-01
The recently discovered burst of gravitational waves GW150914 provides a good new opportunity to verify the current view on the evolution of close binary stars. Modern population synthesis codes help to study this evolution from two main-sequence stars up to the formation of the two final remnants: degenerate dwarfs, neutron stars or black holes (Masevich and Tutukov, 1988). To study the evolution of the GW150914 progenitor we use the "Scenario Machine" code presented by Lipunov et al. (1996). The scenario modelling conducted in this study allowed us to describe the evolution of systems for which the final stage is a massive BH+BH merger. We find that the initial mass of the primary component can be 100-140 M⊙ and the initial separation of the components can be 50-350 R⊙. Our calculations show the plausibility of modern evolutionary scenarios for binary stars and of the population synthesis modelling based on them.
Modelling of thermal shock experiments of carbon based materials in JUDITH
NASA Astrophysics Data System (ADS)
Ogorodnikova, O. V.; Pestchanyi, S.; Koza, Y.; Linke, J.
2005-03-01
The interaction of hot plasma with material in fusion devices can result in material erosion and irreversible damage. Carbon-based materials are proposed for the ITER divertor armour. To simulate carbon erosion under high heat fluxes, electron beam heating in the JUDITH facility has been used. In this paper, carbon erosion under energetic electron impact is modelled with the 3D thermomechanics code PEGASUS-3D. The code is based on crack generation induced by thermal stress. The particle emission observed in thermal shock experiments results from the breaking of bonds between grains caused by thermal stress. The comparison of calculations with experimental data from JUDITH shows good agreement for various incident power densities and pulse durations. A realistic mean failure stress has been found. Pre-heating of test specimens results in earlier onset of brittle destruction and enhanced particle loss, in agreement with experiments.
Stability properties and fast ion confinement of hybrid tokamak plasma configurations
NASA Astrophysics Data System (ADS)
Graves, J. P.; Brunetti, D.; Pfefferle, D.; Faustin, J. M. P.; Cooper, W. A.; Kleiner, A.; Lanthaler, S.; Patten, H. W.; Raghunathan, M.
2015-11-01
In hybrid scenarios with flat q just above unity, extremely fast-growing tearing modes are born from toroidal sidebands of the near-resonant ideal internal kink mode. New scalings of the growth rate with the magnetic Reynolds number arise from two-fluid effects and sheared toroidal flow. Non-linear saturated 1/1 dominant modes obtained from initial-value stability calculations agree with the amplitude of the 1/1 component of a 3D VMEC equilibrium calculation. A viable and realistic equilibrium representation of such internal kink modes allows fast-ion studies to be accurately established. Calculations of MAST neutral beam ion distributions using the VENUS-LEVIS code show very good agreement with the observed impairment of core fast-ion confinement when long-lived modes occur. The 3D ICRH code SCENIC also enables the establishment of minority RF distributions in hybrid plasmas susceptible to saturated near-resonant internal kink modes.
NASA Technical Reports Server (NTRS)
Om, Deepak; Childs, Morris E.
1987-01-01
An experimental study is described in which detailed wall pressure measurements were obtained for compressible three-dimensional unseparated boundary layer flow in annular diffusers with and without normal shock waves. Detailed mean flow-field data were also obtained for the diffuser flow without a shock wave. Two diffuser flows with shock waves were investigated: in one case the normal shock existed over the complete annulus, whereas in the second case the shock existed over only part of the annulus. The data obtained can be used to validate computational codes for predicting such flow fields. The details of the flow field without the shock wave show flow reversal in the circumferential direction on both inner and outer surfaces. However, there is a lag in the flow reversal between the inner and the outer surfaces. This is an interesting feature of this flow and should be a good test for the computational codes.
NASA Technical Reports Server (NTRS)
Cebeci, T.; Chen, H. H.; Kaups, K.; Schimke, S.; Shin, J.
1992-01-01
A method for computing ice shapes along the leading edge of a wing and a method for predicting the resulting aerodynamic performance degradation due to icing are described. Ice shapes are computed using an extension of the LEWICE code, which was developed for airfoils. The aerodynamic properties of the iced wing are determined with an interactive scheme in which the solutions of the inviscid flow equations are obtained from a panel method and the solutions of the viscous flow equations are obtained from an inverse three-dimensional finite-difference boundary-layer method. A new interaction law is used to couple the inviscid and viscous flow solutions. The application of the LEWICE wing code to the calculation of ice shapes on an MS-317 swept wing shows good agreement with measurements. The interactive boundary-layer method is applied to a tapered iced wing in order to study the effect of icing on the aerodynamic properties of the wing at several angles of attack.
Viscous investigation of a flapping foil propulsor
NASA Astrophysics Data System (ADS)
Posri, Attapol; Phoemsapthawee, Surasak; Thaweewat, Nonthipat
2018-01-01
Inspired by the way fish propel themselves, a flapping-foil device has been devised as an alternative propulsion system for ships and boats. The performance of such a propulsor was previously investigated using a potential flow code. Those simulations showed that the device has high propulsive efficiency over a wide range of operation. However, potential flow gives good results only when flow separation is absent. At high flapping frequencies, flow separation can occur over short instants due to fluid viscosity and high angle of attack, which may reduce propulsive efficiency. A commercial CFD code based on the Lattice Boltzmann Method, XFlow, is therefore employed to investigate the viscous effect on the propulsive performance of the flapping foil. The viscous results agree well with the potential flow results, confirming the high efficiency of the propulsor. As expected, the viscous results show lower efficiency in the high-flapping-frequency zone.
Subspace Arrangement Codes and Cryptosystems
2011-05-09
A central problem of coding theory is finding codes that have a small number of digits (length) with a high number of codewords (dimension), as well as good error-correction properties.
Observations on Polar Coding with CRC-Aided List Decoding
2016-09-01
Polar codes are a new type of forward error correction (FEC) codes, introduced by Arikan in [1], in which he ... error correction (FEC) currently used and planned for use in Navy wireless communication systems. The project's results from FY14 and FY15 are ... good error-correction performance. We used the Tal/Vardy method of [5]. The polar encoder uses a row vector u of length N. Let uA be the subvector ...
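The row-vector encoder the abstract alludes to is the standard polar transform x = u·F^(⊗n) over GF(2), with F = [[1, 0], [1, 1]]. A minimal sketch under that standard construction (the information set A below is a made-up example, not a designed one):

    import numpy as np

    def polar_transform(u):
        """x = u * F^(kron n) over GF(2), no bit-reversal permutation.
        Iterative butterfly; len(u) must be a power of two."""
        x = np.asarray(u, dtype=np.uint8) % 2
        step = 1
        while step < len(x):
            for i in range(0, len(x), 2 * step):
                # upper half absorbs the XOR, lower half passes through
                x[i:i + step] ^= x[i + step:i + 2 * step]
            step *= 2
        return x

    # The abstract's uA is the subvector of u on the information set A;
    # the frozen positions (the complement of A) are fixed to zero.
    N, A = 8, [3, 5, 6, 7]            # hypothetical information set
    u = np.zeros(N, dtype=np.uint8)
    u[A] = [1, 0, 1, 1]               # uA: the information bits
    print(polar_transform(u))

In a real design the set A is chosen by a channel-construction method such as the Tal/Vardy algorithm cited above, which ranks the synthetic bit-channels by reliability.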
Radiation in Space and Its Control of Equilibrium Temperatures in the Solar System
NASA Technical Reports Server (NTRS)
Juhasz, Albert J.
2004-01-01
The problem of determining equilibrium temperatures for reradiating surfaces in space vacuum was analyzed and the resulting mathematical relationships were incorporated in a code to determine space sink temperatures in the solar system. A brief treatment of planetary atmospheres is also included. Temperature values obtained with the code are in good agreement with available spacecraft telemetry and meteorological measurements for Venus and Earth. The code has been used in the design of space power system radiators for future interplanetary missions.
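The underlying balance such a code evaluates equates absorbed solar power with reradiated power; for an isothermal sphere this reduces to T = (αS/4εσ)^(1/4). A sketch of that textbook relationship (the NASA code's actual models, e.g. for planetary albedo and atmospheres, are more elaborate):

    import math

    SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

    def sphere_equilibrium_temp(solar_flux, absorptivity=1.0, emissivity=1.0):
        """Equilibrium temperature of an isothermal sphere:
        absorbed = absorptivity * solar_flux * (pi r^2)
        radiated = emissivity * SIGMA * T^4 * (4 pi r^2),
        hence T = (absorptivity*solar_flux / (4*emissivity*SIGMA))**0.25."""
        return (absorptivity * solar_flux / (4.0 * emissivity * SIGMA)) ** 0.25

    # ~1361 W/m^2 at Earth's distance gives ~278 K for a black sphere.
    print(round(sphere_equilibrium_temp(1361.0)))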
TEMPEST code simulations of hydrogen distribution in reactor containment structures. Final report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Trent, D.S.; Eyler, L.L.
The mass transport version of the TEMPEST computer code was used to simulate hydrogen distribution in geometric configurations relevant to reactor containment structures. Predicted results of Battelle-Frankfurt hydrogen distribution tests 1 to 6, and 12 are presented. Agreement between predictions and experimental data is good. Best agreement is obtained using the k-epsilon turbulence model in TEMPEST in flow cases where turbulent diffusion and stable stratification are dominant mechanisms affecting transport. The code's general analysis capabilities are summarized.
Lattice Boltzmann Model of 3D Multiphase Flow in Artery Bifurcation Aneurysm Problem
Abas, Aizat; Mokhtar, N. Hafizah; Ishak, M. H. H.; Abdullah, M. Z.; Ho Tian, Ang
2016-01-01
This paper simulates and predicts the laminar flow inside the 3D aneurysm geometry, since the hemodynamic situation in the blood vessels is difficult to determine and visualize using standard imaging techniques, for example, magnetic resonance imaging (MRI). Three different types of Lattice Boltzmann (LB) models are computed, namely, single relaxation time (SRT), multiple relaxation time (MRT), and regularized BGK models. The results obtained using these different versions of the LB-based code will then be validated with ANSYS FLUENT, a commercially available finite volume- (FV-) based CFD solver. The simulated flow profiles that include velocity, pressure, and wall shear stress (WSS) are then compared between the two solvers. The predicted outcomes show that all the LB models are comparable and in good agreement with the FVM solver for complex blood flow simulation. The findings also show minor differences in their WSS profiles. The performance of the parallel implementation for each solver is also included and discussed in this paper. In terms of parallelization, it was shown that LBM-based code performed better in terms of the computation time required. PMID:27239221
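All three LB variants in the paper share the same collide-and-stream cycle and differ only in the collision operator. A minimal single-relaxation-time (SRT/BGK) D2Q9 sketch of that cycle on a periodic grid (geometry, boundary conditions, and the MRT/regularized operators are omitted):

    import numpy as np

    # D2Q9 lattice: discrete velocities and quadrature weights
    C = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
                  [1, 1], [-1, 1], [-1, -1], [1, -1]])
    W = np.array([4/9] + [1/9]*4 + [1/36]*4)

    def equilibrium(rho, u):
        """feq_i = w_i rho (1 + 3 c.u + 4.5 (c.u)^2 - 1.5 u.u)."""
        cu = np.einsum('id,xyd->xyi', C, u)
        uu = np.einsum('xyd,xyd->xy', u, u)
        return W * rho[..., None] * (1 + 3*cu + 4.5*cu**2 - 1.5*uu[..., None])

    def srt_step(f, tau):
        """One SRT (BGK) collide-and-stream step on a fully periodic grid."""
        rho = f.sum(axis=-1)
        u = np.einsum('xyi,id->xyd', f, C) / rho[..., None]
        f = f + (equilibrium(rho, u) - f) / tau    # relax toward equilibrium
        for i, c in enumerate(C):                  # stream along each velocity
            f[..., i] = np.roll(f[..., i], shift=tuple(c), axis=(0, 1))
        return f

    # quiescent 64x64 fluid advanced a few steps
    f = np.broadcast_to(W, (64, 64, 9)).copy()
    for _ in range(10):
        f = srt_step(f, tau=0.8)

MRT replaces the single 1/tau relaxation with a per-moment relaxation matrix, which is what buys the extra stability the paper compares against.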
NASA Astrophysics Data System (ADS)
Fei, Huang; Xu-hong, Jin; Jun-ming, Lv; Xiao-li, Cheng
2016-11-01
An attempt has been made to analyze the impact of Martian atmosphere parameter uncertainties on entry vehicle aerodynamics in hypersonic rarefied conditions with a DSMC code. The code has been validated by comparing present computational results with Viking vehicle flight data. Then, by simulating flows around the Mars Science Laboratory, the impact of free-stream parameter uncertainties on aerodynamics is investigated. The validation results show that the present numerical approach agrees well with the Viking flight data. The physical and chemical properties of CO2 have a strong impact on the aerodynamics of Mars entry vehicles, so it is necessary to apply proper corrections to data obtained with an air model in hypersonic rarefied conditions, consistent with the conclusions drawn in the continuum regime. Uncertainties in free-stream density and velocity weakly influence the aerodynamics and pitching moment, and the aerodynamics appears to be little influenced by free-stream temperature, for which the maximum error is below 0.5%. The center-of-pressure position is not sensitive to the free-stream parameters.
Efficient burst image compression using H.265/HEVC
NASA Astrophysics Data System (ADS)
Roodaki-Lavasani, Hoda; Lainema, Jani
2014-02-01
New imaging use cases are emerging as more powerful camera hardware enters consumer markets. One family of such use cases is based on capturing multiple pictures instead of just one when taking a photograph. That kind of camera operation allows, for example, selecting the most successful shot from a sequence of images, showing what happened right before or after the shot was taken, or combining the shots by computational means to improve either the visible characteristics of the picture (such as dynamic range or focus) or its artistic aspects (e.g. by superimposing pictures on top of each other). Considering that photographic images are typically of high resolution and quality, and that such image bursts can consist of tens of individual pictures, an efficient compression algorithm is desired. However, traditional video coding approaches fail to provide the random access properties these use cases require to achieve near-instantaneous access to the pictures in the coded sequence. That feature is critical to allow users to browse the pictures in an arbitrary order, or imaging algorithms to extract desired pictures from the sequence quickly. This paper proposes coding structures that provide such random access properties while achieving coding efficiency superior to existing image coders. The results indicate that using the HEVC video codec with a single reference picture fixed for the whole sequence can achieve nearly as good compression as traditional IPPP coding structures. It is also shown that the selection of the reference frame can further improve the coding efficiency.
Evaluation of Force Transfer Around Openings - Experimental and Analytical Studies
Borjen Yeh; Tom Skaggs; Frank Lam; Minghao Li; Douglas Rammer; James Wacker
2011-01-01
Wood structural panel (WSP) sheathed shear walls and diaphragms are the primary lateral-load-resisting elements in wood-frame construction. The historical performance of light-frame structures in North America is very good due, in part, to model building codes that are designed to safeguard life safety. These model building codes have spawned continual improvement and...
Modelling Force Transfer Around Openings of Full-Scale Shear Walls
Tom Skaggs; Borjen Yeh; Frank Lam; Minghao Li; Doug Rammer; James Wacker
2011-01-01
Wood structural panel (WSP) sheathed shear walls and diaphragms are the primary lateral-load-resisting elements in wood-frame construction. The historical performance of light-frame structures in North America has been very good due, in part, to model building codes that are designed to preserve life safety. These model building codes have spawned continual improvement...
ERIC Educational Resources Information Center
Buckland, Roger
2004-01-01
The Lambert Model Code of Governance proposes to institutionalise the dominance of governors from commercial and industrial organisations as core members of compact and effective boards controlling UK universities. It is the latest expression of a fashion for viewing university governance as an overly-simple example of an obsolete system, where…
The Apparel Industry and Codes of Conduct: A Solution to the International Child Labor Problem?
ERIC Educational Resources Information Center
Bureau of International Labor Affairs (DOL), Washington, DC.
Corporate codes of conduct prohibiting the use of child labor are becoming more common as consumers are increasingly calling upon companies to take responsibility for the conditions under which the goods they sell are manufactured. This report (the third volume in the Bureau of International Labor Affairs' international child labor series) details…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kurosu, K; Department of Medical Physics ' Engineering, Osaka University Graduate School of Medicine, Osaka; Takashina, M
Purpose: Monte Carlo codes are becoming important tools for proton beam dosimetry. However, the relationships between the customizing parameters and the percentage depth dose (PDD) for the GATE and PHITS codes have not been reported; they are studied here for PDD and proton range, with comparison to the FLUKA code and to experimental data. Methods: The beam delivery system of the Indiana University Health Proton Therapy Center was modeled for the uniform scanning beam in FLUKA and transferred identically into GATE and PHITS. This computational model was built from the blueprint and validated with the commissioning data. The three parameters evaluated are the maximum step size, the cut-off energy, and the physics and transport model. The dependence of the PDDs on the customizing parameters was compared with the published results of previous studies. Results: The optimal parameters for the simulation of the whole beam delivery system were defined by referring to the calculation results obtained with each parameter. Although the PDDs from FLUKA and the experimental data show good agreement, those of GATE and PHITS obtained with our optimal parameters show a minor discrepancy. The measured proton range R90 was 269.37 mm, compared to calculated ranges of 269.63 mm, 268.96 mm, and 270.85 mm with FLUKA, GATE and PHITS, respectively. Conclusion: We evaluated the dependence of the PDD results obtained with the GATE and PHITS general-purpose Monte Carlo codes on the customizing parameters, using the whole computational model of the treatment nozzle. The optimal parameters for the simulation were then defined by referring to the calculation results. The physics model, particle transport mechanics and the different geometry-based descriptions need accurate customization in the three simulation codes to agree with experimental data for artifact-free Monte Carlo simulation. This study was supported by Grants-in-Aid for Cancer Research (H22-3rd Term Cancer Control-General-043) from the Ministry of Health, Labor and Welfare of Japan, Grants-in-Aid for Scientific Research (No. 23791419), and the JSPS Core-to-Core program (No. 23003). The authors have no conflict of interest.
Clinical wisdom: the essential foundation of "good" nursing care.
Haggerty, Lois A; Grace, Pamela
2008-01-01
Clinical wisdom, an essential foundation of nursing care that provides for the "good" of individual patients while taking into account the common good, is a concept that is difficult to define and comprehend. However, understanding what constitutes clinical wisdom is essential for the education of the types of nurses who are most likely to provide leadership that is consistent with the goals of nursing as outlined in the 2005 Code of Ethics for Nurses of the International Council of Nurses and the 2001 Code of Ethics for Nurses With Interpretive Statements of the American Nurses Association. The three key elements of wisdom, derived from the psychology and philosophy literature, are (1) balancing and providing for the good of another and the common good, (2) the use of intellect and affect in problem solving, and (3) the demonstration of experience-based tacit knowing in problematic situations. We conceptualized clinical wisdom as a more specific variant of general wisdom by examining how the core elements described can be linked to wisdom for nursing practice. In doing so, the nature of clinical wisdom is clarified and strategies are suggested to assist nurse educators in developing wise nurses.
Numerical optimization of three-dimensional coils for NSTX-U
NASA Astrophysics Data System (ADS)
Lazerson, S. A.; Park, J.-K.; Logan, N.; Boozer, A.
2015-10-01
A tool for the calculation of optimal three-dimensional (3D) perturbative magnetic fields in tokamaks has been developed. The IPECOPT code builds upon the stellarator optimization code STELLOPT to allow optimization of linear ideal magnetohydrodynamic perturbed equilibria (IPEC). This tool has been applied to NSTX-U equilibria, addressing which fields are the most effective at driving NTV torques. The NTV torque calculation is performed by the PENT code. Optimization of the normal field spectrum shows that fields with n = 1 character can drive a large core torque. It is also shown that fields with n = 3 features are capable of driving edge torque and some core torque. Coil current optimization (using the planned in-vessel and existing RWM coils) on NSTX-U suggests the planned coil set is adequate for core and edge torque control. Comparison between error field correction experiments on DIII-D and the optimizer shows good agreement. Notice: This manuscript has been authored by Princeton University under Contract Number DE-AC02-09CH11466 with the U.S. Department of Energy. The publisher, by accepting the article for publication, acknowledges that the United States Government retains a non-exclusive, paid-up, irrevocable, world-wide license to publish or reproduce the published form of this manuscript, or allow others to do so, for United States Government purposes.
NASA Technical Reports Server (NTRS)
Lee, P. J.
1984-01-01
For rate 1/N convolutional codes, a recursive algorithm for finding the transfer function bound on bit error rate (BER) at the output of a Viterbi decoder is described. This technique is very fast and requires very little storage since all unnecessary operations are eliminated. Using this technique, we find and plot bounds on the BER performance of known codes of rate 1/2 with K ≤ 18 and rate 1/3 with K ≤ 14. When more than one reported code with the same parameters is known, we select the code that minimizes the required signal-to-noise ratio for a desired bit error rate of 10^-6. This criterion for determining the goodness of a code had previously been found to be more useful than the maximum free distance criterion, and it was used in the code search procedures for very short constraint length codes. This very efficient technique can also be used in searches for longer constraint length codes.
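For orientation, the transfer function bound being computed is the familiar union bound over the code's distance spectrum, P_b ≤ Σ_d B_d Q(√(2dR·Eb/N0)). A small sketch of the truncated form (the spectrum values below are the standard published ones for the K = 7, rate-1/2 code, quoted from memory):

    import math

    def q_func(x):
        """Gaussian tail Q(x)."""
        return 0.5 * math.erfc(x / math.sqrt(2.0))

    def viterbi_ber_bound(bit_weights, d_free, rate, ebno_db):
        """Truncated union bound on Viterbi-decoded BER over AWGN:
        Pb <= sum_d B_d * Q(sqrt(2 d R Eb/N0)), B_d = bit-weight spectrum."""
        ebno = 10.0 ** (ebno_db / 10.0)
        return sum(B * q_func(math.sqrt(2.0 * (d_free + i) * rate * ebno))
                   for i, B in enumerate(bit_weights))

    # K=7, rate-1/2 code (d_free = 10), first spectrum terms 36, 0, 211, ...
    print(viterbi_ber_bound([36, 0, 211, 0, 1404], d_free=10, rate=0.5,
                            ebno_db=5.0))

The recursive algorithm in the paper effectively accumulates the full (untruncated) spectrum contribution from the code's transfer function instead of truncating it by hand.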
Culture shock: Improving software quality
DOE Office of Scientific and Technical Information (OSTI.GOV)
de Jong, K.; Trauth, S.L.
1988-01-01
The concept of software quality can represent a significant shock to an individual who has been developing software for many years and who believes he or she has been doing a high quality job. The very idea that software includes lines of code and associated documentation is foreign and difficult to grasp, at best. Implementation of a software quality program hinges on the concept that software is a product whose quality needs improving. When this idea is introduced into a technical community that is largely ''self-taught'' and has been producing ''good'' software for some time, a fundamental understanding of the concepts associated with software is often weak. Software developers can react as if to say, ''What are you talking about? What do you mean I'm not doing a good job? I haven't gotten any complaints about my code yet!'' Coupling such surprise and resentment with the shock that software really is a product and software quality concepts do exist can fuel the volatility of these emotions. In this paper, we demonstrate that the concept of software quality can indeed pose a culture shock to developers. We also show that a ''typical'' quality assurance approach, that of imposing a standard and providing inspectors and auditors to assure its adherence, contributes to this shock and detracts from the very goal the approach should achieve. We offer an alternative, adopted through experience, to implement a software quality program: cooperative assistance. We show how cooperation, education, consultation and friendly assistance can overcome this culture shock. 3 refs.
NASA Technical Reports Server (NTRS)
Chang, C. Y.; Kwok, R.; Curlander, J. C.
1987-01-01
Five coding techniques in the spatial and transform domains have been evaluated for SAR image compression: linear three-point predictor (LTPP), block truncation coding (BTC), microadaptive picture sequencing (MAPS), adaptive discrete cosine transform (ADCT), and adaptive Hadamard transform (AHT). These techniques have been tested with Seasat data. Both LTPP and BTC spatial domain coding techniques provide very good performance at rates of 1-2 bits/pixel. The two transform techniques, ADCT and AHT, demonstrate the capability to compress the SAR imagery to less than 0.5 bits/pixel without visible artifacts. Tradeoffs such as the rate distortion performance, the computational complexity, the algorithm flexibility, and the controllability of compression ratios are also discussed.
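Of the five techniques, BTC is the simplest to make concrete. A sketch of the classic moment-preserving variant (a generic textbook formulation, not necessarily the exact coder evaluated here):

    import numpy as np

    def btc_block(block):
        """Block truncation coding of one block: transmit the block mean,
        standard deviation and a 1-bit-per-pixel bitmap; reconstruct with
        two levels chosen to preserve the first two sample moments."""
        block = np.asarray(block, dtype=float)
        mean, std = block.mean(), block.std()
        bitmap = block >= mean
        q, m = int(bitmap.sum()), block.size
        if q in (0, m):                         # flat block: one level
            return np.full(block.shape, mean)
        lo = mean - std * np.sqrt(q / (m - q))  # level below the mean
        hi = mean + std * np.sqrt((m - q) / q)  # level at/above the mean
        return np.where(bitmap, hi, lo)

    # A 4x4 block costs mean + std + 16 bitmap bits, i.e. about 2 bits/pixel.
    print(btc_block(np.arange(16.0).reshape(4, 4)))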
High-efficiency reconciliation for continuous variable quantum key distribution
NASA Astrophysics Data System (ADS)
Bai, Zengliang; Yang, Shenshen; Li, Yongmin
2017-04-01
Quantum key distribution (QKD) is the most mature application of quantum information technology. Information reconciliation is a crucial step in QKD and significantly affects the final secret key rates shared between the two legitimate parties. We analyze and compare various construction methods for low-density parity-check (LDPC) codes and design high-performance irregular LDPC codes with a block length of 10^6. Starting from these good codes and exploiting the slice reconciliation technique based on multilevel coding and multistage decoding, we realize high-efficiency Gaussian key reconciliation with efficiency higher than 95% for signal-to-noise ratios above 1. Our demonstrated method can be readily applied in continuous variable QKD.
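Reconciliation efficiency here is measured against the Shannon limit: β = R/C with C = ½ log2(1 + SNR) per real channel use. A quick sketch of that bookkeeping (the 0.48 bit/symbol aggregate rate below is an illustrative number, not taken from the paper):

    import math

    def reconciliation_efficiency(code_rate, snr):
        """beta = R / C, with C = 0.5*log2(1 + SNR) for the AWGN channel;
        beta near 1 means the reconciliation leaks little more than the
        theoretical minimum amount of information."""
        return code_rate / (0.5 * math.log2(1.0 + snr))

    # e.g. an aggregate slice rate of 0.48 bit/symbol at SNR = 1 gives 96%
    print(reconciliation_efficiency(0.48, 1.0))

In the slice scheme, the aggregate rate is summed over the per-slice LDPC codes of the multilevel coding stage, so β reflects the whole multistage decoder.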
NASA Rotor 37 CFD Code Validation: Glenn-HT Code
NASA Technical Reports Server (NTRS)
Ameri, Ali A.
2010-01-01
In order to advance the goals of NASA aeronautics programs, it is necessary to continuously evaluate and improve the computational tools used for research and design at NASA. One such code is the Glenn-HT code which is used at NASA Glenn Research Center (GRC) for turbomachinery computations. Although the code has been thoroughly validated for turbine heat transfer computations, it has not been utilized for compressors. In this work, Glenn-HT was used to compute the flow in a transonic compressor and comparisons were made to experimental data. The results presented here are in good agreement with this data. Most of the measures of performance are well within the measurement uncertainties and the exit profiles of interest agree with the experimental measurements.
Code of Federal Regulations, 2013 CFR
2013-10-01
... ARRANGEMENT, AND OTHER PROVISIONS FOR CERTAIN DANGEROUS CARGOES IN BULK; Portable Tanks, § 98.30-2 Definitions ... (Phone: (44) 020 7735 7611; Web site: http://www.imo.org.) (1) International Maritime Dangerous Goods (IMDG) Code, 2012 Edition, Sections 6.7.2 through 6.7.2.20.3, IBR approved for § 98.30-5. ...
Turkish Pre-Service Social Studies Teachers' Perceptions of "Good" Citizenship
ERIC Educational Resources Information Center
Yesilbursa, Cemil Cahit
2015-01-01
The current study explores Turkish pre-service social studies teachers' perceptions of "good" citizenship. The participants were 580 pre-service social studies teachers from 6 different universities in Turkey. The data were collected through an interview form having one open-ended question and analyzed according to open coding procedure.…
12 CFR 201.110 - Goods held by persons employed by owner.
Code of Federal Regulations, 2010 CFR
2010-01-01
... Board has taken into consideration the changes that have occurred in commercial law and practice since 1933. Modern commercial law, embodied in the Uniform Commercial Code, refers to “perfecting security interests” rather than “securing title” to goods. The Board believes that if, under State law, the issuance...
Ravinetto, Raffaella; De Nys, Katelijne; Boelaert, Marleen; Diro, Ermias; Meintjes, Graeme; Adoke, Yeka; Tagbor, Harry; Casteels, Minne
2015-12-30
Non-commercial clinical research plays an increasingly essential role in global health. Multiple partners join in international consortia that operate under the limited timeframe of a specific funding period. One organisation (the sponsor) designs and carries out the trial in collaboration with research partners, and is ultimately responsible for the trial's scientific, ethical, regulatory and legal aspects, while another organisation, generally in the North (the funder), provides the external funding and sets the funding conditions. Even though external funding mechanisms are key to most non-commercial research, dependence on an external funder's policies may heavily influence the choices of a sponsor. In addition, competition for the available external funds is intense, and non-commercial sponsors may not be in a position to discuss or refuse standard conditions set by a funder. To see whether the current definitions adequately address the intricacies of sponsorship in externally funded trials, we looked at how a "sponsor" of clinical trials is defined in selected international guidelines, with particular focus on international Good Clinical Practices codes, and in selected European and African regulations and legislation. Our limited analysis suggests that the sponsor definition from the 1995 WHO Good Clinical Practices code has been integrated as such into many legislations, guidelines and regulations, and that it is not adequate to cover today's reality of funding arrangements in global health, where the legal responsibility and the funding source are de facto split. In agreement with other groups, we suggest that the international Good Clinical Practices codes should be updated to reflect the reality of non-commercial clinical research. In particular, they should explicitly include the distinction between commercial and non-commercial sponsors, and provide guidance to non-commercial sponsors for negotiating with external funding agencies and other research counterparts. Non-commercial sponsors of clinical trials should certainly invest in the development of adequate legal, administrative and management skills. By acknowledging their role and specificities, and by providing them with adapted guidance, the international Good Clinical Practices codes would provide valuable guidance and support to non-commercial clinical research, whose relevance for global health is increasingly evident.
Rate-compatible punctured convolutional codes (RCPC codes) and their applications
NASA Astrophysics Data System (ADS)
Hagenauer, Joachim
1988-04-01
The concept of punctured convolutional codes is extended by puncturing a low-rate 1/N code periodically, with period P, to obtain a family of codes with rate P/(P + l), where l can be varied between 1 and (N - 1)P. A rate-compatibility restriction on the puncturing tables ensures that all code bits of the high-rate codes are used by the lower-rate codes. This allows transmission of incremental redundancy in ARQ/FEC (automatic repeat request/forward error correction) schemes and continuous rate variation to change from low to high error protection within a data frame. Families of RCPC codes with rates between 8/9 and 1/4 are given for memories M from 3 to 6 (8 to 64 trellis states), together with the relevant distance spectra. These codes are almost as good as the best known general convolutional codes of the respective rates. It is shown that the same Viterbi decoder can be used for all RCPC codes of the same M. The application of RCPC codes to hybrid ARQ/FEC schemes is discussed for Gaussian and Rayleigh fading channels using channel-state information to optimize throughput.
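A small sketch of the puncturing mechanics (the two tables are hypothetical illustrations for a rate-1/2 mother code with period P = 4, not taken from the paper's tables):

    import numpy as np

    def puncture(coded, table):
        """Periodic puncturing of a rate-1/N mother code's output.
        coded: (N, L) array of the N encoder output streams, L a multiple
        of the period P; table: (N, P) 0/1 array, 1 keeps a bit."""
        N, L = coded.shape
        P = table.shape[1]
        keep = np.tile(table.astype(bool), (1, L // P))
        # kept bits, row-major for simplicity (real systems interleave in time)
        return coded[keep]

    # Rate-compatible pair: the rate-4/5 table keeps a subset of the bits
    # kept by the rate-4/6 table, so ARQ can send only the increment.
    t_45 = np.array([[1, 1, 1, 1],
                     [1, 0, 0, 0]])   # keeps 5 of 8 bits -> rate 4/5 (l = 1)
    t_46 = np.array([[1, 1, 1, 1],
                     [1, 0, 1, 0]])   # keeps 6 of 8 bits -> rate 4/6 (l = 2)

The subset property is exactly the rate-compatibility restriction: lowering the rate only ever adds transmitted bits, never removes any, so previously sent bits remain useful at the decoder.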
NASA Technical Reports Server (NTRS)
Goldman, L. J.; Seasholtz, R. G.
1982-01-01
Experimental measurements of the velocity components in the blade-to-blade (axial-tangential) plane were obtained in an axial-flow turbine stator passage and were compared with calculations from three turbomachinery computer programs. The theoretical results were calculated with a quasi-three-dimensional inviscid code, a three-dimensional inviscid code, and a three-dimensional viscous code. Parameter estimation techniques and a particle dynamics calculation were used to assess the accuracy of the laser measurements, providing a rational basis for comparison of the experimental and theoretical results. The general agreement of the experimental data with the results from the two inviscid computer codes indicates the usefulness of these calculation procedures for turbomachinery blading. The comparison with the viscous code, while generally reasonable, was not as good as for the inviscid codes.
Labyrinth Seal Flutter Analysis and Test Validation in Support of Robust Rocket Engine Design
NASA Technical Reports Server (NTRS)
El-Aini, Yehia; Park, John; Frady, Greg; Nesman, Tom
2010-01-01
High energy-density turbomachines, like the SSME turbopumps, utilize labyrinth seals, also referred to as knife-edge seals, to control leakage flow. The pressure drop for such seals is an order of magnitude higher than for comparable jet engine seals. This is aggravated by the requirement of tight clearances, which can result in unfavorable fluid-structure interaction of the seal system (seal flutter). To demonstrate these characteristics, a benchmark case of a High Pressure Oxygen Turbopump (HPOTP) outlet labyrinth seal was studied in detail. First, an analytical assessment of the seal stability was conducted using a Pratt & Whitney legacy seal flutter code. Sensitivity parameters including pressure drop, rotor-to-stator running clearances and cavity volumes were examined, and modeling strategies were established. Second, a concurrent experimental investigation was undertaken to validate the stability of the seal at the equivalent operating conditions of the pump. Actual pump hardware was used to construct the test rig, also referred to as the flutter rig. The flutter rig did not include rotational effects or temperature. However, the use of hydrogen gas at high inlet pressure provided good representation of the critical parameters affecting flutter, especially the speed of sound. The flutter code predictions showed consistent trends in good agreement with the experimental data. The rig test program produced an empirical stability-threshold parameter that separated operation with and without flutter. This empirical parameter was used to establish the seal build clearances to avoid flutter while providing the required cooling flow metering. The calibrated flutter code, along with the empirical flutter parameter, was used to redesign the baseline seal, resulting in a flutter-free robust configuration. Provisions for incorporation of mechanical damping devices were introduced in the redesigned seal to ensure added robustness.
2014-04-01
... successes against the Roman army, Plutarch focused on Sertorius' ability to bring good order and discipline to the seemingly barbaric tribes of the ... Roman frontier. Plutarch noted that after the campaigns against Rome, Sertorius was "highly honored for his introducing discipline and good order" ... Sertorius' ability to achieve good order and discipline within his troops; Plutarch did not mention discipline or fear; instead, he noted that Sertorius ...
Understanding and Writing G & M Code for CNC Machines
ERIC Educational Resources Information Center
Loveland, Thomas
2012-01-01
In modern CAD and CAM manufacturing companies, engineers design parts for machines and consumable goods. Many of these parts are cut on CNC machines. Whether using a CNC lathe, milling machine, or router, the ideas and designs of engineers must be translated into a machine-readable form called G & M Code that can be used to cut parts to precise…
How L2-Learners' Brains React to Code-Switches: An ERP Study with Russian Learners of German
ERIC Educational Resources Information Center
Ruigendijk, Esther; Hentschel, Gerd; Zeller, Jan Patrick
2016-01-01
This Event Related Potentials (ERP) study investigates auditory processing of sentences with so-called code-switches in Russian learners of German. It has often been argued that switching between two languages results in extra processing cost, although it is not completely clear yet what exactly causes these costs. ERP presents a good method to…
DRA/NASA/ONERA Collaboration on Icing Research. Part 2; Prediction of Airfoil Ice Accretion
NASA Technical Reports Server (NTRS)
Wright, William B.; Gent, R. W.; Guffond, Didier
1997-01-01
This report presents results from a joint study by DRA, NASA, and ONERA for the purpose of comparing, improving, and validating the aircraft icing computer codes developed by each agency. These codes are of three kinds: (1) water droplet trajectory prediction, (2) ice accretion modeling, and (3) transient electrothermal deicer analysis. In this joint study, the agencies compared their code predictions with each other and with experimental results. These comparison exercises were published in three technical reports, each with joint authorship. DRA published and had first authorship of Part 1 - Droplet Trajectory Calculations, NASA of Part 2 - Ice Accretion Prediction, and ONERA of Part 3 - Electrothermal Deicer Analysis. The results cover work done during the period from August 1986 to late 1991. As a result, all of the information in this report is dated. Where necessary, current information is provided to show the direction of current research. In this present report on ice accretion, each agency predicted ice shapes on two-dimensional airfoils under icing conditions for which experimental ice shapes were available. In general, all three codes did a reasonable job of predicting the measured ice shapes. For any given experimental condition, one of the three codes predicted the general ice features (i.e., shape, impingement limits, mass of ice) somewhat better than did the other two. However, no single code consistently did better than the other two over the full range of conditions examined, which included rime, mixed, and glaze ice conditions. In several of the cases, DRA showed that the user's knowledge of icing can significantly improve the accuracy of the code prediction. Rime ice predictions were reasonably accurate and consistent among the codes, because droplets freeze on impact and the freezing model is simple. Glaze ice predictions were less accurate and less consistent among the codes, because the freezing model is more complex and is critically dependent upon unsubstantiated heat transfer and surface roughness models. Thus, the heat transfer prediction methods used in the codes became the subject of a separate study in this report to compare predicted heat transfer coefficients with a limited experimental database of heat transfer coefficients for cylinders with simulated glaze and rime ice shapes. The codes did a good job of predicting heat transfer coefficients near the stagnation region of the ice shapes. But in the region of the ice horns, all three codes predicted heat transfer coefficients considerably higher than the measured values. An important conclusion of this study is that further research is needed to understand the finer detail of the glaze ice accretion process and to develop improved glaze ice accretion models.
NASA Technical Reports Server (NTRS)
Dolinar, S.
1988-01-01
Over the past six to eight years, an extensive research effort was conducted to investigate advanced coding techniques which promised to yield more coding gain than is available with current NASA standard codes. The delay in Galileo's launch due to the temporary suspension of the shuttle program provided the Galileo project with an opportunity to evaluate the possibility of including some version of the advanced codes as a mission enhancement option. A study was initiated last summer to determine if substantial coding gain was feasible for Galileo and, if so, to recommend a suitable experimental code for use as a switchable alternative to the current NASA-standard code. The Galileo experimental code study resulted in the selection of a code with constraint length 15 and rate 1/4. The code parameters were chosen to optimize performance within cost and risk constraints consistent with retrofitting the new code into the existing Galileo system design and launch schedule. The particular code was recommended after a very limited search among good codes with the chosen parameters. It will theoretically yield about 1.5 dB enhancement under idealizing assumptions relative to the current NASA-standard code at Galileo's desired bit error rates. This ideal predicted gain includes enough cushion to meet the project's target of at least 1 dB enhancement under real, non-ideal conditions.
SATCOM antenna siting study on a P-3C using the NEC-BSC V3.1
NASA Technical Reports Server (NTRS)
Bensman, D.; Marhefka, R. J.
1990-01-01
The location of a UHF SATCOM antenna on a P-3C aircraft is studied using the NEC-Basic Scattering Code V3.1 (NEC-BSC3). The NEC-BSC3 is a computer code based on the uniform theory of diffraction. The code is first validated for this application using scale model measurements. In general, the comparisons are good except in 10 degree regions near the nose and tail of the aircraft. Patterns for various antenna locations are analyzed to achieve a prescribed performance.
Reference View Selection in DIBR-Based Multiview Coding.
Maugey, Thomas; Petrazzuoli, Giovanni; Frossard, Pascal; Cagnazzo, Marco; Pesquet-Popescu, Beatrice
2016-04-01
Augmented reality, interactive navigation in 3D scenes, multiview video, and other emerging multimedia applications require large sets of images, hence larger data volumes and increased resources compared with traditional video services. The significant increase in the number of images in multiview systems leads to new challenging problems in data representation and data transmission to provide high quality of experience in resource-constrained environments. In order to reduce the size of the data, different multiview video compression strategies have been proposed recently. Most of them use the concept of reference or key views that are used to estimate other images when there is high correlation in the data set. In such coding schemes, the two following questions become fundamental: 1) how many reference views have to be chosen for keeping a good reconstruction quality under coding cost constraints? And 2) where to place these key views in the multiview data set? As these questions are largely overlooked in the literature, we study the reference view selection problem and propose an algorithm for the optimal selection of reference views in multiview coding systems. Based on a novel metric that measures the similarity between the views, we formulate an optimization problem for the positioning of the reference views, such that both the distortion of the view reconstruction and the coding rate cost are minimized. We solve this new problem with a shortest path algorithm that determines both the optimal number of reference views and their positions in the image set. We experimentally validate our solution in a practical multiview distributed coding system and in the standardized 3D-HEVC multiview coding scheme. We show that considering the 3D scene geometry in the reference view positioning problem brings significant rate-distortion improvements and outperforms the traditional coding strategy that simply selects key frames based on the distance between cameras.
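The shortest-path formulation sketched in this abstract lends itself to a compact illustration. The following Python sketch treats candidate reference positions as graph nodes and runs Dijkstra over them; the cost functions `ref_rate` and `synth_cost` are hypothetical stand-ins for the paper's rate term and similarity-based distortion metric, not its exact formulation.

```python
# Sketch of reference-view selection as a shortest-path problem. The cost
# model (ref_rate, synth_cost) is a placeholder; the paper derives costs
# from a view-similarity metric.
import heapq

def select_references(n_views, ref_rate, synth_cost):
    """Pick reference views among 0..n_views-1 so that every view is either
    a reference or synthesized from its two surrounding references.

    ref_rate(j): coding cost of making view j a reference.
    synth_cost(i, j): distortion of synthesizing views i+1..j-1 from
                      references i and j (0 when j == i + 1).
    Returns (total_cost, list_of_reference_indices).
    """
    # State = index of the last reference chosen; the first and last views
    # are forced to be references.
    dist = {0: ref_rate(0)}
    prev = {0: None}
    pq = [(dist[0], 0)]
    while pq:
        d, i = heapq.heappop(pq)
        if d > dist.get(i, float("inf")):
            continue                       # stale queue entry
        if i == n_views - 1:
            break
        for j in range(i + 1, n_views):    # candidate next reference
            nd = d + synth_cost(i, j) + ref_rate(j)
            if nd < dist.get(j, float("inf")):
                dist[j], prev[j] = nd, i
                heapq.heappush(pq, (nd, j))
    # Walk back through predecessors to recover the chosen positions.
    refs, j = [], n_views - 1
    while j is not None:
        refs.append(j)
        j = prev[j]
    return dist[n_views - 1], refs[::-1]
```

Because the search is over all gaps, the same pass returns both how many references to keep and where to put them, mirroring the joint answer to the two questions posed in the abstract.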
A novel data processing technique for image reconstruction of penumbral imaging
NASA Astrophysics Data System (ADS)
Xie, Hongwei; Li, Hongyun; Xu, Zeping; Song, Guzhou; Zhang, Faqiang; Zhou, Lin
2011-06-01
A CT image reconstruction technique was applied to the data processing of penumbral imaging. Compared with traditional processing techniques for penumbral coded-pinhole images, such as Wiener, Lucy-Richardson and blind deconvolution, this approach is brand new. In this method, the coded-aperture processing was, for the first time, made independent of the point spread function of the imaging diagnostic system. In this way, the technical obstacle in traditional coded-pinhole image processing caused by the uncertainty of the point spread function was overcome. Then, based on the theoretical study, simulations of penumbral imaging and image reconstruction were carried out and gave fairly good results. In the visible-light experiment, a point source of light was used to irradiate a 5 mm × 5 mm object after diffuse scattering and volume scattering, and the penumbral image was formed with an aperture size of ~20 mm. Finally, the CT image reconstruction technique was used for image reconstruction and provided a fairly good reconstruction result.
Evaluation of Grid Modification Methods for On- and Off-Track Sonic Boom Analysis
NASA Technical Reports Server (NTRS)
Nayani, Sudheer N.; Campbell, Richard L.
2013-01-01
Grid modification methods have been under development at NASA to enable better predictions of low boom pressure signatures from supersonic aircraft. As part of this effort, two new codes, Stretched and Sheared Grid - Modified (SSG) and Boom Grid (BG), have been developed in the past year. The CFD results from these codes have been compared with ones from the earlier grid modification codes Stretched and Sheared Grid (SSGRID) and Mach Cone Aligned Prism (MCAP) and also with the available experimental results. NASA's unstructured grid suite of software TetrUSS and the automatic sourcing code AUTOSRC were used for base grid generation and flow solutions. The BG method has been evaluated on three wind tunnel models. Pressure signatures have been obtained up to two body lengths below a Gulfstream aircraft wind tunnel model. Good agreement with the wind tunnel results has been obtained for both on-track and off-track (up to 53 degrees) cases. On-track pressure signatures up to ten body lengths below a Straight Line Segmented Leading Edge (SLSLE) wind tunnel model have been extracted. Good agreement with the wind tunnel results has been obtained. Pressure signatures have been obtained at 1.5 body lengths below a Lockheed Martin aircraft wind tunnel model. Good agreement with the wind tunnel results has been obtained for both on-track and off-track (up to 40 degrees) cases. Grid sensitivity studies have been carried out to investigate any grid size related issues. Methods have been evaluated for fully turbulent, mixed laminar/turbulent and fully laminar flow conditions.
Testing of the ABBN-RF multigroup data library in photon transport calculations
NASA Astrophysics Data System (ADS)
Koscheev, Vladimir; Lomakov, Gleb; Manturov, Gennady; Tsiboulia, Anatoly
2017-09-01
Gamma radiation is produced in both nuclear fuel and shield materials. Photon interactions are known with appropriate accuracy, but secondary gamma-ray production is known much less accurately. The purpose of this work is to study secondary gamma-ray production data from neutron-induced reactions in iron and lead by using the MCNP code and modern nuclear data libraries such as ROSFOND, ENDF/B-7.1, JEFF-3.2 and JENDL-4.0. Results of calculations show that all of these nuclear data libraries contain different photon production data for neutron-induced reactions and have poor agreement with the evaluated benchmark experiment. The ABBN-RF multigroup cross-section library is based on the ROSFOND data. It is presented in two forms of micro cross sections: ABBN and MATXS formats. Comparison of group-wise calculations using both ABBN and MATXS data to point-wise calculations with the ROSFOND library shows good agreement. The discrepancies between calculated and experimental (C/E) results for the neutron spectra are within the limits of experimental errors; for the photon spectrum they are outside the experimental errors. Results of calculations using group-wise and point-wise representations of cross sections show good agreement for both photon and neutron spectra.
The Public Goods Hypothesis for the evolution of life on Earth.
McInerney, James O; Pisani, Davide; Bapteste, Eric; O'Connell, Mary J
2011-08-23
It is becoming increasingly difficult to reconcile the observed extent of horizontal gene transfers with the central metaphor of a great tree uniting all evolving entities on the planet. In this manuscript we describe the Public Goods Hypothesis and show that it is appropriate for describing biological evolution on the planet. According to this hypothesis, nucleotide sequences (genes, promoters, exons, etc.) are simply seen as goods, passed from organism to organism through both vertical and horizontal transfer. Public goods sequences are defined by having the properties of being largely non-excludable (no organism can be effectively prevented from accessing these sequences) and non-rival (while such a sequence is being used by one organism it is also available for use by another organism). The universal nature of genetic systems ensures that such non-excludable sequences exist, and non-excludability explains why we see a myriad of genes in different combinations in sequenced genomes. There are three features of the public goods hypothesis. Firstly, segments of DNA are seen as public goods, available for all organisms to integrate into their genomes. Secondly, we expect the evolution of mechanisms for DNA sharing and of defense mechanisms against DNA intrusion in genomes. Thirdly, we expect that we do not see a global tree-like pattern. Instead, we expect local tree-like patterns to emerge from the combination of a commonage of genes and vertical inheritance of genomes by cell division. Indeed, while genes are theoretically public goods, in reality some genes are excludable, particularly, though not only, when they have variant genetic codes or behave as coalition or club goods, available for all organisms of a coalition to integrate into their genomes, and non-rival within the club. We view the Tree of Life hypothesis as a regionalized instance of the Public Goods hypothesis, just as classical mechanics and Euclidean geometry are seen as regionalized instances of quantum mechanics and Riemannian geometry, respectively. We argue for this change using an axiomatic approach that shows that the Public Goods hypothesis is a better accommodation of the observed data than the Tree of Life hypothesis.
Masuda, Naoki
2009-12-01
Selective attention is often accompanied by gamma oscillations in local field potentials and spike-field coherence in brain areas related to visual, motor, and cognitive information processing. Gamma oscillations are thought to play an important role in, for example, visual tasks including object search, shape perception, and speed detection. However, the mechanism by which gamma oscillations enhance the cognitive and behavioral performance of attentive subjects is still elusive. Using feedforward fan-in networks composed of spiking neurons, we examine a possible role for gamma oscillations in selective attention and population rate coding of external stimuli. We implement the concept proposed by Fries (2005) that under dynamic stimuli, neural populations effectively communicate with each other only when there is a good phase relationship among associated gamma oscillations. We show that the downstream neural population selects a specific dynamic stimulus received by an upstream population and represents it by population rate coding. The encoded stimulus is the one for which the gamma rhythm in the corresponding upstream population is resonant with the downstream gamma rhythm. The proposed role for gamma oscillations in stimulus selection is to enable top-down control, a neural version of the time division multiple access used in communication engineering.
NASA Technical Reports Server (NTRS)
Feria, Y.; Cheung, K.-M.
1995-01-01
In a time-varying signal-to-noise ratio (SNR) environment, symbol rate is often changed to maximize data return. However, the symbol-rate change has some undesirable effects, such as changing the transmission bandwidth and perhaps causing the receiver symbol loop to lose lock temporarily, thus losing some data. In this article, we are proposing an alternate way of varying the data rate without changing the symbol rate and, therefore, the transmission bandwidth. The data rate change is achieved in a seamless fashion by puncturing the convolutionally encoded symbol stream to adapt to the changing SNR environment. We have also derived an exact expression to enumerate the number of distinct puncturing patterns. To demonstrate this seamless rate-change capability, we searched for good puncturing patterns for the Galileo (14,1/4) convolutional code and changed the data rates by using the punctured codes to match the Galileo SNR profile of November 9, 1997. We show that this scheme reduces the symbol-rate changes from nine to two and provides a comparable data return in a day and a higher symbol SNR during most of the day.
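As a concrete aside, puncturing itself is mechanically simple: symbols of the mother code are deleted according to a periodic pattern, raising the effective code rate without touching the symbol clock. The Python sketch below uses a made-up pattern on a generic rate-1/4 stream; the actual Galileo patterns were found by the search described above and are not reproduced here.

```python
# Illustrative puncturing of a convolutionally encoded symbol stream.
# The mother code is a generic rate-1/4 code (4 symbols per input bit);
# the pattern below is invented for illustration only.

def puncture(symbols, pattern):
    """Keep symbols[i] where pattern[i % len(pattern)] == 1."""
    return [s for i, s in enumerate(symbols) if pattern[i % len(pattern)] == 1]

# Deleting 1 symbol in every 8 raises the rate from 1/4 to 1/4 * 8/7 = 2/7.
coded = list(range(16))                  # stand-in for encoder output
kept = puncture(coded, [1, 1, 1, 1, 1, 1, 1, 0])
print(kept)                              # symbols 7 and 15 are deleted
```

The receiver reinserts erasures at the punctured positions before decoding, so one mother-code decoder serves every rate in the family.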
TRACE/PARCS analysis of the OECD/NEA Oskarshamn-2 BWR stability benchmark
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kozlowski, T.; Downar, T.; Xu, Y.
2012-07-01
On February 25, 1999, the Oskarshamn-2 NPP experienced a stability event which culminated in diverging power oscillations with a decay ratio of about 1.4. The event was successfully modeled by the TRACE/PARCS coupled code system, and further analysis of the event is described in this paper. The results show very good agreement with the plant data, capturing the entire behavior of the transient including the onset of instability, growth of the oscillations (decay ratio) and oscillation frequency. This provides confidence in the prediction of other parameters which are not available from the plant records. The event provides coupled-code validation for a challenging BWR stability event, which involves the accurate simulation of neutron kinetics (NK), thermal-hydraulics (TH), and TH/NK coupling. The success of this work has demonstrated the ability of the 3-D coupled systems code TRACE/PARCS to capture the complex behavior of BWR stability events. The problem was released as an international OECD/NEA benchmark, and it is the first benchmark based on measured plant data for a stability event with a decay ratio greater than one. Interested participants are invited to contact the authors for more information.
NASA Astrophysics Data System (ADS)
Coindreau, O.; Duriez, C.; Ederli, S.
2010-10-01
Progress in the treatment of air oxidation of zirconium in severe accident (SA) codes is required for a reliable analysis of severe accidents involving air ingress. Air oxidation of zirconium can actually lead to accelerated core degradation and increased fission product release, especially of the highly radiotoxic ruthenium. This paper presents a model to simulate the air oxidation kinetics of Zircaloy-4 in the 600-1000 °C temperature range. It is based on available experimental data, including separate-effect experiments performed at IRSN and at Forschungszentrum Karlsruhe. The kinetic transition, named "breakaway", from a diffusion-controlled regime to an accelerated oxidation is taken into account in the modeling via a critical mass gain parameter. The progressive propagation of the locally initiated breakaway is modeled by a linear increase in oxidation rate with time. Finally, when breakaway propagation is completed, the oxidation rate stabilizes and the kinetics is modeled by a linear law. This new modeling is integrated in the severe accident code ASTEC, jointly developed by IRSN and GRS. Model predictions and experimental data from thermogravimetric results show good agreement for different air flow rates and for slow temperature transient conditions.
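The three-regime structure described here (diffusion-controlled parabolic growth, a breakaway ramp triggered by a critical mass gain, then a constant linear rate) can be sketched numerically. All parameter values in the following Python sketch are invented for illustration; in the actual model they are fitted to thermogravimetric data as functions of temperature and air flow rate.

```python
# Minimal sketch of the three-regime oxidation kinetics described above.
# Parameter values are invented placeholders, not the ASTEC model's fits.
import numpy as np

def mass_gain(t_end, dt=1.0, kp=1e-4, dm_crit=0.5, ramp=1e-5, r_lin=4e-3):
    """Integrate specific mass gain dm over time.

    Regime 1: parabolic law, d(dm)/dt = kp / (2*dm)   (diffusion-controlled)
    Regime 2: once dm > dm_crit, the rate grows linearly with time
              (breakaway propagation) until it reaches r_lin.
    Regime 3: constant rate r_lin (linear law).
    """
    t_vals = np.arange(0.0, t_end, dt)
    dm, rate = 1e-3, 0.0           # small initial gain avoids division by 0
    out = []
    for t in t_vals:
        if dm <= dm_crit:                       # pre-breakaway regime
            rate = kp / (2.0 * dm)
        else:                                   # breakaway ramp, then linear
            rate = min(rate + ramp * dt, r_lin)
        dm += rate * dt
        out.append(dm)
    return t_vals, np.array(out)
```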
A method of estimating GPS instrumental biases with a convolution algorithm
NASA Astrophysics Data System (ADS)
Li, Qi; Ma, Guanyi; Lu, Weijun; Wan, Qingtao; Fan, Jiangtao; Wang, Xiaolan; Li, Jinghua; Li, Changhua
2018-03-01
This paper presents a method of deriving the instrumental differential code biases (DCBs) of GPS satellites and dual frequency receivers. Considering that the total electron content (TEC) varies smoothly over a small area, one ionospheric pierce point (IPP) and four more nearby IPPs were selected to build an equation with a convolution algorithm. In addition, unknown DCB parameters were arranged into a set of equations with GPS observations in a day unit by assuming that DCBs do not vary within a day. Then, the DCBs of satellites and receivers were determined by solving the equation set with the least-squares fitting technique. The performance of this method is examined by applying it to 361 days in 2014 using the observation data from 1311 GPS Earth Observation Network (GEONET) receivers. The result was cross-compared with the DCBs estimated by the mesh method and the IONEX products from the Center for Orbit Determination in Europe (CODE). The DCB values derived by this method agree with those of the mesh method and the CODE products, with biases of 0.091 ns and 0.321 ns, respectively. The convolution method's accuracy and stability were quite good and showed improvements over the mesh method.
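The core of the method is a large sparse linear system: each observation contributes slant TEC plus satellite and receiver DCBs, and the convolution step ties each IPP's TEC to its four neighbours. The Python sketch below shows only this schematic structure; the matrix layout and the zero-mean satellite-DCB constraint are assumptions for illustration, not the paper's exact formulation.

```python
# Schematic least-squares estimation of DCBs. The parameter vector x stacks
# [TEC at IPPs..., satellite DCBs..., receiver DCBs...]; rows of A encode
# (i) observation equations, TEC + sat DCB + rcv DCB = measured slant TEC,
# and (ii) smoothness equations tying one IPP to four nearby IPPs.
import numpy as np

def smoothness_row(n_params, i0, neighbors):
    """Row enforcing TEC(i0) ~= mean of the TEC at four nearby IPPs."""
    row = np.zeros(n_params)
    row[i0] = 1.0
    for j in neighbors:
        row[j] = -1.0 / len(neighbors)
    return row

def solve_dcbs(A, y):
    """Least-squares solve of A x = y; a zero-mean constraint row over the
    satellite-DCB columns is normally appended to remove the rank defect."""
    x, *_ = np.linalg.lstsq(np.asarray(A, float), np.asarray(y, float),
                            rcond=None)
    return x
```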
LSB-based Steganography Using Reflected Gray Code for Color Quantum Images
NASA Astrophysics Data System (ADS)
Li, Panchi; Lu, Aiping
2018-02-01
At present, the classical least-significant-bit (LSB) based image steganography has been extended to quantum image processing. For the existing LSB-based quantum image steganography schemes, the embedding capacity is no more than 3 bits per pixel. Therefore, it is meaningful to study how to improve the embedding capacity of quantum image steganography. This work presents a novel LSB-based steganography using the reflected Gray code for color quantum images, with an embedding capacity of up to 4 bits per pixel. In the proposed scheme, the secret qubit sequence is considered as a sequence of 4-bit segments. For the four bits in each segment, the first bit is embedded in the second LSB of the B channel of the cover image, and the remaining three bits are embedded in the LSBs of the RGB channels of each color pixel simultaneously, using the reflected Gray code to determine the embedded bit from the secret information. Following the transforming rule, the LSBs of the stego-image are not always the same as the secret bits, and the proportion of differing bits can approach 50%. Experimental results confirm that the proposed scheme shows good performance and outperforms the previous ones currently found in the literature in terms of embedding capacity.
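For reference, the reflected Gray code used by such schemes is the classical construction in which consecutive integers differ in exactly one bit. A minimal Python sketch of the forward and inverse maps follows; the embedding logic itself is specific to the paper and not reproduced here.

```python
def to_gray(b: int) -> int:
    """Reflected (binary) Gray code of b: adjacent values differ in one bit."""
    return b ^ (b >> 1)

def from_gray(g: int) -> int:
    """Inverse map: fold the running XOR down through the bits."""
    b = 0
    while g:
        b ^= g
        g >>= 1
    return b

# The 2-bit sequence 00, 01, 11, 10 is the familiar reflected pattern.
assert [to_gray(i) for i in range(4)] == [0b00, 0b01, 0b11, 0b10]
assert all(from_gray(to_gray(i)) == i for i in range(256))
```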
Dynamic Hybrid Simulation of the Lunar Wake During ARTEMIS Crossing
NASA Astrophysics Data System (ADS)
Wiehle, S.; Plaschke, F.; Angelopoulos, V.; Auster, H.; Glassmeier, K.; Kriegel, H.; Motschmann, U. M.; Mueller, J.
2010-12-01
The interaction of the highly dynamic solar wind with the Moon is simulated with the A.I.K.E.F. (Adaptive Ion Kinetic Electron Fluid) code for the ARTEMIS P1 flyby on February 13, 2010. The A.I.K.E.F. hybrid plasma simulation code is the improved version of the Braunschweig code. It is able to automatically increase simulation grid resolution in areas of interest during runtime, which greatly increases resolution as well as performance. As the Moon has no intrinsic magnetic field and no ionosphere, the solar wind particles are absorbed at its surface, resulting in the formation of the lunar wake at the nightside. The solar wind magnetic field is basically convected through the Moon and the wake is slowly filled up with solar wind particles. However, this interaction is strongly influenced by the highly dynamic solar wind during the flyby. This is considered by a dynamic variation of the upstream conditions in the simulation using OMNI solar wind measurement data. By this method, a very good agreement between simulation and observations is achieved. The simulations show that the stationary structure of the lunar wake constitutes a tableau vivant in space representing the well-known Friedrichs diagram for MHD waves.
Xu, Jingjing; Yang, Wei; Zhang, Linyuan; Han, Ruisong; Shao, Xiaotao
2015-01-01
In this paper, a wireless sensor network (WSN) technology adapted to underground channel conditions is developed, which has important theoretical and practical value for safety monitoring in underground coal mines. Because the space, time and frequency resources of an underground tunnel are open, wireless sensor nodes based on multicarrier code division multiple access (MC-CDMA) are proposed to make full use of these resources. To improve the wireless transmission performance of source sensor nodes, it is also proposed to utilize cooperative sensors with good channel conditions from the sink node to assist source sensors with poor channel conditions. Moreover, the total power of a source sensor and its cooperative sensors is allocated on the basis of their channel conditions to increase the energy efficiency of the WSN. To solve the problem that multiple access interference (MAI) arises when multiple source sensors transmit monitoring information simultaneously, a multi-sensor detection (MSD) algorithm with particle swarm optimization (PSO), namely D-PSO, is proposed for the time-frequency coded cooperative MC-CDMA WSN. Simulation results show that the average bit error rate (BER) performance of the proposed WSN in an underground coal mine is improved significantly by using wireless sensor nodes based on MC-CDMA and by adopting time-frequency coded cooperative transmission and the D-PSO algorithm. PMID:26343660
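PSO, the search engine behind the D-PSO detector, is easy to sketch. The snippet below is a generic continuous-space PSO minimizer, not the paper's detector: D-PSO searches over candidate multiuser symbol vectors with a detection-oriented cost, whereas here the cost function is left to the caller.

```python
# Generic particle swarm optimizer (illustrative stand-in for D-PSO).
import numpy as np

def pso(cost, dim, n=30, iters=200, lo=-1.0, hi=1.0,
        w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, (n, dim))            # particle positions
    v = np.zeros((n, dim))                       # particle velocities
    pbest = x.copy()                             # per-particle best positions
    pcost = np.array([cost(p) for p in x])
    g = pbest[np.argmin(pcost)]                  # global best position
    for _ in range(iters):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        c = np.array([cost(p) for p in x])
        better = c < pcost
        pbest[better], pcost[better] = x[better], c[better]
        g = pbest[np.argmin(pcost)]
    return g, pcost.min()

# Example: minimize a shifted sphere function in 4 dimensions.
best, val = pso(lambda p: ((p - 0.3) ** 2).sum(), dim=4)
```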
The effect of density fluctuations on electron cyclotron beam broadening and implications for ITER
NASA Astrophysics Data System (ADS)
Snicker, A.; Poli, E.; Maj, O.; Guidi, L.; Köhn, A.; Weber, H.; Conway, G.; Henderson, M.; Saibene, G.
2018-01-01
We present state-of-the-art computations of propagation and absorption of electron cyclotron waves, retaining the effects of scattering due to electron density fluctuations. In ITER, injected microwaves are foreseen to suppress neoclassical tearing modes (NTMs) by driving current at the q=2 and q=3/2 resonant surfaces. Scattering of the beam can spoil the good localization of the absorption and thus impair NTM control capabilities. A novel tool, the WKBeam code, has been employed here in order to investigate this issue. The code is a Monte Carlo solver for the wave kinetic equation and retains diffraction, full axisymmetric tokamak geometry, determination of the absorption profile and an integral form of the scattering operator which describes the effects of turbulent density fluctuations within the limits of the Born scattering approximation. The approach has been benchmarked against the paraxial WKB code TORBEAM and the full-wave code IPF-FDMC. In particular, the Born approximation is found to be valid for ITER parameters. In this paper, we show that the radiative transport of EC beams due to wave scattering in ITER is diffusive unlike in present experiments, thus causing up to a factor of 2-4 broadening in the absorption profile. However, the broadening depends strongly on the turbulence model assumed for the density fluctuations, which still has large uncertainties.
2013-01-01
Background: In statistical modeling, finding the most favorable coding for an explanatory quantitative variable involves many tests. This process involves multiple testing problems and requires the correction of the significance level. Methods: For each coding, a test on the nullity of the coefficient associated with the newly coded variable is computed. The selected coding corresponds to that associated with the largest test statistic (or, equivalently, the smallest p-value). In the context of the Generalized Linear Model, Liquet and Commenges (Stat Probab Lett, 71:33–38, 2005) proposed an asymptotic correction of the significance level. This procedure, based on the score test, has been developed for dichotomous and Box-Cox transformations. In this paper, we suggest the use of resampling methods to estimate the significance level for categorical transformations with more than two levels and, by definition, those that involve more than one parameter in the model. The categorical transformation is a more flexible way to explore the unknown shape of the effect between an explanatory and a dependent variable. Results: The simulations we ran in this study showed good performance of the proposed methods. These methods were illustrated using the data from a study of the relationship between cholesterol and dementia. Conclusion: The algorithms were implemented using R, and the associated CPMCGLM R package is available on CRAN. PMID:23758852
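The resampling idea is the usual max-statistic correction: recompute the best-coding statistic under resampled null data and compare the observed maximum against that reference distribution. The Python sketch below illustrates the principle with a permutation scheme; it is not the CPMCGLM implementation, and the squared correlation used as the statistic is a simple stand-in for the score test.

```python
# Permutation-based correction for trying several codings of one variable.
import numpy as np

def corrected_pvalue(x, y, codings, n_perm=2000, seed=0):
    """codings: list of functions mapping raw x to a coded (non-constant)
    numeric column. Statistic: squared correlation between the coded x
    and y, maximized over codings."""
    rng = np.random.default_rng(seed)

    def max_stat(yy):
        return max(np.corrcoef(c(x), yy)[0, 1] ** 2 for c in codings)

    t_obs = max_stat(y)
    t_null = np.array([max_stat(rng.permutation(y)) for _ in range(n_perm)])
    # Add-one correction keeps the estimate valid for finite n_perm.
    return (1 + np.sum(t_null >= t_obs)) / (n_perm + 1)
```

Because the maximum over codings is recomputed on every permuted data set, the correction automatically accounts for how many codings were tried and how correlated they are.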
Numerical modelling of gravel unconstrained flow experiments with the DAN3D and RASH3D codes
NASA Astrophysics Data System (ADS)
Sauthier, Claire; Pirulli, Marina; Pisani, Gabriele; Scavia, Claudio; Labiouse, Vincent
2015-12-01
Landslide continuum dynamic models have improved considerably in the last years, but a consensus on the best method of calibrating the input resistance parameter values for predictive analyses has not yet emerged. In the present paper, numerical simulations of a series of laboratory experiments performed at the Laboratory for Rock Mechanics of the EPF Lausanne were undertaken with the RASH3D and DAN3D numerical codes. They aimed at analysing the possibility of using calibrated ranges of parameters (1) in a code different from the one with which they were obtained and (2) to simulate potential events made of a material with the same characteristics as back-analysed past events, but involving a different volume and propagation path. For this purpose, one of the four benchmark laboratory tests was used as the past event to calibrate the dynamic basal friction angle, assuming a Coulomb-type behaviour of the sliding mass, and this back-analysed value was then used to simulate the three other experiments, treated as potential events. The computational findings show good correspondence with the experimental results in terms of the characteristics of the final deposits (i.e., runout, length and width). Furthermore, the best-fit values of the dynamic basal friction angle obtained for the two codes turn out to be close to each other and within the range of values measured with pseudo-dynamic tilting tests.
Xu, Jingjing; Yang, Wei; Zhang, Linyuan; Han, Ruisong; Shao, Xiaotao
2015-08-27
In this paper, a wireless sensor network (WSN) technology adapted to underground channel conditions is developed, which has important theoretical and practical value for safety monitoring in underground coal mines. According to the characteristics that the space, time and frequency resources of underground tunnel are open, it is proposed to constitute wireless sensor nodes based on multicarrier code division multiple access (MC-CDMA) to make full use of these resources. To improve the wireless transmission performance of source sensor nodes, it is also proposed to utilize cooperative sensors with good channel conditions from the sink node to assist source sensors with poor channel conditions. Moreover, the total power of the source sensor and its cooperative sensors is allocated on the basis of their channel conditions to increase the energy efficiency of the WSN. To solve the problem that multiple access interference (MAI) arises when multiple source sensors transmit monitoring information simultaneously, a kind of multi-sensor detection (MSD) algorithm with particle swarm optimization (PSO), namely D-PSO, is proposed for the time-frequency coded cooperative MC-CDMA WSN. Simulation results show that the average bit error rate (BER) performance of the proposed WSN in an underground coal mine is improved significantly by using wireless sensor nodes based on MC-CDMA, adopting time-frequency coded cooperative transmission and D-PSO algorithm with particle swarm optimization.
Sun, Zhoutong; Lonsdale, Richard; Li, Guangyue; Reetz, Manfred T
2016-10-04
Saturation mutagenesis at sites lining the binding pockets of enzymes constitutes a viable protein engineering technique for enhancing or inverting stereoselectivity. Statistical analysis shows that oversampling in the screening step (the bottleneck) increases astronomically as the number of residues in the randomization site increases, which is the reason why reduced amino acid alphabets have been employed, in addition to splitting large sites into smaller ones. Limonene epoxide hydrolase (LEH) has previously served as the experimental platform in these methodological efforts, enabling comparisons between single-code saturation mutagenesis (SCSM) and triple-code saturation mutagenesis (TCSM); these employ either only one or three amino acids, respectively, as building blocks. In this study the comparative platform is extended by exploring the efficacy of double-code saturation mutagenesis (DCSM), in which the reduced amino acid alphabet consists of two members, chosen according to the principles of rational design on the basis of structural information. The hydrolytic desymmetrization of cyclohexene oxide is used as the model reaction, with formation of either (R,R)- or (S,S)-cyclohexane-1,2-diol. DCSM proves to be clearly superior to the likewise tested SCSM, affording both R,R- and S,S-selective mutants. These variants are also good catalysts in reactions of further substrates. Docking computations reveal the basis of enantioselectivity.
A one-dimensional heat transfer model for parallel-plate thermoacoustic heat exchangers.
de Jong, J A; Wijnant, Y H; de Boer, A
2014-03-01
A one-dimensional (1D) laminar oscillating flow heat transfer model is derived and applied to parallel-plate thermoacoustic heat exchangers. The model can be used to estimate the heat transfer from the solid wall to the acoustic medium, which is required for the heat input/output of thermoacoustic systems. The model is implementable in existing (quasi-)1D thermoacoustic codes, such as DeltaEC. Examples of generated results show good agreement with literature results. The model allows for arbitrary wave phasing; however, it is shown that the wave phasing does not significantly influence the heat transfer.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Edward Nichols
2002-05-03
In this quarter we continued the processing of the Safford IP survey data. The processing identified a time shift problem between the sites that was caused by a GPS firmware error. A software procedure was developed to identify and correct the shift, and this was applied to the data. Preliminary estimates were made of the remote-referenced MT parameters, and an initial assessment showed good data quality for most of the line. The multi-site robust processing code of Egbert was linked to the new data and processing was initiated.
Transonic cascade flow prediction using the Navier-Stokes equations
NASA Technical Reports Server (NTRS)
Arnone, A.; Stecco, S. S.
1991-01-01
This paper presents results which summarize the work carried out during the last three years to improve the efficiency and accuracy of numerical predictions in turbomachinery flow calculations. A new kind of nonperiodic C-type grid is presented, and a Runge-Kutta scheme with accelerating strategies is used as a flow solver. The code's capability is demonstrated by testing four different blades at different exit Mach numbers in transonic regimes. Comparison with experiments shows the very good reliability of the numerical predictions. In particular, the loss coefficient seems to be correctly predicted by using the well-known Baldwin-Lomax turbulence model.
New Equation of State Models for Hydrodynamic Applications
NASA Astrophysics Data System (ADS)
Young, David A.; Barbee, Troy W., III; Rogers, Forrest J.
1997-07-01
Accurate models of the equation of state of matter at high pressures and temperatures are increasingly required for hydrodynamic simulations. We have developed two new approaches to accurate EOS modeling: 1) ab initio phonons from electron band structure theory for condensed matter and 2) the ACTEX dense plasma model for ultrahigh pressure shocks. We have studied the diamond and high pressure phases of carbon with the ab initio model and find good agreement between theory and experiment for shock Hugoniots, isotherms, and isobars. The theory also predicts a comprehensive phase diagram for carbon. For ultrahigh pressure shock states, we have studied the comparison of ACTEX theory with experiments for deuterium, beryllium, polystyrene, water, aluminum, and silicon dioxide. The agreement is good, showing that complex multispecies plasmas are treated adequately by the theory. These models will be useful in improving the numerical EOS tables used by hydrodynamic codes.
Treatment provider's knowledge of the Health and Disability Commissioner's Code of Consumer Rights.
Townshend, Philip L; Sellman, J Douglas
2002-06-01
The Health and Disability Commissioner's (HDC) Code of Health and Disability Consumers' Rights (the Code) defines in law the rights of consumers of health and disability services in New Zealand. In the first few years after its publication, health educators, service providers and the HDC extensively promoted the Code. Providers of health and disability services would be expected to be knowledgeable about the areas covered by the Code if it is routinely used in the development and monitoring of treatment plans. In this study, knowledge of the Code was tested in a random sample of 217 clinical staff that included medical staff, psychologists and counsellors working in Alcohol and Drug Treatment (A&D) centres in New Zealand. Any response showing awareness of a right, regardless of wording, was taken as a positive response, as it was the areas covered by the rights, rather than their actual wording, that was considered to be the important knowledge for providers. The main finding of this research was that 23% of staff surveyed were aware of none of the ten rights in the Code, and only 6% were aware of more than five of the ten rights. Relating these data to results from a wider sample of treatment providers raises the possibility that A&D treatment providers are slightly more aware of the content of the Code than a general sample of health and disability service providers; however, overall awareness of the content of the Code by health providers is very low. These results imply that consumer rights issues are not prominent in the minds of providers, perhaps indicating an ethical blind spot on their part. Ignorance of the content of the Code may indicate that the treatment community does not find it a useful working document, or alternatively that clinicians are content to rely on their own good intentions to preserve the rights of their patients. Further research will be required to explain this lack of knowledge; however, the current situation is that consumers cannot rely on clinicians being aware of consumers' rights in health and disability services.
Video streaming with SHVC to HEVC transcoding
NASA Astrophysics Data System (ADS)
Gudumasu, Srinivas; He, Yuwen; Ye, Yan; Xiu, Xiaoyu
2015-09-01
This paper proposes an efficient Scalable High Efficiency Video Coding (SHVC) to High Efficiency Video Coding (HEVC) transcoder, which can reduce the transcoding complexity significantly and provide a desired trade-off between the transcoding complexity and the transcoded video quality. To reduce the transcoding complexity, some of the coding information in the SHVC bitstream, such as coding unit (CU) depth, prediction mode, merge mode, motion vector information, intra direction information and transform unit (TU) depth information, is mapped and transcoded to a single-layer HEVC bitstream. One major difficulty in transcoding arises when trying to reuse the motion information from the SHVC bitstream, since motion vectors referring to inter-layer reference (ILR) pictures cannot be reused directly in transcoding. Reusing motion information obtained from ILR pictures for those prediction units (PUs) reduces the complexity of the SHVC transcoder greatly, but a significant reduction in picture quality is observed. Pictures corresponding to the intra refresh pictures in the base layer (BL) will be coded as P pictures in the enhancement layer (EL) in the SHVC bitstream, and directly reusing the intra information from the BL for transcoding will not achieve good coding efficiency. To solve these problems, various transcoding technologies are proposed. The proposed technologies offer different trade-offs between transcoding speed and transcoding quality. They are implemented on the basis of the reference software SHM-6.0 and HM-14.0 for the two-layer spatial scalability configuration. Simulations show that the proposed SHVC software transcoder reduces the transcoding complexity by up to 98-99% in the low-complexity transcoding mode when compared with the cascaded re-encoding method. The transcoder performance at various bitrates with different transcoding modes is compared in terms of transcoding speed and transcoded video quality.
Tanana, Michael; Hallgren, Kevin A; Imel, Zac E; Atkins, David C; Srikumar, Vivek
2016-06-01
Motivational interviewing (MI) is an efficacious treatment for substance use disorders and other problem behaviors. Studies on MI fidelity and mechanisms of change typically use human raters to code therapy sessions, which requires considerable time, training, and financial costs. Natural language processing techniques have recently been utilized for coding MI sessions using machine learning techniques, rather than human coders, and preliminary results have suggested these methods hold promise. The current study extends this previous work by introducing two natural language processing models for automatically coding MI sessions via computer. The two models differ in the way they semantically represent session content, utilizing either 1) simple discrete sentence features (DSF model) or 2) more complex recursive neural networks (RNN model). Utterance- and session-level predictions from these models were compared to ratings provided by human coders using a large sample of MI sessions (N=341 sessions; 78,977 clinician and client talk turns) from 6 MI studies. Results show that the DSF model generally had slightly better performance than the RNN model. The DSF model had "good" or higher utterance-level agreement with human coders (Cohen's kappa>0.60) for open and closed questions, affirm, giving information, and follow/neutral (all therapist codes); considerably higher agreement was obtained for session-level indices, and many estimates were competitive with human-to-human agreement. However, there was poor agreement for client change talk, client sustain talk, and therapist MI-inconsistent behaviors. Natural language processing methods provide accurate representations of human-derived behavioral codes and could offer substantial improvements to the efficiency and scale at which MI mechanisms-of-change research and fidelity monitoring are conducted.
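Agreement with human coders is reported here as Cohen's kappa, which discounts the agreement two raters would reach by chance. A minimal self-contained implementation for two aligned sequences of categorical codes:

```python
from collections import Counter

def cohens_kappa(r1, r2):
    """Cohen's kappa for two equal-length sequences of categorical codes."""
    assert len(r1) == len(r2)
    n = len(r1)
    po = sum(a == b for a, b in zip(r1, r2)) / n    # observed agreement
    c1, c2 = Counter(r1), Counter(r2)
    pe = sum(c1[k] * c2[k] for k in c1) / n**2      # chance agreement
    return (po - pe) / (1 - pe)

# Two raters agree on 4 of 5 labels; kappa = (0.8 - 0.36) / 0.64 = 0.6875.
print(cohens_kappa(list("AABBC"), list("ABBBC")))
```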
Mubeen; K.R., Vijayalakshmi; Bhuyan, Sanat Kumar; Panigrahi, Rajat G; Priyadarshini, Smita R; Misra, Satyaranjan; Singh, Chandravir
2014-01-01
Objectives: The identification and radiographic interpretation of periapical bone lesions is important for accurate diagnosis and treatment. The present study was undertaken to assess the feasibility and diagnostic accuracy of colour-coded digital radiographs in terms of the presence and size of lesions, and to compare the diagnostic accuracy of colour-coded digital images with direct digital images and conventional radiographs for assessing periapical lesions. Materials and Methods: Sixty human dry cadaver hemimandibles were obtained, and periapical lesions were created in first and second premolar teeth at the junction of cancellous and cortical bone using a micromotor handpiece and carbide burs of sizes 2, 4 and 6. After each successive use of the round burs, a conventional, RVG and colour-coded image was taken for each specimen. All the images were evaluated by three observers. The diagnostic accuracy for each bur and image mode was calculated statistically. Results: Our results showed good interobserver (kappa > 0.61) agreement for the different radiographic techniques and for the different bur sizes. Conventional radiography outperformed digital radiography in diagnosing periapical lesions made with the size 2 bur; both were equally diagnostic for lesions made with larger bur sizes. The colour-coding method was the least accurate among all the techniques. Conclusion: Conventional radiography traditionally forms the backbone of the diagnosis, treatment planning and follow-up of periapical lesions. Direct digital imaging is an efficient technique in the diagnostic sense. Colour coding of digital radiography was feasible but less accurate; however, this imaging technique, like any other, needs to be studied continuously with an emphasis on the safety of patients and the diagnostic quality of images. PMID:25584318
Measurement of neutron spectra in the AWE workplace using a Bonner sphere spectrometer.
Danyluk, Peter
2010-12-01
A Bonner sphere spectrometer has been used to measure the neutron spectra in eight different workplace areas at AWE (Atomic Weapons Establishment). The spectra were analysed by the National Physical Laboratory using their principal unfolding code STAY'SL, and the results were also analysed by AWE using a bespoke parametrised unfolding code. The bespoke code was designed specifically for the AWE workplace and is very simple to use. Both codes gave results in good agreement. It was found that the measured fluence rate varied from 2 to 70 neutrons cm⁻² s⁻¹ (± 10%) and the ambient dose equivalent H*(10) varied from 0.5 to 57 µSv h⁻¹ (± 20%). A detailed description of the development and use of the bespoke code is presented.
Numerical simulation of experiments in the Giant Planet Facility
NASA Technical Reports Server (NTRS)
Green, M. J.; Davy, W. C.
1979-01-01
A series of existing computer codes is used to numerically simulate ablation experiments in the Giant Planet Facility. Of primary importance is the simulation of the low Mach number shock layer that envelops the test model. The RASLE shock-layer code, used in the Jupiter entry probe heat-shield design, is adapted to the experimental conditions. RASLE predictions for radiative and convective heat fluxes are in good agreement with calorimeter measurements. In simulating carbonaceous ablation experiments, the RASLE code is coupled directly with the CMA material response code. For the graphite models, predicted and measured recessions agree very well. Predicted recession for the carbon phenolic models is 50% higher than that measured. This is the first time codes used for the Jupiter probe design have been compared with experiments.
Adaptive decoding of convolutional codes
NASA Astrophysics Data System (ADS)
Hueske, K.; Geldmacher, J.; Götze, J.
2007-06-01
Convolutional codes, which are frequently used as error correction codes in digital transmission systems, are generally decoded using the Viterbi decoder. On the one hand, the Viterbi decoder is an optimum maximum likelihood decoder, i.e. the most probable transmitted code sequence is obtained. On the other hand, the mathematical complexity of the algorithm depends only on the code used, not on the number of transmission errors. To reduce the complexity of the decoding process under good transmission conditions, an alternative syndrome-based decoder is presented. The reduction of complexity is realized by two different approaches: syndrome zero sequence deactivation and path metric equalization. The two approaches enable an easy adaptation of the decoding complexity to different transmission conditions, which results in a trade-off between decoding complexity and error correction performance.
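The syndrome idea can be shown in a few lines. For a rate-1/2 feedforward code with generator polynomials g1 and g2, the sequence s = r1*g2 XOR r2*g1 (binary polynomial convolutions) is all-zero exactly when the received pair is a valid code sequence, so a decoder can stay inactive while the syndrome stays zero. The Python sketch below follows these textbook assumptions; the paper's deactivation and path-metric-equalization logic is not reproduced.

```python
# "Syndrome zero" check for a rate-1/2 feedforward convolutional code.

def conv_mod2(a, g):
    """Binary polynomial multiplication (convolution mod 2) of bit lists."""
    out = [0] * (len(a) + len(g) - 1)
    for i, ai in enumerate(a):
        if ai:
            for j, gj in enumerate(g):
                out[i + j] ^= gj
    return out

def encode(d, g1, g2):
    return conv_mod2(d, g1), conv_mod2(d, g2)

def syndrome(r1, r2, g1, g2):
    # r1*g2 XOR r2*g1 cancels to zero for any error-free code sequence.
    s1, s2 = conv_mod2(r1, g2), conv_mod2(r2, g1)
    return [a ^ b for a, b in zip(s1, s2)]

g1, g2 = [1, 1, 1], [1, 0, 1]          # standard K=3 example generators
r1, r2 = encode([1, 0, 1, 1], g1, g2)
print(any(syndrome(r1, r2, g1, g2)))   # False: error-free, decoder may idle
r1[2] ^= 1                             # inject a single channel error
print(any(syndrome(r1, r2, g1, g2)))   # True: run the full decoder
```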
NASA Technical Reports Server (NTRS)
August, Richard; Kaza, Krishna Rao V.
1988-01-01
An investigation of the vibration, performance, flutter, and forced response of the large-scale propfan, SR7L, and its aeroelastic model, SR7A, has been performed by applying available structural and aeroelastic analytical codes and then correlating measured and calculated results. Finite element models of the blades were used to obtain modal frequencies, displacements, stresses and strains. These values were then used in conjunction with a 3-D, unsteady, lifting surface aerodynamic theory for the subsequent aeroelastic analyses of the blades. The agreement between measured and calculated frequencies and mode shapes for both models is very good. Calculated power coefficients correlate well with those measured for low advance ratios. Flutter results show that both propfans are stable at their respective design points. There is also good agreement between calculated and measured blade vibratory strains due to excitation resulting from yawed flow for the SR7A propfan. The similarity of structural and aeroelastic results show that the SR7A propfan simulates the SR7L characteristics.
Unsteady-flow-field predictions for oscillating cascades
NASA Technical Reports Server (NTRS)
Huff, Dennis L.
1991-01-01
The unsteady flow field around an oscillating cascade of flat plates with zero stagger was studied by using a time-marching Euler code. This case has an exact solution based on linear theory and served as a model problem for studying pressure wave propagation in the numerical solution. The importance of using proper unsteady boundary conditions, grid resolution, and time step size was shown for a moderate reduced frequency. Results show that an approximate nonreflecting boundary condition based on linear theory does a good job of minimizing reflections from the inflow and outflow boundaries and allows the boundaries to be placed closer to the airfoils than when reflective boundaries are used. Stretching the boundary to dampen the unsteady waves is another way to minimize reflections. Grid clustering near the plates captures the unsteady flow field better than uniform grids do, as long as the Courant-Friedrichs-Lewy (CFL) number is less than 1 for a sufficient portion of the grid. Finally, a solution based on an optimization of grid, CFL number, and boundary conditions shows good agreement with linear theory.
TOUGH Simulations of the Updegraff's Set of Fluid and Heat Flow Problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moridis, G.J.; Pruess
1992-11-01
The TOUGH code [Pruess, 1987] for two-phase flow of water, air, and heat in permeable media has been exercised on a suite of test problems originally selected and simulated by C. D. Updegraff [1989]. These include five 'verification' problems for which analytical or numerical solutions are available, and three 'validation' problems that model laboratory fluid and heat flow experiments. All problems could be run without any code modifications. Good and efficient numerical performance, as well as accurate results, were obtained throughout. Additional code verification and validation problems from the literature are briefly summarized, and suggestions are given for proper applications of TOUGH and related codes.
Low bit rate coding of Earth science images
NASA Technical Reports Server (NTRS)
Kossentini, Faouzi; Chung, Wilson C.; Smith, Mark J. T.
1993-01-01
In this paper, the authors discuss compression based on some new ideas in vector quantization and their incorporation in a sub-band coding framework. Several variations are considered, which collectively address many of the individual compression needs within the earth science community. The approach taken in this work is based on some recent advances in the area of variable-rate residual vector quantization (RVQ). This new RVQ method is considered separately and in conjunction with sub-band image decomposition. Very good results are achieved in coding a variety of earth science images. The last section of the paper provides some comparisons that illustrate the improvement in performance attributable to this approach relative to the JPEG coding standard.
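The basic mechanics of residual VQ are easy to sketch: each stage quantizes whatever error the previous stages left behind, so individual codebooks stay small while precision accumulates across stages. A minimal Python illustration follows, using plain k-means stages; the variable-rate design and entropy coding of the paper are not reproduced.

```python
# Minimal residual vector quantizer (RVQ) training sketch.
import numpy as np

def kmeans(x, k, iters=20, seed=0):
    """Plain Lloyd k-means; returns a (k, dim) codebook."""
    rng = np.random.default_rng(seed)
    cb = x[rng.choice(len(x), k, replace=False)].astype(float)
    for _ in range(iters):
        idx = np.argmin(((x[:, None] - cb[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(idx == j):
                cb[j] = x[idx == j].mean(axis=0)
    return cb

def train_rvq(x, stages=3, k=8):
    """Train successive codebooks, each on the previous stage's residuals."""
    codebooks, resid = [], x.astype(float).copy()
    for _ in range(stages):
        cb = kmeans(resid, k)
        idx = np.argmin(((resid[:, None] - cb[None]) ** 2).sum(-1), axis=1)
        resid -= cb[idx]                  # pass the residual to the next stage
        codebooks.append(cb)
    return codebooks
```

With `stages` codebooks of size `k`, the effective codebook size is k**stages while storage and search cost grow only linearly in the number of stages, which is the appeal of the residual structure.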
NASA Technical Reports Server (NTRS)
Hartenstein, Richard G., Jr.
1985-01-01
Computer codes have been developed to analyze antennas on aircraft and in the presence of scatterers. The purpose of this study is to use these codes to develop accurate computer models of various aircraft and antenna systems. The antenna systems analyzed are a P-3B L-Band antenna, an A-7E UHF relay pod antenna, and a traffic advisory antenna system installed on a Bell Long Ranger helicopter. Computer results are compared to measured ones with good agreement. These codes can be used in the design stage of an antenna system to determine the optimum antenna location and save valuable time and costly flight hours.
Cooperative optimization and their application in LDPC codes
NASA Astrophysics Data System (ADS)
Chen, Ke; Rong, Jian; Zhong, Xiaochun
2008-10-01
Cooperative optimization is a new way of finding the global optima of complicated functions of many variables. The proposed algorithm belongs to the class of message passing algorithms and has solid theoretical foundations. It can achieve good coding gains over the sum-product algorithm for LDPC codes. For (6561, 4096) LDPC codes, the proposed algorithm achieves a 2.0 dB gain over the sum-product algorithm at a BER of 4×10-7. The decoding complexity of the proposed algorithm is lower than that of the sum-product algorithm; furthermore, it can achieve a much lower error floor than the sum-product algorithm once Eb/No is higher than 1.8 dB.
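For readers unfamiliar with iterative LDPC decoding, the flavor of these parity-check-driven algorithms can be seen in the simplest member of the family, a hard-decision bit-flipping decoder. The sketch below is neither sum-product nor the cooperative algorithm, just the common baseline both improve upon; the parity-check matrix is a toy example.

```python
# Hard-decision bit-flipping LDPC decoding on a toy parity-check matrix.
import numpy as np

def bit_flip_decode(H, r, max_iter=50):
    """H: (m, n) parity-check matrix over GF(2); r: hard-decision bits."""
    x = r.copy()
    for _ in range(max_iter):
        s = H @ x % 2                   # syndrome: which checks fail
        if not s.any():
            return x                    # valid codeword reached
        votes = H.T @ s                 # failed checks touching each bit
        x[np.argmax(votes)] ^= 1        # flip the most-suspect bit
    return x                            # give up after max_iter

H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]])
r = np.zeros(6, dtype=int)              # all-zero codeword transmitted
r[1] ^= 1                               # one channel error
print(bit_flip_decode(H, r))            # recovers the all-zero word
```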
NASA Astrophysics Data System (ADS)
Barlas, Thanasis; Jost, Eva; Pirrung, Georg; Tsiantas, Theofanis; Riziotis, Vasilis; Navalkar, Sachin T.; Lutz, Thorsten; van Wingerden, Jan-Willem
2016-09-01
Simulations of a stiff rotor configuration of the DTU 10MW Reference Wind Turbine are performed in order to assess the impact of prescribed flap motion on the aerodynamic loads on a blade sectional and rotor integral level. Results of the engineering models used by DTU (HAWC2), TUDelft (Bladed) and NTUA (hGAST) are compared to the CFD predictions of USTUTT-IAG (FLOWer). Results show fairly good comparison in terms of axial loading, while alignment of tangential and drag-related forces across the numerical codes needs to be improved, together with unsteady corrections associated with rotor wake dynamics. The use of a new wake model in HAWC2 shows considerable accuracy improvements.
Recovering 3D particle size distributions from 2D sections
NASA Astrophysics Data System (ADS)
Cuzzi, Jeffrey N.; Olson, Daniel M.
2017-03-01
We discuss different ways to convert observed, apparent particle size distributions from 2D sections (thin sections, SEM maps on planar surfaces, etc.) into true 3D particle size distributions. We give a simple, flexible, and practical method to do this; show which of these techniques gives the most faithful conversions; and provide (online) short computer codes to calculate both 2D-3D recoveries and simulations of 2D observations by random sectioning. The most important systematic bias of 2D sectioning, from the standpoint of most chondrite studies, is an overestimate of the abundance of the larger particles. We show that fairly good recoveries can be achieved from observed size distributions containing 100-300 individual measurements of apparent particle diameter.
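Random sectioning biases apparent sizes: a plane through a sphere of radius R at distance h from its center shows a circle of radius sqrt(R^2 - h^2), so even equal-size spheres yield a spread of smaller apparent diameters (and, for mixed sizes, large spheres are intersected proportionally more often). A quick Monte Carlo check in Python for the monodisperse case; the paper's online codes also perform the polydisperse inversion, which is not attempted here.

```python
# Simulate random plane sections through equal-size spheres of radius R.
import numpy as np

rng = np.random.default_rng(1)
R = 1.0
h = rng.uniform(0.0, R, 100_000)        # distance of cut plane from center
r_apparent = np.sqrt(R**2 - h**2)       # apparent section radius
print(r_apparent.mean())                # ~ pi/4 = 0.785, not the true 1.0
```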
PARC Navier-Stokes code upgrade and validation for high speed aeroheating predictions
NASA Technical Reports Server (NTRS)
Liver, Peter A.; Praharaj, Sarat C.; Seaford, C. Mark
1990-01-01
Applications of the PARC full Navier-Stokes code for hypersonic flowfield and aeroheating predictions around blunt bodies such as the Aeroassist Flight Experiment (AFE) and Aeroassisted Orbital Transfer Vehicle (AOTV) are evaluated. Two-dimensional/axisymmetric and three-dimensional perfect gas versions of the code were upgraded and tested against benchmark wind tunnel cases of hemisphere-cylinder, three-dimensional AFE forebody, and axisymmetric AFE and AOTV aerobrake/wake flowfields. PARC calculations are in good agreement with experimental data and results of similar computer codes. Difficulties encountered in flowfield and heat transfer predictions due to effects of grid density, boundary conditions such as singular stagnation line axis and artificial dissipation terms are presented together with subsequent improvements made to the code. The experience gained with the perfect gas code is being currently utilized in applications of an equilibrium air real gas PARC version developed at REMTECH.
Towards industrial-strength Navier-Stokes codes
NASA Technical Reports Server (NTRS)
Jou, Wen-Huei; Wigton, Laurence B.; Allmaras, Steven R.
1992-01-01
In this paper we discuss our experiences with Navier-Stokes (NS) codes using central differencing (CD) and scalar artificial dissipation (SAD). NS-CDSAD codes have been developed by several researchers. Our results confirm that for typical commercial transport wing and wing/body configurations flying at transonic conditions with fully turbulent boundary layers, NS-CDSAD codes, when used with the Johnson-King turbulence model, are capable of computing pressure distributions in excellent agreement with experimental data. However, results are not as good when laminar boundary layers are present. Exhaustive 2-D grid refinement studies supported by detailed analysis suggest that the numerical errors associated with SAD severely contaminate the solution in the laminar portion of the boundary layer. It is left as a challenge to the CFD community to find and fix these problems and to produce an NS code that converges reliably and properly captures the laminar portion of the boundary layer on a reasonable grid.
BADGER v1.0: A Fortran equation of state library
NASA Astrophysics Data System (ADS)
Heltemes, T. A.; Moses, G. A.
2012-12-01
The BADGER equation of state library was developed to enable inertial confinement fusion plasma codes to model plasmas in the high-density, low-temperature regime more accurately. The code can calculate 1- and 2-T plasmas using the Thomas-Fermi model and an individual electron accounting model. Ion equation of state data can be calculated using an ideal gas model or via a quotidian equation of state with scaled binding energies. Electron equation of state data can be calculated via the ideal gas model or with an adaptation of the screened hydrogenic model with ℓ-splitting. The ionization and equation of state calculations can be done in local thermodynamic equilibrium or in a non-LTE mode using a variant of the Busquet equivalent temperature method. The code was written as a stand-alone Fortran library for ease of implementation by external codes. EOS results for aluminum are presented that show good agreement with the SESAME library, and ionization calculations show good agreement with the FLYCHK code.
Program summary
Program title: BADGERLIB v1.0
Catalogue identifier: AEND_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEND_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 41 480
No. of bytes in distributed program, including test data, etc.: 2 904 451
Distribution format: tar.gz
Programming language: Fortran 90
Computer: 32- or 64-bit PC, or Mac
Operating system: Windows, Linux, MacOS X
RAM: 249.496 kB plus 195.630 kB per isotope record in memory
Classification: 19.1, 19.7
Nature of problem: Equation of state (EOS) calculations are necessary for the accurate simulation of high energy density plasmas. Historically, most EOS codes used in these simulations have relied on an ideal gas model. This model is inadequate for low-temperature, high-density plasma conditions; the gaseous and liquid phases; and the solid phase. The BADGER code was developed to give more realistic EOS data in these regimes.
Solution method: BADGER has multiple, user-selectable models to treat the ions, the average-atom ionization state, and the electrons. Ion models are ideal gas and quotidian equation of state (QEOS); ionization models are Thomas-Fermi and the individual electron accounting method (IEM) formulation of the screened hydrogenic model (SHM) with ℓ-splitting; electron models are ideal gas and a Helmholtz free energy minimization method derived from the SHM. The default equation of state and ionization models are appropriate for plasmas in local thermodynamic equilibrium (LTE). The code can calculate non-LTE equation of state and ionization data using a simplified form of the Busquet equivalent-temperature method.
Restrictions: Physical data are only provided for elements Z=1 to Z=86. Multiple solid phases are not currently supported. Liquid, gas and plasma phases are combined into a generalized "fluid" phase.
Unusual features: BADGER divorces the calculation of average-atom ionization from the electron equation of state model, allowing the user to select the ionization and electron EOS models most appropriate to the simulation. The included ion ideal gas model uses ground-state nuclear spin data to differentiate between isotopes of a given element.
Running time: The example provided takes only a few seconds to run.
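As a flavor of the simplest user-selectable model listed above, the snippet below evaluates an ideal-gas ion EOS. The density and temperature are illustrative, and this is of course a toy analogue of one of the library's models, not an excerpt from the Fortran code.

```python
K_B = 1.380649e-23  # Boltzmann constant [J/K]

def ideal_gas_ion_eos(n_i, T_i):
    """Toy version of the simplest model BADGER exposes: ideal-gas ions.
    n_i: ion number density [m^-3], T_i: ion temperature [K].
    Returns pressure [Pa] and energy density [J/m^3]."""
    p = n_i * K_B * T_i          # P = n k T
    u = 1.5 * n_i * K_B * T_i    # U = (3/2) n k T (monatomic)
    return p, u

p, u = ideal_gas_ion_eos(n_i=1e28, T_i=1e4)
print(f"P = {p:.3e} Pa, U = {u:.3e} J/m^3")
```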
NASA Astrophysics Data System (ADS)
Zhang, H.; Fang, H.; Yao, H.; Maceira, M.; van der Hilst, R. D.
2014-12-01
Recently, Zhang et al. (2014, Pure and Applied Geophysics) developed a joint inversion code incorporating body-wave arrival times and surface-wave dispersion data. The joint inversion code was based on the regional-scale version of the double-difference tomography algorithm tomoDD. The surface-wave inversion part uses the propagator matrix solver in the algorithm DISPER80 (Saito, 1988) for forward calculation of dispersion curves from layered velocity models and the related sensitivities. The application of the joint inversion code to the SAFOD site in central California shows that the fault structure is better imaged in the new model, which fits both the body-wave and surface-wave observations adequately. Here we present a new joint inversion method that solves for the model in the wavelet domain under a sparsity regularization constraint. Compared to the previous method, it has the following advantages: (1) The method is both data- and model-adaptive. The velocity model can be represented by different wavelet coefficients at different scales, which are generally sparse. By constraining the model wavelet coefficients to be sparse, the inversion in the wavelet domain can inherently adapt to the data distribution, so that the model has higher spatial resolution in zones of good data coverage. Fang and Zhang (2014, Geophysical Journal International) have shown the superior performance of the wavelet-based double-difference seismic tomography method compared to the conventional method. (2) For the surface wave inversion, the joint inversion code takes advantage of the recent development of direct inversion of surface wave dispersion data for 3-D variations of shear wave velocity without the intermediate step of phase or group velocity maps (Fang et al., 2014, Geophysical Journal International). A fast marching method is used to compute, at each period, surface wave traveltimes and ray paths between sources and receivers. We will test the new joint inversion code at the SAFOD site to compare its performance with the previous code. We will also select another fault zone, such as the San Jacinto Fault Zone, to better image its structure.
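The sparsity constraint at the heart of the wavelet-domain inversion can be illustrated with a 1-D toy: transform a model to the wavelet domain, soft-threshold the detail coefficients (the proximal step used in sparsity-regularized inversion), and transform back. The signal, wavelet choice ('db4'), and threshold are assumptions here, and PyWavelets is used as a stand-in for the paper's own machinery.

```python
import numpy as np
import pywt  # PyWavelets, assumed available

def soft_threshold(c, t):
    """Soft-thresholding operator used in sparsity-regularized inversion."""
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

# Toy 1-D "velocity model": smooth background plus a sharp fault-like step.
x = np.linspace(0, 1, 256)
model = (3.0 + 0.5 * x + 0.4 * (x > 0.6)
         + 0.05 * np.random.default_rng(1).standard_normal(256))

coeffs = pywt.wavedec(model, 'db4', level=4)           # to wavelet domain
coeffs = [coeffs[0]] + [soft_threshold(c, 0.05) for c in coeffs[1:]]
sparse_model = pywt.waverec(coeffs, 'db4')             # back to space domain
kept = sum(int(np.count_nonzero(c)) for c in coeffs[1:])
print("nonzero detail coefficients kept:", kept)
```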
A Sequential Fluid-mechanic Chemical-kinetic Model of Propane HCCI Combustion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aceves, S M; Flowers, D L; Martinez-Frias, J
2000-11-29
We have developed a methodology for predicting combustion and emissions in a Homogeneous Charge Compression Ignition (HCCI) engine. This methodology combines a detailed fluid mechanics code with a detailed chemical kinetics code. Instead of directly linking the two codes, which would require an extremely long computational time, the methodology consists of first running the fluid mechanics code to obtain temperature profiles as a function of time. These temperature profiles are then used as input to a multi-zone chemical kinetics code. The advantage of this procedure is that a small number of zones (10) is enough to obtain accurate results. This procedure achieves the benefits of linking the fluid mechanics and chemical kinetics codes with a great reduction in computational effort, to a level that can be handled with current computers. The success of this procedure is in large part a consequence of the fact that for much of the compression stroke the chemistry is inactive and thus has little influence on fluid mechanics and heat transfer. Then, when chemistry is active, combustion is rather sudden, leaving little time for interaction between chemistry and fluid mixing and heat transfer. This sequential methodology has been capable of explaining the main characteristics of HCCI combustion observed in experiments. In this paper, we use our model to explore an HCCI engine running on propane. The paper compares experimental and numerical pressure traces, heat release rates, and hydrocarbon and carbon monoxide emissions. The results show excellent agreement, even in parameters that are difficult to predict, such as chemical heat release rates. Carbon monoxide emissions are reasonably well predicted, even though it is intrinsically difficult to make good predictions of CO emissions in HCCI engines. The paper includes a sensitivity study of the effect of the heat transfer correlation on the results of the analysis. Importantly, the paper also presents a numerical study of how parameters such as swirl rate, crevices and ceramic walls could help in reducing HC and CO emissions from HCCI engines.
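A toy analogue of the sequential methodology might look like the sketch below: a stand-in for the fluid-mechanics stage supplies one temperature history per zone, and a one-step Arrhenius reaction stands in for the detailed kinetics code. All rate constants and profiles are illustrative; the point is only the one-way coupling and the staged ignition of the 10 zones.

```python
import numpy as np

# Stage 1 (stand-in for the fluid mechanics code): one temperature
# history per zone over the late compression stroke.
t = np.linspace(0.0, 0.004, 400)                                  # s
zones = 10
T = np.array([900.0 + 40.0*k + 2.0e5*t for k in range(zones)])    # K

# Stage 2 (stand-in for the chemical kinetics code): one-step
# Arrhenius fuel consumption driven by the prescribed temperatures.
A, Ea_R = 2.0e9, 1.5e4        # pre-exponential [1/s], Ea/R [K]; illustrative
Y = np.ones(zones)            # fuel mass fraction per zone
dt = t[1] - t[0]
ign = np.full(zones, np.nan)  # time at which half the fuel is burnt
for i in range(1, len(t)):
    k_rate = A * np.exp(-Ea_R / T[:, i])
    Y *= np.exp(-k_rate * dt)            # exact update for first-order decay
    newly = (Y < 0.5) & np.isnan(ign)
    ign[newly] = t[i]
print("50%-burn time per zone [ms]:", np.round(ign * 1e3, 2))
```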
Lee, Jin Hee; Hong, Ki Jeong; Kim, Do Kyun; Kwak, Young Ho; Jang, Hye Young; Kim, Hahn Bom; Noh, Hyun; Park, Jungho; Song, Bongkyu; Jung, Jae Yun
2013-12-01
A clinically sensible diagnosis grouping system (DGS) is needed for describing pediatric emergency diagnoses for research, medical resource preparedness, and national policy making for pediatric emergency medical care. The Pediatric Emergency Care Applied Research Network (PECARN) successfully developed such a DGS. We developed a modified PECARN DGS based on the different pediatric population of South Korea and validated the system to obtain accurate and comparable epidemiologic data on the pediatric emergent conditions of the selected population. The data source used to develop and validate the modified PECARN DGS was the National Emergency Department Information System of South Korea, which is coded with the International Classification of Diseases, 10th Revision (ICD-10) code system. To develop the modified DGS based on ICD-10 codes, we matched the selected ICD-10 codes with those of the PECARN DGS by the General Equivalence Mappings (GEMs). After converting ICD-10 codes to ICD-9 codes by GEMs, we matched the ICD-9 codes to PECARN DGS categories using the matrix developed by the PECARN group. Lastly, we conducted an expert panel survey using the Delphi method for the remaining diagnosis codes that were not matched. A total of 1879 ICD-10 codes were used in developing the modified DGS. After 1078 (57.4%) of the 1879 ICD-10 codes were assigned to the modified DGS by the GEM and PECARN conversion tools, investigators assigned each of the remaining 801 codes (42.6%) to DGS subgroups through 2 rounds of electronic Delphi surveys, and the remaining 29 codes (4%) were assigned to the modified DGS at the second expert consensus meeting. The modified DGS accounts for 98.7% and 95.2% of diagnoses in the 2008 and 2009 National Emergency Department Information System data sets, respectively. The modified DGS also exhibited strong construct validity using the concepts of age, sex, site of care, and season, and reflected the 2009 outbreak of H1N1 influenza in Korea. We developed and validated a clinically feasible and sensible DGS for describing pediatric emergent conditions in Korea. The modified PECARN DGS showed good comprehensiveness and demonstrated reliable construct validity. This modified DGS, based on the PECARN DGS framework, may be effectively implemented for research, reporting, and resource planning in the pediatric emergency system of South Korea.
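The mapping pipeline described above (ICD-10 to ICD-9 via GEMs, then ICD-9 to a DGS subgroup, with unmatched codes routed to Delphi review) can be sketched as a pair of lookups. The tables below are tiny hypothetical stand-ins for the real GEM tables and PECARN matrix.

```python
# Toy stand-ins for the GEM table and the PECARN ICD-9 -> DGS matrix.
GEM_ICD10_TO_ICD9 = {"J10.1": "487.1", "S01.0": "873.0"}
ICD9_TO_DGS = {"487.1": "respiratory - influenza",
               "873.0": "trauma - open wound of head"}

def map_to_dgs(icd10):
    """Return (subgroup, needs_delphi): unmatched codes go to expert review."""
    icd9 = GEM_ICD10_TO_ICD9.get(icd10)
    dgs = ICD9_TO_DGS.get(icd9)
    return (dgs, False) if dgs else (None, True)

for code in ["J10.1", "S01.0", "Z99.9"]:
    print(code, "->", map_to_dgs(code))
```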
Multi-code analysis of scrape-off layer filament dynamics in MAST
NASA Astrophysics Data System (ADS)
Militello, F.; Walkden, N. R.; Farley, T.; Gracias, W. A.; Olsen, J.; Riva, F.; Easy, L.; Fedorczak, N.; Lupelli, I.; Madsen, J.; Nielsen, A. H.; Ricci, P.; Tamain, P.; Young, J.
2016-11-01
Four numerical codes are employed to investigate the dynamics of scrape-off layer filaments in tokamak relevant conditions. Experimental measurements were taken in the MAST device using visual camera imaging, which allows the evaluation of the perpendicular size and velocity of the filaments, as well as the combination of density and temperature associated with the perturbation. A new algorithm based on the light emission integrated along the field lines associated with the position of the filament is developed to ensure that it is properly detected and tracked. The filaments are found to have velocities of the order of 1 km s⁻¹, a perpendicular diameter of around 2-3 cm and a density amplitude 2-3.5 times the background plasma. 3D and 2D numerical codes (the STORM module of BOUT++, GBS, HESEL and TOKAM3X) are used to reproduce the motion of the observed filaments with the purpose of validating the codes and of better understanding the experimental data. Good agreement is found between the 3D codes. The seeded filament simulations are also able to reproduce the dynamics observed in experiments with accuracy up to the experimental error bar levels. In addition, the numerical results showed that filaments characterised by similar size and light emission intensity can have quite different dynamics if the pressure perturbation is distributed differently between density and temperature components. As an additional benefit, several observations on the dynamics of the filaments in the presence of evolving temperature fields were made and led to a better understanding of the behaviour of these coherent structures.
Recent improvements of reactor physics codes in MHI
NASA Astrophysics Data System (ADS)
Kosaka, Shinya; Yamaji, Kazuya; Kirimura, Kazuki; Kamiyama, Yohei; Matsumoto, Hideki
2015-12-01
This paper introduces recent improvements to reactor physics codes at Mitsubishi Heavy Industries, Ltd. (MHI). MHI has developed a new neutronics design code system, Galaxy/Cosmo-S (GCS), for PWR core analysis. After TEPCO's Fukushima Daiichi accident, it is required to consider design extension conditions that were not covered explicitly by the former safety licensing analyses. Under these circumstances, MHI made several improvements to the GCS code system. A new resonance calculation model for the lattice physics code and a homogeneous cross section representation model for the core simulator have been developed to cover a wider range of core conditions, corresponding to severe accident states such as anticipated transient without scram (ATWS) and criticality evaluation of a dried-up spent fuel pit. As a result of these improvements, the GCS code system has very wide calculation applicability, with good accuracy for any core conditions as long as the fuel is not damaged. In this paper, the outline of the GCS code system is described briefly and recent relevant development activities are presented.
NASA Technical Reports Server (NTRS)
Smith, Crawford F.; Podleski, Steve D.
1993-01-01
The proper use of a computational fluid dynamics code requires a good understanding of the particular code being applied. In this report the application of CFL3D, a thin-layer Navier-Stokes code, is compared with the results obtained from PARC3D, a full Navier-Stokes code. In order to gain an understanding of the use of this code, a simple problem was chosen in which several key features of the code could be exercised. The problem chosen is a cone in supersonic flow at an angle of attack. The issues of grid resolution, grid blocking, and multigridding with CFL3D are explored. The use of multigridding resulted in a significant reduction in the computational time required to solve the problem. Solutions obtained are compared with the results using the full Navier-Stokes equations solver PARC3D. The results obtained with the CFL3D code compared well with the PARC3D solutions.
NASA Astrophysics Data System (ADS)
Grupp, F.
2004-06-01
We present a new opacity sampling model atmosphere code, named MAFAGS-OS. This code, designed for stars from A0 down to G9 on solar and metal-poor main sequences and up to an evolutionary stage represented by the turnoff, is introduced in terms of its basic input physics and modelling techniques. Fe I bound-free cross-sections of Bautista (1997) are used and convection is treated according to Canuto & Mazzitelli (1991). A convection efficiency parameter α_cm of 0.82 is used, as determined by Bernkopf (1998) from stellar evolution requirements. Within the opacity sampling process, special attention is paid to line selection. We show that a selection criterion in which lines are chosen by their opacity weighted relative to the continuous background opacity is useful and valid. The solar model calculated using this new code is shown to fit the measured solar flux distribution. It is also tested against the measured solar colours and leads to U-B=0.21 and B-V=0.64, in good agreement with observation. Comparison with measured centre-to-limb continuum data shows only small improvement with respect to opacity-sampling type model atmospheres. This is the first of a series of two papers. Paper II will deal with temperature determination using Balmer lines and the infrared-flux method; furthermore, it will present three "standard" stars analysed using this new model.
Energy deposition calculated by PHITS code in Pb spallation target
NASA Astrophysics Data System (ADS)
Yu, Quanzhi
2016-01-01
Energy deposition in a Pb spallation target irradiated by high-energy protons was calculated with the PHITS 2.52 code. The energy deposition and neutron production calculated by PHITS were validated, and the results show good agreement between the simulations and the experimental data. A detailed comparison shows that PHITS overestimated the total energy deposition by about 15% relative to the experimental data. For the energy deposition along the length of the Pb target, the discrepancy appeared mainly at the front part of the target. The calculation indicates that most of the energy deposition comes from ionization by the primary protons and the secondary particles produced. With the event generator mode of PHITS, the deposited energy distribution for the particles and the light nuclei is presented for the first time. It indicates that primary protons with energies above 100 MeV are the largest contributors to the total energy deposition. The energy depositions peaking at 10 MeV and 0.1 MeV are mainly caused by the electrons, pions, d, t, 3He and α particles during the cascade process and the evaporation process, respectively. The energy deposition densities caused by different proton beam profiles are also calculated and compared. Such calculations and analyses are very helpful for better understanding the physical mechanism of energy deposition in the spallation target, and highly useful for the thermal-hydraulic design of the target.
A Hybrid Task Graph Scheduler for High Performance Image Processing Workflows.
Blattner, Timothy; Keyrouz, Walid; Bhattacharyya, Shuvra S; Halem, Milton; Brady, Mary
2017-12-01
Designing applications for scalability is key to improving their performance in hybrid and cluster computing. Scheduling code to utilize parallelism is difficult, particularly when dealing with data dependencies, memory management, data motion, and processor occupancy. The Hybrid Task Graph Scheduler (HTGS) is an abstract execution model, framework, and API that increases programmer productivity when implementing hybrid workflows for multi-core and multi-GPU systems. HTGS manages dependencies between tasks, represents CPU and GPU memories independently, overlaps computations with disk I/O and memory transfers, keeps multiple GPUs occupied, and uses all available compute resources. Through these abstractions, data motion and memory are explicit; this makes data locality decisions more accessible. To demonstrate the HTGS application program interface (API), we present implementations of two example algorithms: (1) a matrix multiplication that shows how easily task graphs can be used; and (2) a hybrid implementation of microscopy image stitching that reduces code size by ≈43% compared to a manually coded hybrid workflow implementation and showcases the minimal overhead of task graphs in HTGS. Both of the HTGS-based implementations show good performance. In image stitching, the HTGS implementation achieves performance similar to the hybrid workflow implementation. Matrix multiplication with HTGS achieves 1.3× and 1.8× speedups over the multi-threaded OpenBLAS library for 16k × 16k and 32k × 32k matrices, respectively.
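HTGS itself is a C++ framework, but the core idea, tasks connected by queues so that I/O and compute overlap, can be sketched in a few lines of Python. The two-stage pipeline below is a toy analogue under that assumption, not the HTGS API.

```python
import threading, queue, time

def stage(name, fn, inbox, outbox):
    """One node of the task graph: consume from inbox, produce to outbox."""
    def run():
        while True:
            item = inbox.get()
            if item is None:               # poison pill: shut the stage down
                if outbox is not None:
                    outbox.put(None)
                break
            result = fn(item)
            if outbox is not None:
                outbox.put(result)
    t = threading.Thread(target=run, name=name)
    t.start()
    return t

q_read, q_work = queue.Queue(), queue.Queue()
reader = stage("read", lambda i: (time.sleep(0.01), i)[1], q_read, q_work)  # fake disk I/O
worker = stage("compute", lambda i: print("tile", i, "done"), q_work, None)

for i in range(4):
    q_read.put(i)        # reading tile i+1 overlaps computing tile i
q_read.put(None)
reader.join(); worker.join()
```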
Post-Test Analysis of 11% Break at PSB-VVER Experimental Facility using Cathare 2 Code
NASA Astrophysics Data System (ADS)
Sabotinov, Luben; Chevrier, Patrick
The French best-estimate thermal-hydraulic computer code CATHARE 2 V2.5_1 was used for post-test analysis of the experiment "11% upper plenum break", conducted at the large-scale test facility PSB-VVER in Russia. The PSB rig is a 1:300 scale model of a VVER-1000 NPP. A computer model was developed for CATHARE 2 V2.5_1, taking into account all important components of the PSB facility: the reactor model (lower plenum, core, bypass, upper plenum, downcomer), 4 separate loops, the pressurizer, horizontal multitube steam generators, and the break section. The secondary side is represented by a recirculation model. A large number of sensitivity calculations were performed regarding break modeling, reactor pressure vessel modeling, counter-current flow modeling, hydraulic losses, and heat losses. The comparison between calculated and experimental results shows good prediction of the basic thermal-hydraulic phenomena and parameters such as pressures, temperatures, void fractions, and loop seal clearance. The experimental and calculated results are most sensitive with respect to the fuel cladding temperature, which shows a periodic behavior. With the applied CATHARE 1D modeling, the global thermal-hydraulic parameters and the core heat-up were reasonably well predicted.
Fusion product losses due to fishbone instabilities in deuterium JET plasmas
NASA Astrophysics Data System (ADS)
Kiptily, V. G.; Fitzgerald, M.; Goloborodko, V.; Sharapov, S. E.; Challis, C. D.; Frigione, D.; Graves, J.; Mantsinen, M. J.; Beaumont, P.; Garcia-Munoz, M.; Perez von Thun, C.; Rodriguez, J. F. R.; Darrow, D.; Keeling, D.; King, D.; McClements, K. G.; Solano, E. R.; Schmuck, S.; Sips, G.; Szepesi, G.; Contributors, JET
2018-01-01
During the development of a high-performance hybrid scenario for future deuterium-tritium experiments on the Joint European Torus, an increased level of fast ion losses in the MeV energy range was observed during the instability of high-frequency n = 1 fishbones. The fishbones are excited during deuterium neutral beam injection combined with ion cyclotron heating. The frequency range of the fishbones, 10-25 kHz, indicates that they are driven by a resonant interaction with the NBI-produced deuterium beam ions in the energy range ⩽120 keV. The fast particle losses in a much higher energy range are measured with a fast ion loss detector, and the data show an expulsion of deuterium plasma fusion products, 1 MeV tritons and 3 MeV protons, during the fishbone bursts. An MHD mode analysis with the MISHKA code combined with the nonlinear wave-particle interaction code HAGIS shows that the loss of toroidal symmetry caused by the n = 1 fishbones strongly affects the confinement of non-resonant high-energy fusion-born tritons and protons by perturbing their orbits and expelling them. This modelling is in good agreement with the experimental data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, J.; Alpan, F. A.; Fischer, G.A.
2011-07-01
The traditional two-dimensional (2D)/one-dimensional (1D) SYNTHESIS methodology has been widely used to calculate fast neutron (>1.0 MeV) fluence exposure to the reactor pressure vessel in the belt-line region. However, this methodology cannot be expected to provide accurate fast neutron fluence calculations at elevations far above or below the active core region. A three-dimensional (3D) parallel discrete ordinates calculation for ex-vessel neutron dosimetry on a Westinghouse 4-Loop XL Pressurized Water Reactor has been performed. It shows good agreement between the calculated and measured results. Furthermore, the results show very different fast neutron flux values at some of the former plate locations and at elevations above and below the active core from those calculated by the 2D/1D SYNTHESIS method. This indicates that for certain irregular reactor internal structures, where the fast neutron flux has a very strong local effect, a 3D transport method is required to calculate accurate fast neutron exposure. (authors)
Should Researchers Protect the Good Name and Reputation of Institutions in Which Research Is Done?
ERIC Educational Resources Information Center
Uys, L. R.
2008-01-01
The article explores the issue of protecting the good name and reputation of institutions and organisations in which research is being done. It explores current ethical codes in this regard, as well as legal provision for such protection. The issue of balancing the right of the researchers to share information about institutions in which research…
Reliability and throughput issues for optical wireless and RF wireless systems
NASA Astrophysics Data System (ADS)
Yu, Meng
The fast development of wireless communication technologies has two main trends. On one hand, in point-to-point communications, the demand for higher throughput has called for the emergence of wireless broadband techniques, including optical wireless (OW). On the other hand, wireless networks are becoming pervasive. New applications of wireless networks call for more flexible system infrastructures beyond the point-to-point prototype to achieve better performance. This dissertation investigates two topics on the reliability and throughput of new wireless technologies. The first topic is the capacity of, and practical forward error control strategies for, OW systems. We investigate the performance of OW systems under weak atmospheric turbulence. We first investigate the capacity and power allocation of multi-laser, multi-detector systems. Our results show that uniform power allocation is a practically optimal solution for parallel channels. We also investigate the performance of Reed-Solomon (RS) codes and turbo codes for OW systems, and present RS codes as good candidates for OW systems. The second topic targets user cooperation in wireless networks. We evaluate the relative merits of amplify-forward (AF) and decode-forward (DF) in practical scenarios. Both analysis and simulations show that the overall system performance is critically affected by the quality of the inter-user channel. Following this result, we investigate two schemes to improve the overall system performance. We first investigate the impact of the relay location on the overall system performance and determine the optimal location of the relay. A best-selective single-relay system is proposed and evaluated. Through analysis of the average capacity and outage, we show that a small candidate pool of 3 to 5 relays suffices to reap most of the "geometric" gain available to a selective system. Second, we propose a new user cooperation scheme that provides an effectively better inter-user channel. Most user cooperation protocols work in a time-sharing manner, where a node forwards others' messages and sends its own message in different sections of a provisioned time slot. In the proposed scheme, the two messages are encoded together into a single codeword using network coding and transmitted in the given time slot. We also propose a general multiple-user cooperation framework. Under this framework, we show that network coding can achieve better diversity and provide effectively better inter-user channels than time sharing. The last part of the dissertation focuses on multi-relay packet transmission. We propose an adaptive and distributed coding scheme for the relay nodes to adaptively cooperate and forward messages. The adaptive scheme shows performance gains over fixed schemes. We then shift our viewpoint and represent the network as consisting partly of encoders and partly of decoders.
Error control techniques for satellite and space communications
NASA Technical Reports Server (NTRS)
Costello, Daniel J., Jr.
1990-01-01
An expurgated upper bound on the event error probability of trellis coded modulation is presented. This bound is used to derive a lower bound on the minimum achievable free Euclidean distance d_free of trellis codes. It is shown that the dominant parameters for both bounds, the expurgated error exponent and the asymptotic d_free growth rate, respectively, can be obtained from the cutoff rate R_0 of the transmission channel by a simple geometric construction, making R_0 the central parameter for finding good trellis codes. Several constellations are optimized with respect to the bounds.
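For intuition about the central role of R_0: for BPSK on the AWGN channel a standard closed form is R_0 = 1 - log2(1 + e^(-Es/N0)), which the sketch below evaluates; the SNR grid is arbitrary.

```python
import numpy as np

def r0_bpsk(es_n0_db):
    """Cutoff rate R_0 (bits/use) for BPSK over AWGN:
    R_0 = 1 - log2(1 + exp(-Es/N0)), the standard closed form."""
    es_n0 = 10 ** (es_n0_db / 10)
    return 1 - np.log2(1 + np.exp(-es_n0))

for snr in [0, 2, 4, 6]:
    print(f"Es/N0 = {snr} dB  ->  R0 = {r0_bpsk(snr):.3f} bits/use")
```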
Effects of pulse width and coding on radar returns from clear air
NASA Technical Reports Server (NTRS)
Cornish, C. R.
1983-01-01
In atmospheric radar studies it is desired to obtain maximum information about the atmosphere and to use efficiently the radar transmitter and processing hardware. Large pulse widths are used to increase the signal to noise ratio since clear air returns are generally weak and maximum height coverage is desired. Yet since good height resolution is equally important, pulse compression techniques such as phase coding are employed to optimize the average power of the transmitter. Considerations in implementing a coding scheme and subsequent effects of an impinging pulse on the atmosphere are investigated.
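A classic choice of phase code in such schemes is a Barker sequence, whose matched-filter output concentrates the pulse energy into a narrow peak. The sketch below, a generic illustration not tied to any particular radar in the text, shows the 13-element Barker code's autocorrelation: a peak of 13 with sidelobes no larger than 1, which is what buys back height resolution from a long transmitted pulse.

```python
import numpy as np

# 13-element Barker code: the classic binary phase code for pulse
# compression; its aperiodic autocorrelation peaks at 13 with
# sidelobe magnitudes <= 1.
barker13 = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1])
acf = np.correlate(barker13, barker13, mode="full")
print(acf)   # peak of 13 in the centre, |sidelobes| <= 1
```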
ERIC Educational Resources Information Center
Ribot, Krystal M.; Hoff, Erika
2014-01-01
Relations between bilingual children's patterns of conversational code-switching (responding to one language with another), the balance of their dual language input, and their expressive and receptive proficiency in two languages were examined in 115 2½-year-old simultaneous Spanish-English bilinguals in the U.S. Children were more likely to…
Evolutionary Computation with Spatial Receding Horizon Control to Minimize Network Coding Resources
Leeson, Mark S.
2014-01-01
The minimization of network coding resources, such as coding nodes and links, is a challenging task, not only because it is an NP-hard problem, but also because the problem scale is huge; for example, real-world networks may have thousands or even millions of nodes and links. Genetic algorithms (GAs) have good potential for resolving NP-hard problems like the network coding problem (NCP), but as population-based algorithms, they often confront serious scalability and applicability problems when applied to large- or huge-scale systems. Inspired by temporal receding horizon control in control engineering, this paper proposes a novel spatial receding horizon control (SRHC) strategy as a network partitioning technology, and then designs an efficient GA to tackle the NCP. Traditional network partitioning methods can be viewed as a special case of the proposed SRHC, namely one-step-wide SRHC, whilst the method in this paper is a generalized N-step-wide SRHC, which makes better use of global information about network topologies. Besides the SRHC strategy, some useful designs are also reported in this paper. The advantages of the proposed SRHC and GA for the NCP are illustrated by extensive experiments, and they have good potential to be extended to other large-scale complex problems. PMID:24883371
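For readers unfamiliar with the GA side, the skeleton below shows the generic loop (selection, one-point crossover, bit-flip mutation) such an approach builds on. The fitness function is a deliberately crude stand-in that counts active coding nodes with a feasibility penalty; none of the SRHC partitioning machinery is represented.

```python
import random
random.seed(0)

N = 40                                   # candidate coding nodes (toy size)

def fitness(bits):
    """Toy stand-in for the NCP evaluation: count active coding nodes,
    with a penalty if too few remain to keep the network 'feasible'."""
    active = sum(bits)
    return active + (1000 if active < 5 else 0)

def ga(pop_size=30, gens=100, p_mut=0.02):
    pop = [[random.randint(0, 1) for _ in range(N)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)
        elite = pop[: pop_size // 2]                 # truncation selection
        children = []
        while len(children) < pop_size - len(elite):
            a, b = random.sample(elite, 2)
            cut = random.randrange(1, N)
            child = a[:cut] + b[cut:]                # one-point crossover
            child = [g ^ (random.random() < p_mut) for g in child]  # mutation
            children.append(child)
        pop = elite + children
    best = min(pop, key=fitness)
    return best, fitness(best)

best, f = ga()
print("active coding nodes in best individual:", f)
```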
Neural network for image compression
NASA Astrophysics Data System (ADS)
Panchanathan, Sethuraman; Yeap, Tet H.; Pilache, B.
1992-09-01
In this paper, we propose a new scheme for image compression using neural networks. Image data compression deals with minimizing the amount of data required to represent an image while maintaining an acceptable quality. Several image compression techniques have been developed in recent years. We note that the coding performance of these techniques may be improved by employing adaptivity. Over the last few years, neural networks have emerged as an effective tool for solving a wide range of problems involving adaptivity and learning. A multilayer feed-forward neural network trained using the backward error propagation algorithm is used in many applications. However, this model is not suitable for image compression because of its poor coding performance. Recently, a self-organizing feature map (SOFM) algorithm has been proposed which yields good coding performance. However, this algorithm requires a long training time because the network starts with random initial weights. In this paper we use the backward error propagation algorithm (BEP) to quickly obtain the initial weights, which are then used to speed up the training required by the SOFM algorithm. The proposed approach (BEP-SOFM) combines the advantages of the two techniques and, hence, achieves good coding performance in a shorter training time. Our simulation results demonstrate the potential gains of the proposed technique.
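The SOFM half of the scheme is a Kohonen codebook update; the sketch below trains a small vector-quantization codebook and accepts the initial weights as an argument, mirroring the idea that a BEP-derived initialization (rather than a random one) shortens SOFM training. The data, map size, and learning schedules are illustrative assumptions.

```python
import numpy as np

def train_sofm(data, init_codebook, epochs=20, lr0=0.5, radius0=2.0):
    """1-D Kohonen (SOFM) codebook training for vector quantization.
    Passing a good init_codebook (e.g. weights from a pre-trained
    network, as the BEP-SOFM scheme suggests) shortens training
    compared to random initialization."""
    cb = init_codebook.copy()
    k = len(cb)
    for e in range(epochs):
        lr = lr0 * (1 - e / epochs)
        radius = max(radius0 * (1 - e / epochs), 0.5)
        for x in data:
            winner = np.argmin(np.linalg.norm(cb - x, axis=1))
            d = np.abs(np.arange(k) - winner)
            h = np.exp(-(d / radius) ** 2)          # neighbourhood function
            cb += lr * h[:, None] * (x - cb)        # pull neighbours toward x
    return cb

rng = np.random.default_rng(0)
blocks = rng.normal(size=(500, 16))                 # toy 4x4 image blocks
codebook = train_sofm(blocks, rng.normal(size=(8, 16)))
print(codebook.shape)                               # 8 code vectors of length 16
```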
Chaotic Image Encryption of Regions of Interest
NASA Astrophysics Data System (ADS)
Xiao, Di; Fu, Qingqing; Xiang, Tao; Zhang, Yushu
Since different regions of an image have different importance, only the important information of the image regions in which users are really interested needs to be encrypted and emphatically protected in some special multimedia applications. However, the regions of interest (ROI) are usually irregular parts, such as the face and the eyes. Assuming the bulk data are transmitted without damage, we propose a chaotic image encryption algorithm for ROI. ROI with irregular shapes are chosen and detected arbitrarily. The chaos-based image encryption algorithm, with scrambling, S-box and diffusion parts, is then used to encrypt the ROI. Further, the whole image is compressed with Huffman coding. At last, a message authentication code (MAC) of the compressed image is generated based on chaotic maps. The simulation results show that the encryption algorithm has a good security level and can resist various attacks. Moreover, the compression method improves the storage and transmission efficiency to some extent, and the MAC ensures the integrity of the transmitted data.
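Of the algorithm's several stages, the diffusion step is the easiest to sketch: a chaotic keystream XORed with the ROI bytes. The logistic-map parameters below are illustrative assumptions, and the scrambling, S-box, Huffman, and MAC stages are omitted.

```python
def logistic_keystream(n, x0=0.612345, r=3.99):
    """Byte keystream from the logistic map x <- r*x*(1-x);
    x0 and r act as the (illustrative) secret key."""
    x, out = x0, []
    for _ in range(n):
        x = r * x * (1 - x)
        out.append(int(x * 256) & 0xFF)
    return out

def xor_bytes(data, key):
    """Diffusion by XOR; applying it twice with the same key decrypts."""
    return bytes(d ^ k for d, k in zip(data, key))

roi = bytes(range(16))                       # stand-in for ROI pixel bytes
ks = logistic_keystream(len(roi))
enc = xor_bytes(roi, ks)
assert xor_bytes(enc, ks) == roi             # decryption restores the ROI
print(enc.hex())
```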
NASA Astrophysics Data System (ADS)
Islam, Muhammad Rabiul; Sakib-Ul-Alam, Md.; Nazat, Kazi Kaarima; Hassan, M. Munir
2017-12-01
FEA results depend greatly on the analysis parameters. The MSC NASTRAN nonlinear implicit analysis code has been used in large-deformation finite element analysis of a pitted marine SM490A steel rectangular plate. The effect of two types of actual pit shapes on structural integrity parameters has been analyzed. For 3-D modeling, a proposed method for simulating the pitted surface with a probabilistic corrosion model has been used. The results have been verified against the empirical formula proposed from finite element analyses of steel surfaces generated with different pitting data, where the analyses were carried out with the LS-DYNA 971 code. In both solvers, an elasto-plastic material has been used, for which an arbitrary stress-strain curve can be defined. In the latter, the material model is based on the J2 flow theory with isotropic hardening, where a radial return algorithm is used. The comparison shows good agreement between the two results, which ensures successful simulation with comparatively less computational effort and time.
High-fidelity simulations of blast loadings in urban environments using an overset meshing strategy
NASA Astrophysics Data System (ADS)
Wang, X.; Remotigue, M.; Arnoldus, Q.; Janus, M.; Luke, E.; Thompson, D.; Weed, R.; Bessette, G.
2017-05-01
Detailed blast propagation and evolution through multiple structures representing an urban environment were simulated using the code Loci/BLAST, which employs an overset meshing strategy. The use of overset meshes simplifies mesh generation by allowing meshes for individual component geometries to be generated independently. Detailed blast propagation and evolution through multiple structures, wave reflection and interaction between structures, and blast loadings on structures were simulated and analyzed. Predicted results showed good agreement with experimental data generated by the US Army Engineer Research and Development Center. Loci/BLAST results were also found to compare favorably to simulations obtained using the Second-Order Hydrodynamic Automatic Mesh Refinement Code (SHAMRC). The results obtained demonstrated that blast reflections in an urban setting significantly increased the blast loads on adjacent buildings. Correlations of computational results with experimental data yielded valuable insights into the physics of blast propagation, reflection, and interaction under an urban setting and verified the use of Loci/BLAST as a viable tool for urban blast analysis.
Automatic hot wire GTA welding of pipe offers speed and increased deposition
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sykes, I.; Digiacomo, J.
1995-07-01
Heavy-wall pipe welding for the power and petrochemical industry must meet code requirements. Contractors strive to meet these requirements in the most productive way possible. The challenge put to orbital welding equipment manufacturers is to produce pipe welding equipment that cost-effectively produces code-quality welds. Orbital welding equipment using the GTA process has long produced outstanding quality results but has lacked the deposition rate to compete cost-effectively with other manual and semiautomatic processes such as SMAW, FCAW and GMAW. In recent years, significant progress has been made with the use of narrow-groove weld joint designs to reduce weld joint volume and improve welding times. Astro Arc Polysoude, an orbital welding equipment manufacturer based in Sun Valley, Calif., and Nantes, France, has combined the hot wire GTAW process with orbital welding equipment using a narrow-groove weld joint design. Field test results show this process and procedure to be a good alternative for many heavy-wall pipe welding applications.
Thermal Timescale Mass Transfer In Binary Population Synthesis
NASA Astrophysics Data System (ADS)
Justham, S.; Kolb, U.
2004-07-01
Studies of binary evolution have, until recently, neglected thermal timescale mass transfer (TTMT). Recent work has suggested that this previously poorly studied area is crucial to the understanding of systems across the compact binary spectrum. We use the state-of-the-art binary population synthesis code BiSEPS (Willems and Kolb, 2002, MNRAS 337, 1004-1016). However, its present treatment of TTMT is incomplete due to the nonlinear behaviour of stars in their departure from gravothermal 'equilibrium'. Here we show work that should update the ultrafast stellar evolution algorithms within BiSEPS to make it the first pseudo-analytic code that can follow TTMT properly. We have generated fits to a set of over 300 Case B TTMT sequences with a range of intermediate-mass donors. These fits produce very good first approximations to both HR diagrams and mass-transfer rates (see figures 1 and 2), which we later hope to improve and extend. They are already a significant improvement over the previous fits.
Compressed domain indexing of losslessly compressed images
NASA Astrophysics Data System (ADS)
Schaefer, Gerald
2001-12-01
Image retrieval and image compression have been pursued separately in the past. Only little research has been done on a synthesis of the two by allowing image retrieval to be performed directly in the compressed domain of images without the need to uncompress them first. In this paper methods for image retrieval in the compressed domain of losslessly compressed images are introduced. While most image compression techniques are lossy, i.e. discard visually less significant information, lossless techniques are still required in fields like medical imaging or in situations where images must not be changed due to legal reasons. The algorithms in this paper are based on predictive coding methods where a pixel is encoded based on the pixel values of its (already encoded) neighborhood. The first method is based on an understanding that predictively coded data is itself indexable and represents a textural description of the image. The second method operates directly on the entropy encoded data by comparing codebooks of images. Experiments show good image retrieval results for both approaches.
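A minimal version of the first method might look like the following: compute previous-pixel prediction residuals, and use their histogram as a textural descriptor that can be compared across images. The predictor, bin count, and toy "images" are assumptions for illustration only.

```python
import numpy as np

def residual_descriptor(img, bins=64):
    """Histogram of horizontal prediction residuals (pixel minus left
    neighbour): the kind of predictively coded data the first method
    treats as a textural index of the image."""
    res = img[:, 1:].astype(int) - img[:, :-1].astype(int)
    h, _ = np.histogram(res, bins=bins, range=(-255, 255), density=True)
    return h

def l1(a, b):
    return np.abs(a - b).sum()

rng = np.random.default_rng(0)
smooth = rng.integers(100, 110, (64, 64))     # low-contrast "texture"
noisy = rng.integers(0, 256, (64, 64))        # high-contrast "texture"
print(l1(residual_descriptor(smooth), residual_descriptor(noisy)))
```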
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, L.; Cluggish, B.; Kim, J. S.
2010-02-15
A Monte Carlo charge breeding code (MCBC) is being developed by FAR-TECH, Inc. to model the capture and charge breeding of a 1+ ion beam in an electron cyclotron resonance ion source (ECRIS) device. The ECRIS plasma is simulated using the generalized ECRIS model, which has two choices of boundary settings: a free boundary condition and the Bohm condition. The charge state distribution of the extracted beam ions is calculated by solving the steady-state ion continuity equations, where the profiles of the captured ions are used as source terms. MCBC simulations of the charge breeding of Rb+ showed good agreement with recent charge breeding experiments at Argonne National Laboratory (ANL). MCBC correctly predicted the peak of the highly charged ion state outputs under the free boundary condition, and a similar charge state distribution width but a lower peak charge state under the Bohm condition. The comparisons between the simulation results and the ANL experimental measurements are presented and discussed.
NASA Astrophysics Data System (ADS)
Rebelo, Marina de Sá; Aarre, Ann Kirstine Hummelgaard; Clemmesen, Karen-Louise; Brandão, Simone Cristina Soares; Giorgi, Maria Clementina; Meneghetti, José Cláudio; Gutierrez, Marco Antonio
2009-12-01
A method to compute three-dimensional (3D) left ventricle (LV) motion and a color-coded visualization scheme for its qualitative analysis in SPECT images are proposed and used to investigate some aspects of cardiac resynchronization therapy (CRT). The method was applied to 3D gated-SPECT image sets from normal subjects and from patients with severe idiopathic heart failure, before and after CRT. Color-coded visualization maps representing the LV regional motion showed significant differences between patients and normal subjects. Moreover, they indicated a difference between the two groups. Numerical results of regional mean values representing the intensity and direction of movement in the radial direction are presented. A difference of one order of magnitude in the intensity of movement was observed in patients relative to the normal subjects. Quantitative and qualitative parameters gave good indications of the potential application of the technique to the diagnosis and follow-up of patients submitted to CRT.
Study of unsteady flow field over a forward-looking endoatmospheric hit-to-kill interceptor
NASA Technical Reports Server (NTRS)
Yang, H. Q.; Antonison, Mark
1993-01-01
A forward-looking recessed-aperture interceptor has significant aero-optical and aero-thermal advantages. Previous experimental studies have shown that the flow field in front of a forward-looking cavity is unsteady and that the bow shock oscillates at the cavity's fundamental resonant frequency. In this study, an advanced CFD code is applied to study this unsteady phenomenon. The code is first validated against the experiments, and good agreement is found. The numerical parametric study shows that the existence of the oscillatory bow shock is very sensitive to the cavity geometry. At a field of view (FOV) of 60 deg, the initial transient quickly damps out to a steady state. As the FOV decreases, an unsteady oscillatory flow field is sustained after the initial transient, and the amplitude of oscillation is a function of FOV. For an FOV of 20 deg, the amplitude of pressure oscillation is 25 percent of the mean value in the cavity. For an FOV of 10 deg, it can be as high as 50 percent.
Evaluation of icing drag coefficient correlations applied to iced propeller performance prediction
NASA Technical Reports Server (NTRS)
Miller, Thomas L.; Shaw, R. J.; Korkan, K. D.
1987-01-01
Evaluation of three empirical icing drag coefficient correlations is accomplished through application to a set of propeller icing data. The correlations represent the best means currently available for relating drag rise to various flight and atmospheric conditions for both fixed-wing and rotating airfoils, and the work presented here illustrates and evaluates one such application to the latter case. The origins of each of the correlations are discussed, and their apparent capabilities and limitations are summarized. These correlations have been incorporated into a computer code, ICEPERF, designed to calculate iced propeller performance. Comparison with experimental propeller icing data shows generally good agreement, with the quality of the predicted results directly related to the radial icing extent of each case. The code's capability to properly predict thrust coefficient, power coefficient, and propeller efficiency is shown to depend strongly on the choice of correlation, as well as on proper specification of the radial icing extent.
Verification of ARES transport code system with TAKEDA benchmarks
NASA Astrophysics Data System (ADS)
Zhang, Liang; Zhang, Bin; Zhang, Penghe; Chen, Mengteng; Zhao, Jingchang; Zhang, Shun; Chen, Yixue
2015-10-01
Neutron transport modeling and simulation are central to many areas of nuclear technology, including reactor core analysis, radiation shielding and radiation detection. In this paper the series of TAKEDA benchmarks is modeled to verify the criticality calculation capability of ARES, a discrete ordinates neutral particle transport code system. The SALOME platform is coupled with ARES to provide geometry modeling and mesh generation functions. The Koch-Baker-Alcouffe parallel sweep algorithm is applied to accelerate the traditional transport calculation process. The results show that the eigenvalues calculated by ARES are in excellent agreement with the reference values presented in NEACRP-L-330, with differences of less than 30 pcm except for the first case of model 3. Additionally, ARES provides accurate flux distributions compared to the reference values, with deviations of less than 2% for region-averaged fluxes in all cases. All of this confirms the feasibility of the ARES-SALOME coupling and demonstrates that ARES performs well in criticality calculations.
Baseline acoustic levels of the NASA Active Noise Control Fan rig
NASA Technical Reports Server (NTRS)
Sutliff, Daniel L.; Heidelberg, Laurence J.; Elliott, David M.; Nallasamy, M.
1996-01-01
Extensive measurements of the spinning acoustic mode structure in the NASA 48 inch Active Noise Control Fan (ANCF) test rig have been taken. A continuously rotating microphone rake system with a least-squares data reduction technique was employed to measure these modes in the inlet and exhaust. Farfield directivity patterns in an anechoic environment were also measured at matched corrected rotor speeds. Several vane counts and spacings were tested over a range of rotor speeds. The Eversman finite element radiation code was run with the measured in-duct modes as input and the computed farfield results were compared to the experimentally measured directivity pattern. The experimental data show that inlet spinning mode measurements can be made very accurately. Exhaust mode measurements may have wake interference, but the least-squares reduction does a good job of rejecting the non-acoustic pressure. The Eversman radiation code accurately extrapolates the farfield levels and directivity pattern when all in-duct modes are included.
NASA Technical Reports Server (NTRS)
Miki, Kenji; Moder, Jeff; Liou, Meng-Sing
2016-01-01
In this paper, we present recent enhancements of the Open National Combustion Code (OpenNCC) and apply OpenNCC to model a realistic combustor configuration (Energy Efficient Engine (E3)). First, we perform a series of validation tests for the newly implemented advection upstream splitting method (AUSM) and the extended version of the AUSM-family schemes (AUSM+-up). We achieved good agreement with the analytical/experimental data of the validation tests. In the steady-state E3 cold flow results using the Reynolds-averaged Navier-Stokes (RANS) equations, we find a noticeable difference between the flow fields calculated by the two different numerical schemes, the standard Jameson-Schmidt-Turkel (JST) scheme and the AUSM scheme. The main differences are that the AUSM scheme is less numerically dissipative and that it predicts a much stronger reverse flow in the recirculation zone. This study indicates that the two schemes could show different flame-holding predictions and overall flame structures.
A verification of the gyrokinetic microstability codes GEM, GYRO, and GS2
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bravenec, R. V.; Chen, Y.; Wan, W.
2013-10-15
A previous publication [R. V. Bravenec et al., Phys. Plasmas 18, 122505 (2011)] presented favorable comparisons of linear frequencies and nonlinear fluxes from the Eulerian gyrokinetic codes gyro [J. Candy and R. E. Waltz, J. Comput. Phys. 186, 545 (2003)] and gs2 [W. Dorland et al., Phys. Rev. Lett. 85, 5579 (2000)]. The motivation was to verify the codes, i.e., demonstrate that they correctly solve the gyrokinetic-Maxwell equations. The premise was that it is highly unlikely for both codes to yield the same incorrect results. In this work, we add the Lagrangian particle-in-cell code gem [Y. Chen and S. Parker, J. Comput. Phys. 220, 839 (2007)] to the comparisons, not simply to add another code, but also to demonstrate that the codes' algorithms do not matter. We find good agreement of gem with gyro and gs2 for the plasma conditions considered earlier, thus establishing confidence that the codes are verified and that ongoing validation efforts for these plasma parameters are warranted.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Talley, Darren G.
2017-04-01
This report describes the work and results of the verification and validation (V&V) of the version 1.0 release of the Razorback code. Razorback is a computer code designed to simulate the operation of a research reactor (such as the Annular Core Research Reactor (ACRR)) by a coupled numerical solution of the point reactor kinetics equations, the energy conservation equation for fuel element heat transfer, the equation of motion for fuel element thermal expansion, and the mass, momentum, and energy conservation equations for the water cooling of the fuel elements. This V&V effort was intended to confirm that the code shows good agreement between simulation and actual ACRR operations.
Experimental Validation of an Ion Beam Optics Code with a Visualized Ion Thruster
NASA Astrophysics Data System (ADS)
Nakayama, Yoshinori; Nakano, Masakatsu
For validation of an ion beam optics code, the behavior of ion beam optics was experimentally observed and evaluated with a two-dimensional visualized ion thruster (VIT). Since the observed beam focus positions, sheath positions and measured ion beam currents were in good agreement with the numerical results, it was confirmed that the numerical model of the code is appropriate. In addition, it was also confirmed that the beam focus position moves along the center axis of the grid hole according to the applied grid potentials, which differs from the conventional understanding. VIT operations may be useful not only for the validation of ion beam optics codes but also for a fundamental and intuitive understanding of the Child law sheath theory.
The integration of laser communication and ranging
NASA Astrophysics Data System (ADS)
Xu, Mengmeng; Sun, Jianfeng; Zhou, Yu; Zhang, Bo; Zhang, Guo; Li, Guangyuan; He, Hongyu; Lao, Chenzhe
2017-08-01
A method to realize the integration of laser communication and ranging is proposed in this paper. At the transmitter of each of the two terminals, ranging codes with uniqueness and good autocorrelation and cross-correlation properties are embedded in the communication data and encoded together with it to realize serial communication. The encoded data are then modulated and sent to the other terminal, realizing two high-speed one-way laser communication links. At the receiver, the received ranging code is recovered after demodulation, decoding and clock recovery. The received ranging code is correlated with the local ranging code to obtain a coarse range, while the phase difference between the local clock and the recovered clock yields the fine distance measurement.
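The coarse-ranging step can be sketched as follows: generate a PN ranging code (here an m-sequence from a 7-bit LFSR, an assumed choice; the text only requires uniqueness and good correlation properties), and locate the delay at the peak of the correlation with the received chips.

```python
import numpy as np

def lfsr_msequence(taps=(7, 6), n=127):
    """+/-1 m-sequence from a 7-bit Fibonacci LFSR (tap choice is an
    assumption; any primitive polynomial gives the same correlation
    properties)."""
    state = [1] * 7
    out = []
    for _ in range(n):
        out.append(1 - 2 * state[-1])                 # map bit -> +/-1
        fb = state[taps[0] - 1] ^ state[taps[1] - 1]  # feedback bit
        state = [fb] + state[:-1]
    return np.array(out, dtype=float)

code = lfsr_msequence()
true_delay = 42
rx = np.roll(code, true_delay) + 0.5 * np.random.default_rng(3).standard_normal(127)

# Coarse range: delay at the peak of the circular cross-correlation.
corr = [np.dot(rx, np.roll(code, d)) for d in range(127)]
print("estimated delay:", int(np.argmax(corr)))   # -> 42
```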
Characterisation of an anthropomorphic chest phantom for dose measurements in radiology beams
NASA Astrophysics Data System (ADS)
Henriques, L. M. S.; Cerqueira, R. A. D.; Santos, W. S.; Pereira, A. J. S.; Rodrigues, T. M. A.; Carvalho Júnior, A. B.; Maia, A. F.
2014-02-01
The objective of this study was to characterise an anthropomorphic chest phantom for dosimetric measurements of conventional radiology beams. This phantom was developed by a previous research project at the Federal University of Sergipe for image quality control tests. As the phantom consists of tissue-equivalent material, it is possible to characterise it for dosimetric studies. For comparison, a geometric chest phantom, consisting of PMMA (polymethylmethacrylate) with dimensions of 30×30×15 cm³ was used. Measurements of incident air kerma (Ki) and entrance surface dose (ESD) were performed using ionisation chambers. From the results, backscatter factors (BSFs) of the two phantoms were determined and compared with values estimated by CALDose_X software, based on a Monte Carlo simulation. For the technical parameters evaluated in this study, the ESD and BSF values obtained experimentally showed a good similarity between the two phantoms, with minimum and maximum difference of 0.2% and 7.0%, respectively, and showed good agreement with the results published in the literature. Organ doses and effective doses for the anthropomorphic phantom were also estimated by the determination of conversion coefficients (CCs) using the visual Monte Carlo (VMC) code. Therefore, the results of this study prove that the anthropomorphic thorax phantom proposed is a good tool to use in dosimetry and can be used for risk evaluation of X-ray diagnostic procedures.
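As a reminder of the arithmetic behind the comparison, the backscatter factor is simply the ratio of the two measured quantities; the numbers below are illustrative, not the study's measurements.

```python
def backscatter_factor(esd_mgy, ki_mgy):
    """BSF = ESD / Ki: entrance surface dose over incident air kerma."""
    return esd_mgy / ki_mgy

# Illustrative numbers only (not the paper's data):
ki, esd = 1.00, 1.35
bsf = backscatter_factor(esd, ki)
print(f"BSF = {bsf:.2f}")
# Percent difference between two phantoms' BSFs, as compared in the study:
bsf_other = 1.32
print(f"difference = {100 * abs(bsf - bsf_other) / bsf_other:.1f}%")
```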
Analysis of the hydrological safety of dams combining two numerical tools: Iber and DualSPHysics
NASA Astrophysics Data System (ADS)
González-Cao, J.; García-Feal, O.; Domínguez, J. M.; Crespo, A. J. C.; Gómez-Gesteira, M.
2018-02-01
The upgrade of the hydrological safety of dams is a critical issue for avoiding failures that can dramatically affect people and assets. This paper presents a numerical methodology to analyse the safety of the Belesar dam (NW Spain) based on two different numerical codes. First, a mesh-based code named Iber, suited to dealing with large 2-D domains, is used to simulate the impoundment. The initial conditions and the inlet provided to Iber correspond to the maximum water elevation and the maximum expected inflow to the impoundment defined in the technical specifications of the dam, which are associated with the most hazardous operating conditions. Iber provides information about the time needed for the water to reach the crest of the dam when the floodgates are closed. In addition, it also provides the velocity of discharge when the gates are opened. Then, a mesh-free code named DualSPHysics, which is especially suited to dealing with complex and violent 3-D flows, is used to reproduce the behaviour of one of the spillways of the dam starting from the results obtained with Iber, which are used as inlet conditions for DualSPHysics. The combined results of both models show that the left spillway can discharge the surplus of water associated with the maximum inflow to the reservoir if the gates of the spillways are opened before overtopping of the dam is observed. In addition, the water depth measured on the spillway is considerably lower than the height of the lateral walls, preventing overtopping. Finally, velocities at different points of the spillway were in good agreement with theoretical values.
GRADSPMHD: A parallel MHD code based on the SPH formalism
NASA Astrophysics Data System (ADS)
Vanaverbeke, S.; Keppens, R.; Poedts, S.
2014-03-01
We present GRADSPMHD, a completely Lagrangian parallel magnetohydrodynamics code based on the SPH formalism. The implementation of the equations of SPMHD in the “GRAD-h” formalism assembles known results, including the derivation of the discretized MHD equations from a variational principle, the inclusion of time-dependent artificial viscosity, resistivity and conductivity terms, as well as the inclusion of a mixed hyperbolic/parabolic correction scheme for satisfying the ∇·B = 0 constraint on the magnetic field. The code uses a tree-based formalism for neighbor finding and can optionally use the tree code for computing the self-gravity of the plasma. The structure of the code closely follows the framework of our parallel GRADSPH FORTRAN 90 code which we added previously to the CPC program library. We demonstrate the capabilities of GRADSPMHD by running 1-, 2-, and 3-dimensional standard benchmark tests, and we find good agreement with previous work done by other researchers. The code is also applied to the problem of simulating the magnetorotational instability in 2.5D shearing box tests as well as in global simulations of magnetized accretion disks. We find good agreement with available results on this subject in the literature. Finally, we discuss the performance of the code on a parallel supercomputer with distributed memory architecture. Catalogue identifier: AERP_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AERP_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 620503 No. of bytes in distributed program, including test data, etc.: 19837671 Distribution format: tar.gz Programming language: FORTRAN 90/MPI. Computer: HPC cluster. Operating system: Unix. Has the code been vectorized or parallelized?: Yes, parallelized using MPI. RAM: ~30 MB for a Sedov test including 15625 particles on a single CPU. Classification: 12. Nature of problem: Evolution of a plasma in the ideal MHD approximation. Solution method: The equations of magnetohydrodynamics are solved using the SPH method. Running time: The test provided takes approximately 20 min using 4 processors.
Brenes-Camacho, Gilbert
2013-01-01
The article's main goal is to study the relationship between subjective perception of one's own economic situation and objective measures of economic well-being (sources of income, home ownership, education level, and informal family transfers) among the elderly in two Latin American countries: Mexico and Costa Rica. The data come from two surveys about ageing: CRELES in Costa Rica and MHAS in Mexico. The most important dependent variable is derived from the answers to the question "How would you rate your current economic situation?" in Costa Rica, and "Would you say that your current economic situation is…?" in Mexico. For both surveys, the answers were coded as a binary variable; code 0 represents the Excellent, Very Good, and Good categories, while code 1 represents the Fair or Bad categories. The analysis finds that retirement pension income is an important factor in defining self-rated economic situation in both countries. In Costa Rica, spouse's income and home ownership are relevant predictors of the perception of well-being, while in Mexico, receiving transfer income is associated with this perception.
Pressure of the hot gas in simulations of galaxy clusters
NASA Astrophysics Data System (ADS)
Planelles, S.; Fabjan, D.; Borgani, S.; Murante, G.; Rasia, E.; Biffi, V.; Truong, N.; Ragone-Figueroa, C.; Granato, G. L.; Dolag, K.; Pierpaoli, E.; Beck, A. M.; Steinborn, Lisa K.; Gaspari, M.
2017-06-01
We analyse the radial pressure profiles, the intracluster medium (ICM) clumping factor and the Sunyaev-Zel'dovich (SZ) scaling relations of a sample of simulated galaxy clusters and groups identified in a set of hydrodynamical simulations based on an updated version of the TreePM-SPH GADGET-3 code. Three different sets of simulations are performed: the first assumes non-radiative physics, the others include, among other processes, active galactic nucleus (AGN) and/or stellar feedback. Our results are analysed as a function of redshift, ICM physics, cluster mass and cluster cool-coreness or dynamical state. In general, the mean pressure profiles obtained for our sample of groups and clusters show a good agreement with X-ray and SZ observations. Simulated cool-core (CC) and non-cool-core (NCC) clusters also show a good match with real data. We obtain in all cases a small (if any) redshift evolution of the pressure profiles of massive clusters, at least back to z = 1. We find that the clumpiness of gas density and pressure increases with the distance from the cluster centre and with the dynamical activity. The inclusion of AGN feedback in our simulations generates values for the gas clumping (√C_ρ ≈ 1.2 at R_200) in good agreement with recent observational estimates. The simulated Y_SZ-M scaling relations are in good accordance with several observed samples, especially for massive clusters. As for the scatter of these relations, we obtain a clear dependence on the cluster dynamical state, whereas this distinction is not so evident when looking at the subsamples of CC and NCC clusters.
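The gas clumping factor quoted above is conventionally defined as C_ρ = ⟨ρ²⟩/⟨ρ⟩² over a given region; a minimal numpy sketch of that estimator in radial shells (not the GADGET-3 analysis pipeline; the arrays and bin choices are hypothetical):

```python
import numpy as np

# Illustrative estimator of the gas clumping factor C_rho = <rho^2>/<rho>^2
# in radial shells, following the conventional definition (a sketch, not the
# GADGET-3 analysis pipeline). `r` and `rho` are hypothetical per-cell arrays.
def clumping_profile(r, rho, r_edges):
    c = np.empty(len(r_edges) - 1)
    for i in range(len(r_edges) - 1):
        mask = (r >= r_edges[i]) & (r < r_edges[i + 1])
        c[i] = np.mean(rho[mask] ** 2) / np.mean(rho[mask]) ** 2
    return c

rng = np.random.default_rng(0)
r = rng.uniform(0.0, 1.0, 100_000)                     # radii in units of R_200
rho = rng.lognormal(mean=0.0, sigma=0.3, size=r.size)  # mock gas densities
profile = clumping_profile(r, rho, np.linspace(0.0, 1.0, 11))
print(profile)  # note: sqrt(C_rho) ~ 1.2 at R_200 corresponds to C_rho ~ 1.44
```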
Yancey, Antronette K; Cole, Brian L; Brown, Rochelle; Williams, Jerome D; Hillier, Amy; Kline, Randolph S; Ashe, Marice; Grier, Sonya A; Backman, Desiree; McCarthy, William J
2009-03-01
Commercial marketing is a critical but understudied element of the sociocultural environment influencing Americans' food and beverage preferences and purchases. This marketing also likely influences the utilization of goods and services related to physical activity and sedentary behavior. A growing literature documents the targeting of racial/ethnic and income groups in commercial advertisements in magazines, on billboards, and on television that may contribute to sociodemographic disparities in obesity and chronic disease risk and protective behaviors. This article examines whether African Americans, Latinos, and people living in low-income neighborhoods are disproportionately exposed to advertisements for high-calorie, low nutrient-dense foods and beverages and for sedentary entertainment and transportation and are relatively underexposed to advertising for nutritious foods and beverages and goods and services promoting physical activities. Outdoor advertising density and content were compared in zip code areas selected to offer contrasts by area income and ethnicity in four cities: Los Angeles, Austin, New York City, and Philadelphia. Large variations were observed in the amount, type, and value of advertising in the selected zip code areas. Living in an upper-income neighborhood, regardless of its residents' predominant ethnicity, is generally protective against exposure to most types of obesity-promoting outdoor advertising (food, fast food, sugary beverages, sedentary entertainment, and transportation). The density of advertising varied by zip code area race/ethnicity, with African American zip code areas having the highest advertising densities, Latino zip code areas having slightly lower densities, and white zip code areas having the lowest densities. The potential health and economic implications of differential exposure to obesity-related advertising are substantial. Although substantive legal questions remain about the government's ability to regulate advertising, the success of limiting tobacco advertising offers lessons for reducing the marketing contribution to the obesogenicity of urban environments.
Clarke, John R; Ragone, Andrew V; Greenwald, Lloyd
2005-09-01
We conducted a comparison of methods for predicting survival using survival risk ratios (SRRs), including new comparisons based on International Classification of Diseases, Ninth Revision (ICD-9) versus Abbreviated Injury Scale (AIS) six-digit codes. From the Pennsylvania trauma center's registry, all direct trauma admissions were collected through June 22, 1999. Patients with no comorbid medical diagnoses and both ICD-9 and AIS injury codes were used for comparisons based on a single set of data. SRRs for ICD-9 and then for AIS diagnostic codes were each calculated two ways: from the survival rate of patients with each diagnosis, and from the survival rate when each diagnosis was an isolated diagnosis. Probabilities of survival for the cohort were calculated using each set of SRRs by the multiplicative ICISS method and, where appropriate, the minimum SRR method. These prediction sets were then internally validated against actual survival using the Hosmer-Lemeshow goodness-of-fit statistic. The 41,364 patients had 1,224 different ICD-9 injury diagnoses in 32,261 combinations and 1,263 corresponding AIS injury diagnoses in 31,755 combinations, ranging from 1 to 27 injuries per patient. All conventional ICD-9-based combinations of SRRs and methods showed better Hosmer-Lemeshow goodness-of-fit than their AIS-based counterparts. The minimum SRR method produced better calibration than the multiplicative methods, presumably because it did not magnify inaccuracies in the SRRs as multiplication can. Predictions of survival based on anatomic injury alone can be performed using ICD-9 codes, with no advantage from extra coding of AIS diagnoses. Predictions based on the single worst SRR were closer to actual outcomes than those based on multiplying SRRs.
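The two prediction rules compared above can be written compactly; a minimal sketch of the multiplicative ICISS and minimum-SRR predictors (the SRR values are hypothetical, not from the Pennsylvania registry):

```python
import math

# Sketch of the two survival predictors compared in the study.
# srrs: survival risk ratios for each of a patient's injury diagnoses.
# Values below are hypothetical, not from the Pennsylvania registry.
def iciss_multiplicative(srrs):
    """ICISS: product of the SRRs of all injuries."""
    return math.prod(srrs)

def minimum_srr(srrs):
    """Minimum-SRR method: survival predicted from the single worst injury."""
    return min(srrs)

patient_srrs = [0.98, 0.91, 0.85]  # hypothetical SRRs for three injuries
print(iciss_multiplicative(patient_srrs))  # ~0.758 (multiplication compounds)
print(minimum_srr(patient_srrs))           # 0.85 (worst injury only)
```

The example makes the abstract's point concrete: multiplication compounds any inaccuracy in the individual SRRs, while the minimum rule does not.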
Projectile and Lab Frame Differential Cross Sections for Electromagnetic Dissociation
NASA Technical Reports Server (NTRS)
Norbury, John W.; Adamczyk, Anne; Dick, Frank
2008-01-01
Differential cross sections for electromagnetic dissociation in nuclear collisions are calculated for the first time. In order to be useful for three-dimensional transport codes, these cross sections have been calculated in both the projectile and lab frames. The formulas for these cross sections are such that they can be immediately used in space radiation transport codes. Only a limited amount of data exists, but the comparison between theory and experiment is good.
Effects of the Atmosphere on the Propagation of 10.6-Micron Laser Beams.
Hayes, J N; Ulrich, P B; Aitken, A H
1972-02-01
This paper gives an overview of the use of a wave optics computer code to model the propagation of high power CO2 laser beams in the atmosphere. The nonlinear effects of atmospheric heating and kinetic cooling phenomena are included in the analysis. Only steady-state, nonturbulent cases are studied. Thermal conduction and free convection are assumed negligible compared to other effects included in the calculation. Results showing the important effect of water vapor concentration on beam quality are given. Beam slewing is also studied. Comparison is made with geometrical optics results, and good agreement is found with laboratory experiments that simulate atmospheric propagation.
NASA Astrophysics Data System (ADS)
Duan, B.; Bari, M. A.; Wu, Z. Q.; Jun, Y.; Li, Y. M.; Wang, J. G.
2012-11-01
Aims: We present relativistic quantum mechanical calculations of electron-impact broadening of the singlet and triplet transition 2s3s ← 2s3p in four Be-like ions from N IV to Ne VII. Methods: In our theoretical calculations, the K-matrix and related symmetry information determined by the colliding systems are generated by the DARC codes. Results: A careful comparison between our calculations and experimental results shows good agreement. Our calculated widths of spectral lines also agree with earlier theoretical results. Our investigations provide new methods of calculating electron-impact broadening parameters for plasma diagnostics.
The Growth of Multi-Site Fatigue Damage in Fuselage Lap Joints
NASA Technical Reports Server (NTRS)
Piascik, Robert S.; Willard, Scott A.
1999-01-01
Destructive examinations were performed to document the progression of multi-site damage (MSD) in three lap joint panels that were removed from a full scale fuselage test article that had been tested to 60,000 full pressurization cycles. Similar fatigue crack growth characteristics were observed for small cracks (50 microns to 10 mm) emanating from counterbore rivets, straight shank rivets, and 100° countersink rivets. Good correlation between the fatigue crack growth database obtained in this study and FASTRAN code predictions shows that the growth of MSD in the fuselage lap joint structure can be predicted by fracture mechanics based methods.
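FASTRAN itself is a crack-closure-based code; as a much simpler illustration of the kind of fracture-mechanics-based life prediction referenced above, a Paris-law integration sketch (this is not the FASTRAN model, and all constants, stresses and geometry factors are hypothetical):

```python
import math

# Simplified fracture-mechanics growth sketch: Paris law da/dN = C * (dK)^m,
# with dK for a small surface crack approximated as Y * dS * sqrt(pi * a).
# This is NOT the FASTRAN closure model; C, m, Y, and stresses are hypothetical.
C, m = 1.0e-11, 3.0          # Paris constants (mm/cycle, MPa*sqrt(mm) units)
Y = 1.12                     # geometry factor for a shallow surface crack
delta_stress = 100.0         # cyclic stress range, MPa

a = 0.05                     # initial crack length, mm (50 microns)
a_final = 10.0               # final crack length, mm
cycles = 0
while a < a_final:
    delta_k = Y * delta_stress * math.sqrt(math.pi * a)
    a += C * delta_k ** m    # growth increment for one cycle
    cycles += 1
print(f"cycles to grow 0.05 mm -> 10 mm: {cycles}")
```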
Experimental operation of a sodium heat pipe
NASA Astrophysics Data System (ADS)
Holtz, R. E.; McLennan, G. A.; Koehl, E. R.
1985-05-01
This report documents the operation of a 28 in. long sodium heat pipe in the Heat Pipe Test Facility (HPTF) installed at Argonne National Laboratory. Experimental data were collected to simulate conditions prototypic of both a fluidized bed coal combustor application and a space environment application. Both sets of experimental data show good agreement with the heat pipe analytical model. The heat transfer performance of the heat pipe proved reliable over a substantial period of operation and through many thermal cycles. Additional testing of longer heat pipes under controlled laboratory conditions will be necessary to determine performance limitations and to complete the design code validation.
Benchmark studies of induced radioactivity produced in LHC materials, Part I: Specific activities.
Brugger, M; Khater, H; Mayer, S; Prinz, A; Roesler, S; Ulrici, L; Vincke, H
2005-01-01
Samples of materials which will be used in the LHC machine for shielding and construction components were irradiated in the stray radiation field of the CERN-EU high-energy reference field facility. After irradiation, the specific activities induced in the various samples were analysed with a high-precision gamma spectrometer at various cooling times, allowing identification of isotopes with a wide range of half-lives. Furthermore, the irradiation experiment was simulated in detail with the FLUKA Monte Carlo code. A comparison of measured and calculated specific activities shows good agreement, supporting the use of FLUKA for estimating the level of induced activity in the LHC.
Numerical modeling of pollutant transport using a Lagrangian marker particle technique
NASA Technical Reports Server (NTRS)
Spaulding, M.
1976-01-01
A derivation and code were developed for the three-dimensional mass transport equation, using a particle-in-cell solution technique, to solve coastal zone waste discharge problems where particles are a major component of the waste. Improvements in the particle movement techniques are suggested and typical examples illustrated. Preliminary model comparisons with analytic solutions for an instantaneous point release in a uniform flow show good results in resolving the waste motion. The findings to date indicate that this computational model will provide a useful technique to study the motion of sediment, dredged spoils, and other particulate waste commonly deposited in coastal waters.
Time-Dependent Erosion of Ion Optics
NASA Technical Reports Server (NTRS)
Wirz, Richard E.; Anderson, John R.; Katz, Ira; Goebel, Dan M.
2008-01-01
The accurate prediction of thruster life requires time-dependent erosion estimates for the ion optics assembly. Such information is critical to end-of-life mechanisms such as electron backstreaming. CEX2D was recently modified to handle time-dependent erosion, double ions, and multiple throttle conditions in a single run. The modified code is called "CEX2D-t". Comparisons of CEX2D-t results with LDT and ELT post-test results show good agreement for both screen and accel grid erosion, including important erosion features such as chamfering of the downstream end of the accel grid and a reduced rate of accel grid aperture enlargement with time.
Volumetric Medical Image Coding: An Object-based, Lossy-to-lossless and Fully Scalable Approach
Danyali, Habibiollah; Mertins, Alfred
2011-01-01
In this article, an object-based, highly scalable, lossy-to-lossless 3D wavelet coding approach for volumetric medical image data (e.g., magnetic resonance (MR) and computed tomography (CT)) is proposed. The new method, called 3DOBHS-SPIHT, is based on the well-known set partitioning in hierarchical trees (SPIHT) algorithm and supports both quality and resolution scalability. The 3D input data is grouped into groups of slices (GOS) and each GOS is encoded and decoded as a separate unit. The symmetric tree definition of the original 3DSPIHT is improved by introducing a new asymmetric tree structure. While preserving the compression efficiency, the new tree structure allows for a small size of each GOS, which not only reduces memory consumption during the encoding and decoding processes, but also facilitates more efficient random access to certain segments of slices. To achieve more compression efficiency, the algorithm only encodes the main object of interest in each 3D data set, which can have any arbitrary shape, and ignores the unnecessary background. The experimental results on some MR data sets show the good performance of the 3DOBHS-SPIHT algorithm for multi-resolution lossy-to-lossless coding. The compression efficiency, full scalability, and object-based features of the proposed approach, besides its lossy-to-lossless coding support, make it a very attractive candidate for volumetric medical image information archiving and transmission applications. PMID:22606653
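As a small illustration of the group-of-slices idea described above (only the data partitioning, not the 3DOBHS-SPIHT wavelet coder itself; the volume shape and GOS size are hypothetical):

```python
import numpy as np

# Sketch: split a 3D volume into groups of slices (GOS), each to be encoded
# and decoded independently. This illustrates only the data partitioning,
# not the SPIHT wavelet coder; the volume and GOS size are hypothetical.
def group_slices(volume: np.ndarray, gos_size: int):
    """Yield consecutive groups of `gos_size` slices along the first axis."""
    for start in range(0, volume.shape[0], gos_size):
        yield volume[start:start + gos_size]

mr_volume = np.zeros((64, 256, 256), dtype=np.int16)  # mock MR data set
groups = list(group_slices(mr_volume, gos_size=8))
print(len(groups), groups[0].shape)  # 8 independent units of 8 slices each
```

Independent units are what make the random access to segments of slices mentioned in the abstract possible: a decoder touches only the GOS containing the requested slices.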
NASA Technical Reports Server (NTRS)
Nallasamy, M.; Clark, B. J.; Groeneweg, J. F.
1987-01-01
The acoustics of an advanced single rotation SR-3 propeller at cruise conditions are studied employing a time-domain approach. The study evaluates the acoustic significance of the differences in blade pressures computed using nonreflecting rather than hard wall boundary conditions in the three-dimensional Euler code solution. The directivities of the harmonics of the blade passing frequency tone and the effects of chordwise loading on tone directivity are examined. The results show that the maximum difference in the computed sound pressure levels due to the use of blade pressure distributions obtained with the nonreflecting rather than the hard wall boundary conditions is about 1.5 dB. The blade passing frequency tone directivity obtained in the present study shows good agreement with JetStar flight data.
Linearized Aeroelastic Solver Applied to the Flutter Prediction of Real Configurations
NASA Technical Reports Server (NTRS)
Reddy, Tondapu S.; Bakhle, Milind A.
2004-01-01
A fast-running unsteady aerodynamics code, LINFLUX, was previously developed for predicting turbomachinery flutter. This linearized code, based on a frequency domain method, models the effects of steady blade loading through a nonlinear steady flow field. The LINFLUX code, which is 6 to 7 times faster than the corresponding nonlinear time domain code, is suitable for use in the initial design phase. Earlier, this code was verified through application to a research fan, and it was shown that the predictions of work per cycle and flutter compared well with those from a nonlinear time-marching aeroelastic code, TURBO-AE. Now, the LINFLUX code has been applied to real configurations: fans developed under the Energy Efficient Engine (E-cubed) Program and the Quiet Aircraft Technology (QAT) project. The LINFLUX code starts with a steady nonlinear aerodynamic flow field and solves the unsteady linearized Euler equations to calculate the unsteady aerodynamic forces on the turbomachinery blades. First, a steady aerodynamic solution is computed for given operating conditions using the nonlinear unsteady aerodynamic code TURBO-AE. A blade vibration analysis is done to determine the frequencies and mode shapes of the vibrating blades, and an interface code is used to convert the steady aerodynamic solution to a form required by LINFLUX. A preprocessor is used to interpolate the mode shapes from the structural dynamics mesh onto the computational fluid dynamics mesh. Then, LINFLUX is used to calculate the unsteady aerodynamic pressure distribution for a given vibration mode, frequency, and interblade phase angle. Finally, a post-processor uses the unsteady pressures to calculate the generalized aerodynamic forces, eigenvalues, and response amplitudes. The eigenvalues determine the flutter frequency and damping. Results of flutter calculations from the LINFLUX code are presented for (1) the E-cubed fan developed under the E-cubed program and (2) the Quiet High Speed Fan (QHSF) developed under the Quiet Aircraft Technology project. The results are compared with those obtained from the TURBO-AE code. A graph of the work done per vibration cycle for the first vibration mode of the E-cubed fan is shown. It can be seen that the LINFLUX results show very good agreement with the TURBO-AE results over the entire range of interblade phase angle. The work done per vibration cycle for the first vibration mode of the QHSF fan is also shown. Once again, the LINFLUX results compare very well with the results from the TURBO-AE code.
Comparison Between Simulated and Experimentally Measured Performance of a Four Port Wave Rotor
NASA Technical Reports Server (NTRS)
Paxson, Daniel E.; Wilson, Jack; Welch, Gerard E.
2007-01-01
Performance and operability testing has been completed on a laboratory-scale, four-port wave rotor, of the type suitable for use as a topping cycle on a gas turbine engine. Many design aspects, and performance estimates for the wave rotor were determined using a time-accurate, one-dimensional, computational fluid dynamics-based simulation code developed specifically for wave rotors. The code follows a single rotor passage as it moves past the various ports, which in this reference frame become boundary conditions. This paper compares wave rotor performance predicted with the code to that measured during laboratory testing. Both on and off-design operating conditions were examined. Overall, the match between code and rig was found to be quite good. At operating points where there were disparities, the assumption of larger than expected internal leakage rates successfully realigned code predictions and laboratory measurements. Possible mechanisms for such leakage rates are discussed.
A Very Low Cost BCH Decoder for High Immunity of On-Chip Memories
NASA Astrophysics Data System (ADS)
Seo, Haejun; Han, Sehwan; Heo, Yoonseok; Cho, Taewon
BCH (Bose-Chaudhuri-Hocquenghem) codes, a class of cyclic block codes, have the strong error-correcting ability that is vital for error protection in memory systems. Among the many decoding algorithms for BCH codes, the PGZ (Peterson-Gorenstein-Zierler) algorithm is advantageous because it corrects errors through simple calculations for a given error-correcting capability t. However, when the actual number of errors ν is less than t, the syndrome matrix becomes singular and the algorithm runs into a division by zero. In this paper, the circuit is simplified by a multi-mode hardware architecture that handles ν = 0 to 3. First, production cost is lower thanks to the smaller number of gates. Second, the reduced power consumption lengthens the recharging period. The very low cost and simple datapath make our design a good choice as the ECC (Error Correction Code/Circuit) in small-footprint SoC (System on Chip) memory systems.
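To make the ν < t failure mode concrete, here is a toy software sketch of PGZ decoding for a binary BCH(15,7), t = 2 code over GF(16) (a minimal illustration of the algorithm, not the paper's multi-mode hardware datapath):

```python
# Sketch of PGZ decoding for a binary BCH(15,7) code with t = 2 over GF(16),
# built on the primitive polynomial x^4 + x + 1. It illustrates the point in
# the abstract: the syndrome matrix goes singular when nu < t, so the decoder
# must fall back to a smaller nu instead of dividing by zero.
EXP, LOG = [0] * 30, [0] * 16
x = 1
for i in range(15):
    EXP[i] = EXP[i + 15] = x
    LOG[x] = i
    x <<= 1
    if x & 0x10:
        x ^= 0x13  # reduce modulo x^4 + x + 1

def gf_mul(a, b):
    return 0 if a == 0 or b == 0 else EXP[LOG[a] + LOG[b]]

def gf_div(a, b):  # b must be nonzero; callers guard against this
    return 0 if a == 0 else EXP[(LOG[a] - LOG[b]) % 15]

def syndrome(r, j):  # r(alpha^j) for received bit vector r
    s = 0
    for i, bit in enumerate(r):
        if bit:
            s ^= EXP[(i * j) % 15]
    return s

def pgz_error_locations(r):
    s1, s2, s3, s4 = (syndrome(r, j) for j in range(1, 5))
    det = gf_mul(s1, s3) ^ gf_mul(s2, s2)   # singular when nu < 2
    if det:                                  # nu = 2
        l1 = gf_div(gf_mul(s1, s4) ^ gf_mul(s2, s3), det)
        l2 = gf_div(gf_mul(s2, s4) ^ gf_mul(s3, s3), det)
    elif s1:                                 # fall back to nu = 1
        l1, l2 = gf_div(s2, s1), 0
    else:                                    # nu = 0 (or uncorrectable)
        return []
    # Chien search: error at position i if Lambda(alpha^-i) = 0
    return [i for i in range(15)
            if (1 ^ gf_mul(l1, EXP[-i % 15])
                  ^ gf_mul(l2, EXP[(-2 * i) % 15])) == 0]

received = [0] * 15                   # the all-zero codeword is always valid
received[3] ^= 1                      # inject a single-bit error
print(pgz_error_locations(received))  # [3]: nu = 1 handled without div-by-zero
```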
NASA Technical Reports Server (NTRS)
Rosen, Bruce S.
1991-01-01
An upwind three-dimensional volume Navier-Stokes code is modified to facilitate modeling of complex geometries and flow fields represented by proposed National Aerospace Plane concepts. Code enhancements include an equilibrium air model, a generalized equilibrium gas model and several schemes to simplify treatment of complex geometric configurations. The code is also restructured for inclusion of an arbitrary number of independent and dependent variables. This latter capability is intended for eventual use to incorporate nonequilibrium/chemistry gas models, more sophisticated turbulence and transition models, or other physical phenomena which will require inclusion of additional variables and/or governing equations. Comparisons of computed results with experimental data and results obtained using other methods are presented for code validation purposes. Good correlation is obtained for all of the test cases considered, indicating the success of the current effort.
NASA Astrophysics Data System (ADS)
Monticello, D. A.; Reiman, A. H.; Watanabe, K. Y.; Nakajima, N.; Okamoto, M.
1997-11-01
The existence of bootstrap currents in both tokamaks and stellarators was confirmed experimentally more than ten years ago. Such currents can have significant effects on the equilibrium and stability of these MHD devices. In addition, stellarators, with the notable exception of W7-X, are predicted to have such large bootstrap currents that reliable equilibrium calculations require the self-consistent evaluation of bootstrap currents. Modeling of discharges which contain islands requires an algorithm that does not assume good surfaces. Only one of the two 3-D equilibrium codes that exist, PIES (Reiman, A. H., Greenside, H. S., Comput. Phys. Commun. 43 (1986)), can easily be modified to handle bootstrap current. Here we report on the coupling of the PIES 3-D equilibrium code and the NIFS bootstrap code (Watanabe, K., et al., Nuclear Fusion 35 (1995) 335).
Social work and end-of-life decisions: self-determination and the common good.
Wesley, C A
1996-05-01
Client self-determination is the key element of NASW's policy statement about social work intervention in end-of-life decisions. However, both self-determination and the common good must be respected in social work practice and policy regarding end-of-life decisions. This article discusses self-determination in end-of-life decision making, ethical decision making and the NASW Code of Ethics, and professional ethics based on a balanced view of both self-determination and the common good. Recommendations for professional practice and social policy are offered.
Analysis and evaluation for consumer goods containing NORM in Korea.
Jang, Mee; Chung, Kun Ho; Lim, Jong Myoung; Ji, Young Yong; Kim, Chang Jong; Kang, Mun Ja
2017-08-01
We analyzed consumer goods containing NORM by ICP-MS and evaluated the external dose. To evaluate the external dose, we assumed a small-room model as the irradiation scenario and calculated the specific effective dose rates using the MCNPX code. The external doses for the twenty goods are less than 1 mSv, considering the specific effective dose rates and usage quantities. However, some of the goods give relatively high doses, and activity concentration limits are necessary as a screening tool.
Bit Error Probability for Maximum Likelihood Decoding of Linear Block Codes
NASA Technical Reports Server (NTRS)
Lin, Shu; Fossorier, Marc P. C.; Rhee, Dojun
1996-01-01
In this paper, the bit error probability P_b for maximum likelihood decoding of binary linear codes is investigated. The contribution of each information bit to P_b is considered. For randomly generated codes, it is shown that the conventional high-SNR approximation P_b ≈ (d_H/N)P_s, where P_s represents the block error probability, holds for systematic encoding only. Also, systematic encoding provides the minimum P_b when the inverse mapping corresponding to the generator matrix of the code is used to retrieve the information sequence. The bit error performances corresponding to other generator matrix forms are also evaluated. Although derived for codes with randomly generated generator matrices, these results are shown to provide good approximations for codes used in practice. Finally, for decoding methods which require a generator matrix with a particular structure, such as trellis decoding or algebraic-based soft decision decoding, equivalent schemes that reduce the bit error probability are discussed.
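As a concrete reading of the approximation above, a short sketch evaluating P_b ≈ (d_H/N)·P_s for a systematic code (the code length, distance and block error probability below are hypothetical placeholders):

```python
# Numeric illustration of the high-SNR approximation P_b ~ (d_H / N) * P_s
# discussed above, for a systematic (N, K) code. The minimum distance and
# block error probability below are hypothetical placeholders.
N = 127          # code length
d_H = 15         # hypothetical minimum Hamming distance
P_s = 1.0e-4     # hypothetical block error probability at some SNR

P_b = (d_H / N) * P_s
print(f"approximate bit error probability: {P_b:.3e}")  # ~1.181e-05
```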
Vilches, M; García-Pareja, S; Guerrero, R; Anguiano, M; Lallena, A M
2009-09-01
In this work, recent results from experiments and simulations (with EGSnrc) performed by Ross et al. [Med. Phys. 35, 4121-4131 (2008)] on electron scattering by foils of different materials and thicknesses are compared to those obtained using several Monte Carlo codes. Three codes have been used: GEANT (version 3.21), Geant4 (version 9.1, patch03), and PENELOPE (version 2006). In the case of PENELOPE, mixed and fully detailed simulations have been carried out. Transverse dose distributions in air have been obtained in order to compare with measurements. The detailed PENELOPE simulations show excellent agreement with experiment. The calculations performed with GEANT and PENELOPE (mixed) agree with experiment within 3% except for the Be foil. In the case of Geant4, the distributions are 5% narrower than the experimental ones, though the agreement is very good for the Be foil. The transverse dose distribution in water obtained with PENELOPE (mixed) is 4% wider than that calculated by Ross et al. using EGSnrc and is 1% narrower than the transverse dose distribution in air, as considered in the experiment. All the codes give reasonable agreement (within 5%) with the experimental results for all the materials and thicknesses studied.
Multi-Kepler GPU vs. multi-Intel MIC for spin systems simulations
NASA Astrophysics Data System (ADS)
Bernaschi, M.; Bisson, M.; Salvadore, F.
2014-10-01
We present and compare the performances of two many-core architectures, the Nvidia Kepler and the Intel MIC, both in a single system and in cluster configuration, for the simulation of spin systems. As a benchmark we consider the time required to update a single spin of the 3D Heisenberg spin glass model by using the over-relaxation algorithm. We also present data for a traditional high-end multi-core architecture, the Intel Sandy Bridge. The results show that although on the two Intel architectures it is possible to use basically the same code, the performance of the Intel MIC changes dramatically depending on (apparently) minor details. Another issue is that to obtain reasonable scalability with the Intel Phi coprocessor (Phi is the coprocessor that implements the MIC architecture) in a cluster configuration it is necessary to use the so-called offload mode, which reduces the performance of the single system. As to the GPU, the Kepler architecture offers a clear advantage with respect to the previous Fermi architecture while maintaining exactly the same source code. Scalability of the multi-GPU implementation remains very good by using the CPU as a communication co-processor of the GPU. All source codes are provided for inspection and for double-checking the results.
Composite hot subdwarf binaries - I. The spectroscopically confirmed sdB sample
NASA Astrophysics Data System (ADS)
Vos, Joris; Németh, Péter; Vučković, Maja; Østensen, Roy; Parsons, Steven
2018-01-01
Hot subdwarf-B (sdB) stars in long-period binaries are found to be on eccentric orbits, even though current binary-evolution theory predicts that these objects are circularized before the onset of Roche lobe overflow (RLOF). To increase our understanding of binary interaction processes during the RLOF phase, we started a long-term observing campaign to study wide sdB binaries. In this paper, we present a sample of composite binary sdBs, and the results of the spectral analysis of nine such systems. The grid search in stellar parameters (GSSP) code is used to derive atmospheric parameters for the cool companions. To cross-check our results and also to characterize the hot subdwarfs, we used the independent XTGRID code, which employs TLUSTY non-local thermodynamic equilibrium models to derive atmospheric parameters for the sdB component and PHOENIX synthetic spectra for the cool companions. The independent GSSP and XTGRID codes are found to show good agreement for three test systems that have atmospheric parameters available in the literature. Based on the rotational velocity of the companions, we make an estimate for the mass accreted during the RLOF phase and the minimum duration of that phase. We find that the mass transfer to the companion is minimal during the subdwarf formation.
TRIPOLI-4® - MCNP5 ITER A-lite neutronic model benchmarking
NASA Astrophysics Data System (ADS)
Jaboulay, J.-C.; Cayla, P.-Y.; Fausser, C.; Lee, Y.-K.; Trama, J.-C.; Li-Puma, A.
2014-06-01
The aim of this paper is to present the capability of TRIPOLI-4®, the CEA Monte Carlo code, to model a large-scale fusion reactor with complex neutron source and geometry. In the past, numerous benchmarks were conducted for the assessment of TRIPOLI-4® on fusion applications. Analyses of experiments (KANT, OKTAVIAN, FNG) and numerical benchmarks (between TRIPOLI-4® and MCNP5) on the HCLL DEMO2007 and ITER models were carried out successively. In the previous ITER benchmark, nevertheless, only the neutron wall loading was analyzed; its main purpose was to present the extension of MCAM (the FDS Team CAD import tool) for TRIPOLI-4®. Starting from this work, a more extensive benchmark has been performed on the estimation of neutron flux, nuclear heating in the shielding blankets and tritium production rate in the European TBMs (HCLL and HCPB), and it is presented in this paper. The methodology to build the TRIPOLI-4® A-lite model is based on MCAM and the MCNP A-lite model (version 4.1). Simplified TBMs (from KIT) have been integrated in the equatorial port. Comparisons of neutron wall loading, flux, nuclear heating and tritium production rate show a good agreement between the two codes. Discrepancies mainly fall within the statistical errors of the Monte Carlo codes.
MHC class I-associated peptides derive from selective regions of the human genome.
Pearson, Hillary; Daouda, Tariq; Granados, Diana Paola; Durette, Chantal; Bonneil, Eric; Courcelles, Mathieu; Rodenbrock, Anja; Laverdure, Jean-Philippe; Côté, Caroline; Mader, Sylvie; Lemieux, Sébastien; Thibault, Pierre; Perreault, Claude
2016-12-01
MHC class I-associated peptides (MAPs) define the immune self for CD8+ T lymphocytes and are key targets of cancer immunosurveillance. Here, the goals of our work were to determine whether the entire set of protein-coding genes could generate MAPs and whether specific features influence the ability of discrete genes to generate MAPs. Using proteogenomics, we have identified 25,270 MAPs isolated from the B lymphocytes of 18 individuals who collectively expressed 27 high-frequency HLA-A,B allotypes. The entire MAP repertoire presented by these 27 allotypes covered only 10% of the exomic sequences expressed in B lymphocytes. Indeed, 41% of expressed protein-coding genes generated no MAPs, while 59% of genes generated up to 64 MAPs, often derived from adjacent regions and presented by different allotypes. We next identified several features of transcripts and proteins associated with efficient MAP production. From these data, we built a logistic regression model that predicts with good accuracy whether a gene generates MAPs. Our results show preferential selection of MAPs from a limited repertoire of proteins with distinctive features. The notion that the MHC class I immunopeptidome presents only a small fraction of the protein-coding genome for monitoring by the immune system has profound implications in autoimmunity and cancer immunology.
A new free and open source tool for space plasma modeling.
NASA Astrophysics Data System (ADS)
Honkonen, I. J.
2014-12-01
I will present a new distributed-memory parallel, free and open source computational model for studying space plasma. The model is written in C++ with emphasis on good software development practices and code readability without sacrificing serial or parallel performance. As such the model could be especially useful for education, for learning both (magneto)hydrodynamics (MHD) and computational model development. By using the latest features of the C++ standard (2011) it has been possible to develop a very modular program, which improves not only the readability of the code but also the testability of the model and decreases the effort required to make changes to various parts of the program. Major parts of the model, i.e. functionality not directly related to (M)HD, have been outsourced to other freely available libraries, which has reduced the development time of the model significantly. I will present an overview of the code architecture as well as details of different parts of the model and will show examples of using the model, including preparing input files and plotting results. A multitude of 1-, 2- and 3-dimensional test cases are included in the software distribution, and the results of, for example, Kelvin-Helmholtz, bow shock, blast wave and reconnection tests will be presented.
A Secure Information Framework with APRQ Properties
NASA Astrophysics Data System (ADS)
Rupa, Ch.
2017-08-01
The Internet of Things is one of the most trending topics in the digital world, and security issues are rampant. In the corporate or institutional setting, security risks are apparent from the outset. Market leaders are unable to use existing cryptographic techniques due to their complexity. Hence many bits of private information, including IDs, are readily available for third parties to see and to utilize. There is a need to decrease the complexity and increase the robustness of cryptographic approaches. In view of this, a new cryptographic technique, a good encryption pact with adjacency, random prime number and quantum code (APRQ) properties, has been proposed. Here, encryption is done by using quantum photons with a gray code. This approach uses concepts from physics and mathematics, with no external key exchange, to improve the security of the data. It also reduces key attacks by generating the key at each party's side instead of sharing it. This makes the security more robust than in the existing approaches. Important properties of the gray code and the quantum encoding are the adjacency property and the mapping of different photons to a single bit (0 or 1), which can reduce the avalanche effect. Cryptanalysis of the proposed method shows that it is resistant to various attacks and stronger than the existing approaches.
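The adjacency property referred to above is the defining feature of the Gray code: successive values differ in exactly one bit. A small demonstration of that property alone (independent of the paper's photon-based scheme):

```python
# Demonstration of the Gray code adjacency property mentioned above:
# consecutive integers map to codewords differing in exactly one bit.
# This illustrates the property only, not the paper's photon encoding.
def to_gray(n: int) -> int:
    return n ^ (n >> 1)

for n in range(8):
    g, g_next = to_gray(n), to_gray(n + 1)
    assert bin(g ^ g_next).count("1") == 1  # exactly one bit flips
    print(f"{n}: {g:03b}")
```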
Yang, Y M; Bednarz, B
2013-02-21
Following the proposal by several groups to integrate magnetic resonance imaging (MRI) with radiation therapy, much attention has been afforded to examining the impact of strong (on the order of a Tesla) transverse magnetic fields on photon dose distributions. The effect of the magnetic field on dose distributions must be considered in order to take full advantage of the benefits of real-time intra-fraction imaging. In this investigation, we compared the handling of particle transport in magnetic fields between two Monte Carlo codes, EGSnrc and Geant4, to analyze various aspects of their electromagnetic transport algorithms; both codes are well-benchmarked for medical physics applications in the absence of magnetic fields. A water-air-water slab phantom and a water-lung-water slab phantom were used to highlight dose perturbations near high- and low-density interfaces. We have implemented a method of calculating the Lorentz force in EGSnrc based on theoretical models in literature, and show very good consistency between the two Monte Carlo codes. This investigation further demonstrates the importance of accurate dosimetry for MRI-guided radiation therapy (MRIgRT), and facilitates the integration of a ViewRay MRIgRT system in the University of Wisconsin-Madison's Radiation Oncology Department.
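The abstract does not give the algorithm's details; as a generic illustration of bending a charged-particle step in a transverse magnetic field via the Lorentz force (a simple explicit sketch, not the EGSnrc or Geant4 transport algorithms; all numbers are hypothetical):

```python
import numpy as np

# Generic illustration of transporting a charged particle in a magnetic
# field by rotating its momentum with the Lorentz force, F = q v x B.
# This is a simple explicit sketch, not the EGSnrc/Geant4 algorithms.
def lorentz_step(pos, vel, q_over_m, b_field, dt):
    """Advance position/velocity one step; B only rotates the velocity."""
    acc = q_over_m * np.cross(vel, b_field)
    vel_new = vel + acc * dt
    vel_new *= np.linalg.norm(vel) / np.linalg.norm(vel_new)  # keep |v| fixed
    return pos + vel_new * dt, vel_new

pos = np.zeros(3)
vel = np.array([1.0e8, 0.0, 0.0])     # m/s, hypothetical electron speed
b = np.array([0.0, 0.0, 1.5])         # 1.5 T transverse field, as in MRIgRT
for _ in range(1000):
    pos, vel = lorentz_step(pos, vel, q_over_m=-1.758820e11, b_field=b, dt=1e-13)
print(pos)  # trajectory curls in the x-y plane
```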
Concatenated Coding Using Trellis-Coded Modulation
NASA Technical Reports Server (NTRS)
Thompson, Michael W.
1997-01-01
In the late seventies and early eighties a technique known as Trellis Coded Modulation (TCM) was developed for providing spectrally efficient error correction coding. Instead of adding redundant information in the form of parity bits, redundancy is added at the modulation stage, thereby increasing bandwidth efficiency. A digital communications system can be designed to use bandwidth-efficient multilevel/phase modulation such as Amplitude Shift Keying (ASK), Phase Shift Keying (PSK), Differential Phase Shift Keying (DPSK) or Quadrature Amplitude Modulation (QAM). Performance gain can be achieved by increasing the number of signals over the corresponding uncoded system to compensate for the redundancy introduced by the code. A considerable amount of research and development has been devoted toward developing good TCM codes for severely bandlimited applications. More recently, the use of TCM for satellite and deep space communications applications has received increased attention. This report describes the general approach of using a concatenated coding scheme that features TCM and RS coding. Results have indicated that substantial (6-10 dB) performance gains can be achieved with this approach with comparatively little bandwidth expansion. Since all of the bandwidth expansion is due to the RS code, TCM-based concatenated coding results in roughly 10-50% bandwidth expansion, compared to 70-150% expansion for similar concatenated schemes which use convolutional codes. We stress that combined coding and modulation optimization is important for achieving performance gains while maintaining spectral efficiency.
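The bandwidth-expansion claim above can be checked with simple rate arithmetic; a sketch comparing an RS-only expansion against a convolutionally coded scheme (the specific code parameters are hypothetical but typical examples):

```python
# Rough bandwidth-expansion arithmetic for the concatenated schemes above.
# TCM absorbs its redundancy in a larger constellation, so the expansion
# comes from the outer code alone. Code parameters are hypothetical examples.
def bandwidth_expansion(outer_rate: float, inner_rate: float = 1.0) -> float:
    """Fractional bandwidth growth relative to the uncoded system."""
    return 1.0 / (outer_rate * inner_rate) - 1.0

rs_rate = 223 / 255          # e.g. an RS(255,223) outer code
conv_rate = 1 / 2            # e.g. a rate-1/2 convolutional inner code

print(f"TCM + RS : {100 * bandwidth_expansion(rs_rate):.0f}% expansion")
print(f"conv + RS: {100 * bandwidth_expansion(rs_rate, conv_rate):.0f}% expansion")
```

With these example rates the TCM scheme expands bandwidth by about 14%, versus roughly 129% for the convolutional concatenation, consistent with the 10-50% and 70-150% ranges quoted above.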
NASA Astrophysics Data System (ADS)
Patel, Anita; Pulugundla, Gautam; Smolentsev, Sergey; Abdou, Mohamed; Bhattacharyay, Rajendraprasad
2018-04-01
Following the magnetohydrodynamic (MHD) code validation and verification proposal by Smolentsev et al. (Fusion Eng Des 100:65-72, 2015), we perform code-to-code and code-to-experiment comparisons between two computational solvers, FLUIDYN and HIMAG, which are presently considered as two of the prospective CFD tools for fusion blanket applications. In such applications, an electrically conducting breeder/coolant circulates in the blanket ducts in the presence of a strong plasma-confining magnetic field at high Hartmann numbers Ha (Ha² is the ratio between electromagnetic and viscous forces) and high interaction parameters N (N is the ratio of electromagnetic to inertial forces). The main objective of this paper is to provide the scientific and engineering community with common references to assist fusion researchers in the selection of adequate computational means to be used for blanket design and analysis. As an initial validation case, the two codes are applied to the classic problem of laminar fully developed MHD flow in a rectangular duct. Both codes demonstrate a very good agreement with the analytical solution for Ha up to 15,000. To address the capabilities of the two codes to properly resolve complex geometry flows, we consider a case of three-dimensional developing MHD flow in a geometry comprising a series of interconnected electrically conducting rectangular ducts. The computed electric potential distributions for two flows (Case A: Ha = 515, N = 3.2; Case B: Ha = 2059, N = 63.8) are in very good agreement with the experimental data, while the comparisons for the MHD pressure drop are still unsatisfactory. To better interpret the observed differences, the obtained numerical data are analyzed against earlier theoretical and experimental studies for flows that involve changes in the relative orientation between the flow and the magnetic field.
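For reference, the dimensionless groups quoted above follow the standard definitions Ha = B·L·sqrt(σ/(ρν)) and N = Ha²/Re with Re = U·L/ν; a small sketch with property values that are hypothetical placeholders of the right order for a liquid-metal coolant, not figures from the paper:

```python
import math

# Standard MHD dimensionless groups used above: Ha = B*L*sqrt(sigma/(rho*nu))
# and N = Ha^2 / Re with Re = U*L/nu. Property values below are hypothetical
# placeholders of the right order for a liquid-metal coolant, not from the paper.
sigma = 7.6e5      # electrical conductivity, S/m
rho = 9.5e3        # density, kg/m^3
nu = 1.5e-7        # kinematic viscosity, m^2/s
B = 1.8            # magnetic field, T
L = 0.05           # duct half-width, m
U = 0.1            # mean velocity, m/s

Ha = B * L * math.sqrt(sigma / (rho * nu))
Re = U * L / nu
N = Ha**2 / Re
print(f"Ha = {Ha:.0f}, Re = {Re:.0f}, N = {N:.1f}")
```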
NASA Astrophysics Data System (ADS)
Jaboulay, Jean-Charles; Brun, Emeric; Hugot, François-Xavier; Huynh, Tan-Dat; Malouch, Fadhel; Mancusi, Davide; Tsilanizara, Aime
2017-09-01
After fission or fusion reactor shutdown, the activated structure emits decay photons. For maintenance operations the radiation dose map must be established in the reactor building. Several calculation schemes have been developed to calculate the shutdown dose rate. Such schemes are widely developed for fusion applications, more precisely for the ITER tokamak. This paper presents the rigorous two-step scheme implemented at CEA. It is based on the TRIPOLI-4® Monte Carlo code and the inventory code MENDEL. The ITER shutdown dose rate benchmark has been carried out, and the results are in good agreement with those of the other participants.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dearing, J F; Rose, S D; Nelson, W R
The computational predictions of two well-known sub-channel analysis codes, COBRA-III-C and SABRE-I (wire wrap version), have been evaluated by comparison with steady-state temperature data from the THORS Facility at ORNL. Both codes give good predictions of transverse and axial temperatures when compared with wire-wrap thermocouple data. The crossflow velocity profiles predicted by these codes are similar, which is encouraging since the wire-wrap models are based on different assumptions.
The Magnetic Reconnection Code: an AMR-based fully implicit simulation suite
NASA Astrophysics Data System (ADS)
Germaschewski, K.; Bhattacharjee, A.; Ng, C.-S.
2006-12-01
Extended MHD models, which incorporate two-fluid effects, are promising candidates to enhance understanding of collisionless reconnection phenomena in laboratory, space and astrophysical plasma physics. In this paper, we introduce two simulation codes in the Magnetic Reconnection Code suite which integrate reduced and full extended MHD models. Numerical integration of these models comes with two challenges. First, small-scale spatial structures, e.g. thin current sheets, develop and must be well resolved by the code. Adaptive mesh refinement (AMR) is employed to provide high resolution where needed while maintaining good performance. Secondly, the two-fluid effects in extended MHD give rise to dispersive waves, which lead to a very stringent CFL condition for explicit codes, while reconnection happens on a much slower time scale. We use a fully implicit Crank-Nicolson time stepping algorithm. Since no efficient preconditioners are available for our system of equations, we instead use a direct solver to handle the inner linear solves. This requires us to actually compute the Jacobian matrix, which is handled by a code generator that calculates the derivative symbolically and then outputs code to calculate it.
Evaluation of CFETR as a Fusion Nuclear Science Facility using multiple system codes
NASA Astrophysics Data System (ADS)
Chan, V. S.; Costley, A. E.; Wan, B. N.; Garofalo, A. M.; Leuer, J. A.
2015-02-01
This paper presents the results of a multi-system-code benchmarking study of the recently published China Fusion Engineering Test Reactor (CFETR) pre-conceptual design (Wan et al 2014 IEEE Trans. Plasma Sci. 42 495). Two system codes, the General Atomics System Code (GASC) and the Tokamak Energy System Code (TESC), using different methodologies to arrive at CFETR performance parameters under the same CFETR constraints, show that the correlation between the physics performance and the fusion performance is consistent, and the computed parameters are in good agreement. Optimization of the first wall surface for tritium breeding and the minimization of the machine size are highly compatible. Variations of the plasma currents and profiles lead to changes in the required normalized physics performance; however, they do not significantly affect the optimized size of the machine. GASC and TESC have also been used to explore a lower aspect ratio, larger volume plasma taking advantage of the engineering flexibility in the CFETR design. Assuming the ITER steady-state scenario physics, the larger plasma together with a moderately higher B_T and I_p can result in a high gain Q_fus ≈ 12, P_fus ≈ 1 GW machine approaching DEMO-like performance. It is concluded that the CFETR baseline mode can meet the minimum goal of the Fusion Nuclear Science Facility (FNSF) mission and advanced physics will enable it to address comprehensively the outstanding critical technology gaps on the path to a demonstration reactor (DEMO). Before proceeding with CFETR construction, steady-state operation has to be demonstrated, further development is needed to solve the divertor heat load issue, and blankets have to be designed with tritium breeding ratio (TBR) > 1 as a target.
Is compassion essential to nursing practice?
Hem, Marit Helene; Heggen, Kristin
2004-01-01
The Norwegian Nurses' Association recently (2001) approved a new code of ethics that included compassion as one of the basic values in nursing care. This paper examines the idea of compassion in the context of the Bible story of the Good Samaritan using an analysis of qualitative data from nurses' clinical work with psychiatric patients. The aim is to show how the idea of compassion challenges nursing practice. Thereafter, the paper discusses the benefits of and premises for compassion in care work. The results show that nurses tend not to be guided by compassion in their work with patients. The organisation of the day-to-day work in the hospital ward, the division of labour between nurses and doctors, and the nurses' approach to nursing were identified as influencing this tendency. The study shows that compassion is a radical concept with a potential to promote greater respect for patients' dignity.
Definition of a prospective payment system to reimburse emergency departments.
Levaggi, Rosella; Montefiori, Marcello
2013-10-11
Payers are increasingly turning to Prospective Payment Systems (PPSs) because they incentivize efficiency, but their application to emergency departments (EDs) is difficult because of the high level of uncertainty and variability in the cost of treating each patient. To the best of our knowledge, our work represents the first attempt at defining a PPS for this part of hospital activity. Data were specifically collected for this study and relate to 1011 patients who were triaged at an ED of a major Italian hospital during 1 week in December 2010. The cost for each patient was analytically estimated by adding up several components: 1) physician and other staff costs, imputed on the basis of the time each physician claimed to have spent treating the patient; 2) the cost of each test/treatment the patient actually underwent; 3) overhead costs, shared among patients according to the time elapsed between first examination and discharge from the ED. The distribution of costs by triage code shows that, although the average cost increases across the four triage groups, the variance within each code is quite high. The maximum cost for a yellow code is €1074.7, compared with €680 for red, the most serious code. Using cluster analysis, the red code cluster is enveloped by yellow, and their costs are therefore indistinguishable, while green codes span all cost groups. This suggests that triage code alone is not a good proxy for patient cost, and that other cost drivers need to be included. Crude triage codes cannot be used to define PPSs because they are not sufficiently correlated with costs and are characterized by large variances. However, if combined with other information, such as the number of laboratory and non-laboratory tests/examinations, it is possible to define cost groups that are sufficiently homogeneous to be reimbursed prospectively. This should discourage strategic behavior and allow the ED to break even or create profits, which can be reinvested to improve services. The study provides health policy administrators with a new and feasible tool to implement prospective payment for EDs, and to improve planning and cost control.
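The three cost components listed above add up per patient; a minimal sketch of that accounting (all rates and values are hypothetical placeholders, not the study's data):

```python
# Sketch of the per-patient ED cost build-up described above: staff time,
# tests/treatments, and a time-based overhead share. All rates and values
# are hypothetical placeholders, not figures from the study.
def patient_cost(staff_minutes, test_costs, stay_minutes,
                 staff_rate_per_min=1.2, overhead_rate_per_min=0.4):
    staff = staff_minutes * staff_rate_per_min          # 1) staff time
    tests = sum(test_costs)                             # 2) tests/treatments
    overhead = stay_minutes * overhead_rate_per_min     # 3) overhead share
    return staff + tests + overhead

# A hypothetical yellow-code patient: 25 min of physician time, three tests,
# 180 min between first examination and discharge.
print(f"{patient_cost(25, [40.0, 15.5, 88.0], 180):.2f} EUR")
```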
Comparison of Measured and Block Structured Simulations for the F-16XL Aircraft
NASA Technical Reports Server (NTRS)
Boelens, O. J.; Badcock, K. J.; Elmilgui, A.; Abdol-Hamid, K. S.; Massey, S. J.
2008-01-01
This article presents a comparison of the predictions of three RANS codes for flight conditions of the F-16XL aircraft which feature vortical flow. The three codes, ENSOLV, PMB and PAB3D, solve on structured multi-block grids. Flight data for comparison were available in the form of surface pressures, skin friction, boundary layer data and photographs of tufts. The three codes provided predictions which were consistent with expectations based on the turbulence modelling used (two-equation k-ω-type models, one with vortex corrections, and an Algebraic Stress Model). The agreement with flight data was good, with the exception of the outer wing primary vortex strength. The confidence in the application of the CFD codes to complex fighter configurations increased significantly through this study.
FPGA implementation of advanced FEC schemes for intelligent aggregation networks
NASA Astrophysics Data System (ADS)
Zou, Ding; Djordjevic, Ivan B.
2016-02-01
In state-of-the-art fiber-optic communication systems, fixed forward error correction (FEC) and constellation size are employed. While it is important to closely approach the Shannon limit by using turbo product codes (TPC) and low-density parity-check (LDPC) codes with soft-decision decoding (SDD) algorithms, rate-adaptive techniques, which enable increased information rates over short links and reliable transmission over long links, are likely to become more important with ever-increasing network traffic demands. In this invited paper, we describe a rate-adaptive non-binary LDPC coding technique, and demonstrate its flexibility and good performance, exhibiting no error floor at BERs down to 10^-15 over the entire code rate range, by FPGA-based emulation, making it a viable solution for next-generation high-speed intelligent aggregation networks.
Reaction path of energetic materials using THOR code
NASA Astrophysics Data System (ADS)
Durães, L.; Campos, J.; Portugal, A.
1998-07-01
The method of predicting reaction paths using the THOR code allows the calculation of the composition and thermodynamic properties of the reaction products of energetic materials, for isobaric and isochoric adiabatic combustion and CJ detonation regimes. The THOR code assumes thermodynamic equilibrium of all possible products, at the minimum Gibbs free energy, using the HL EoS. The code makes it possible to estimate various sets of reaction products, obtained successively by the decomposition of the original reacting compound, as a function of the released energy. Two case studies of the thermal decomposition procedure were selected, calculated, and discussed: pure ammonium nitrate and its derived explosive ANFO, and nitromethane, because their equivalence ratios are, respectively, below, near, and above stoichiometry. Predictions of the reaction path are in good correlation with experimental values, proving the validity of the proposed method.
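The core computation, equilibrium composition by Gibbs free-energy minimisation under element-balance constraints, can be sketched as follows. The species set, standard potentials, and ideal-mixing free-energy model are illustrative stand-ins; THOR itself uses the HL equation of state and a much larger product set.

```python
# Minimal sketch: equilibrium composition by constrained Gibbs minimisation.
# Species and mu0 values are illustrative placeholders, not THOR data.
import numpy as np
from scipy.optimize import minimize

R, T = 8.314, 3000.0                      # J/(mol K), K
species = ["CO2", "CO", "O2"]
mu0 = np.array([-396e3, -275e3, -62e3])   # assumed standard potentials (J/mol)
A = np.array([[1, 1, 0],                  # C atoms per species
              [2, 1, 2]])                 # O atoms per species
b = np.array([1.0, 2.0])                  # elemental totals for 1 mol CO2 feed

def gibbs(n):
    n = np.maximum(n, 1e-12)              # guard the logarithm
    return float(np.sum(n * (mu0 + R * T * np.log(n / n.sum()))))

res = minimize(gibbs, x0=np.array([0.5, 0.5, 0.25]),     # feasible starting mix
               constraints={"type": "eq", "fun": lambda n: A @ n - b},
               bounds=[(1e-10, None)] * 3, method="SLSQP")
print(dict(zip(species, res.x.round(4))))
```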
Hyperbolic and semi-hyperbolic surface codes for quantum storage
NASA Astrophysics Data System (ADS)
Breuckmann, Nikolas P.; Vuillot, Christophe; Campbell, Earl; Krishna, Anirudh; Terhal, Barbara M.
2017-09-01
We show how a hyperbolic surface code could be used for overhead-efficient quantum storage. We give numerical evidence for a noise threshold of 1.3% for the {4,5}-hyperbolic surface code in a phenomenological noise model (as compared with 2.9% for the toric code). In this code family, parity checks are of weight 4 and 5, while each qubit participates in four different parity checks. We introduce a family of semi-hyperbolic codes that interpolate between the toric code and the {4,5}-hyperbolic surface code in terms of encoding rate and threshold. We show how these hyperbolic codes outperform the toric code in terms of qubit overhead for a target logical error probability. We show how Dehn twists and lattice code surgery can be used to read and write individual qubits to this quantum storage medium.
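The favourable encoding rate of such codes follows from Euler-characteristic counting on a closed {r,s} tessellation: with n qubits on edges, there are V = 2n/s vertices and F = 2n/r faces, so the code encodes k = 2 - (V - n + F) = 2 + n(1 - 2/r - 2/s) logical qubits. The sketch below reproduces that standard arithmetic; it is generic surface-code counting, not code from the paper.

```python
# Logical qubit count of a {r,s} hyperbolic surface code from the
# Euler characteristic (standard counting argument).
def logical_qubits(n: int, r: int, s: int) -> float:
    return 2 + n * (1 - 2 / r - 2 / s)

n = 360                       # chosen so 2n/r and 2n/s are integers
k = logical_qubits(n, 4, 5)   # {4,5} tiling, as in the abstract
print(k, k / n)               # -> 38.0, ~0.106; rate tends to 1/10 as n grows
```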
NASA Astrophysics Data System (ADS)
Haneda, K.
2016-04-01
The purpose of this study was to estimate the impact of the radical effect in proton beams using a combined approach with physical data and gel data. The study used two dosimeters: ionization chambers and polymer gel dosimeters. Polymer gel dosimeters have specific advantages compared with other dosimeters: they can measure chemical reaction and are at the same time a phantom that can map dose in three dimensions continuously and easily. First, a depth-dose curve for a 210 MeV proton beam was measured using an ionization chamber and a gel dosimeter. Second, the spatial distribution of the physical dose was calculated with the Monte Carlo code system PHITS; to verify the accuracy of the Monte Carlo calculation, the results were compared with the experimental data from the ionization chamber. Last, the rate of the radical effect against the physical dose was evaluated. The simulation results were compared with the measured depth-dose distribution and showed good agreement. The spatial distribution of the gel dose with a threshold LET value for the proton beam was calculated with the same simulation code. The relative distribution of the radical effect was then calculated at each depth as the quotient of the relative doses obtained from the physical and gel measurements. The agreement between the relative distributions of the gel dosimeter and the radical effect was good for the proton beams.
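The final step amounts to an element-wise division of two relative dose curves at each depth. The arrays below are placeholders, not measured data, and the orientation of the quotient follows the abstract's loose description; it is a sketch of the bookkeeping only.

```python
# Depth-wise quotient of relative doses, as described above.
# Numbers are illustrative placeholders, not experimental data.
import numpy as np

depth = np.linspace(0, 300, 7)                               # mm, illustrative
physical = np.array([1.00, 1.02, 1.05, 1.10, 1.30, 2.00, 0.10])  # chamber/MC, relative
gel = np.array([0.98, 1.00, 1.02, 1.05, 1.15, 1.50, 0.08])       # gel dosimeter, relative

radical_effect = gel / physical   # relative radical effect at each depth
print(np.round(radical_effect, 3))
```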
PopCORN: Hunting down the differences between binary population synthesis codes
NASA Astrophysics Data System (ADS)
Toonen, S.; Claeys, J. S. W.; Mennekens, N.; Ruiter, A. J.
2014-02-01
Context. Binary population synthesis (BPS) modelling is a very effective tool to study the evolution and properties of various types of close binary systems. The uncertainty in the parameters of the model and their effect on a population can be tested in a statistical way, which then leads to a deeper understanding of the underlying (sometimes poorly understood) physical processes involved. Several BPS codes exist that have been developed with different philosophies and aims. Although BPS has been very successful for studies of many populations of binary stars, in the particular case of the study of the progenitors of supernovae Type Ia, the predicted rates and ZAMS progenitors vary substantially between different BPS codes. Aims: To understand the predictive power of BPS codes, we study the similarities and differences in the predictions of four different BPS codes for low- and intermediate-mass binaries. We investigate the differences in the characteristics of the predicted populations, and whether they are caused by different assumptions made in the BPS codes or by numerical effects, e.g. a lack of accuracy in BPS codes. Methods: We compare a large number of evolutionary sequences for binary stars, starting with the same initial conditions and following the evolution until the first (and, when applicable, the second) white dwarf (WD) is formed. To simplify the complex problem of comparing BPS codes that are based on many (often different) assumptions, we equalise the assumptions as much as possible to examine the inherent differences of the four BPS codes. Results: We find that the simulated populations are similar between the codes. For the population of binaries with one WD, there is very good agreement between the physical characteristics, the evolutionary channels that lead to the birth of these systems, and their birthrates. For the double WD population, there is good agreement on which evolutionary channels exist to create double WDs, and a rough agreement on the characteristics of the double WD population. The four codes agree well on which progenitor systems lead to a single or double WD system and which do not. Most importantly, we find that for these two populations the differences in the predictions from the four codes are due not to numerical differences but to different inherent assumptions. We identify critical assumptions for BPS studies that need to be studied in more detail. Appendices are available in electronic form at http://www.aanda.org
Quantum dynamics of tunneling dominated reactions at low temperatures
NASA Astrophysics Data System (ADS)
Hazra, Jisha; Balakrishnan, N.
2015-05-01
We report a quantum dynamics study of the Li + HF → LiF + H reaction at low temperatures of interest to cooling and trapping experiments. Contributions from non-zero partial waves are analyzed, and results show narrow resonances in the energy dependence of the cross section that survive partial wave summation. The computations are performed using the ABC code; a simple modification of the ABC code that enables separate energy cutoffs for the reactant and product rovibrational energy levels is found to dramatically reduce the basis set size and computational expense. Results obtained using two ab initio electronic potential energy surfaces for the LiHF system show strong sensitivity to the choice of the potential. In particular, small differences in the barrier heights of the two potential surfaces are found to dramatically influence the reaction cross sections at low energies. Comparison with recent measurements of the reaction cross section (Bobbenkamp et al. 2011 J. Chem. Phys. 135 204306) shows similar energy dependence in the threshold regime and an overall good agreement with experimental data compared to previous theoretical results. Also, the usefulness of a recently introduced method for ultracold reactions, which employs the quantum close-coupling method at short range and multichannel quantum defect theory at long range, is demonstrated by accurately evaluating product state-resolved cross sections for the D + H2 and H + D2 reactions.
Using Optimization to Improve Test Planning
2017-09-01
With modifications to make the input more user-friendly and to display the output differently, the test and evaluation test schedule optimization model would be a good tool for test and evaluation schedulers. Subject terms: schedule optimization, test planning.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Massimo, F., E-mail: francesco.massimo@ensta-paristech.fr; Dipartimento SBAI, Università di Roma “La Sapienza“, Via A. Scarpa 14, 00161 Roma; Atzeni, S.
Architect, a time-explicit hybrid code designed to perform quick simulations for electron-driven plasma wakefield acceleration, is described. In order to obtain beam quality acceptable for applications, control of the beam-plasma dynamics is necessary. Particle-in-Cell (PIC) codes represent the state-of-the-art technique to investigate the underlying physics and possible experimental scenarios; however, PIC codes demand heavy computational resources. The Architect code substantially reduces the need for computational resources by using a hybrid approach: relativistic electron bunches are treated kinetically, as in a PIC code, and the background plasma as a fluid. Cylindrical symmetry is assumed for the solution of the electromagnetic fields and fluid equations. In this paper, both the underlying algorithms and a comparison with a fully three-dimensional particle-in-cell code are reported. The comparison highlights the good agreement between the two models up to weakly non-linear regimes. In highly non-linear regimes the two models disagree only in a localized region, where the plasma electrons expelled by the bunch close up at the end of the first plasma oscillation.
Hallberg, Ulrika; Klingberg, Gunilla
2005-10-01
Good collaboration between medical and dental care is essential to provide not only good oral health care, but also more holistic care for children with disabilities. The aim was to explore and describe medical health care professionals' assessments and considerations of orofacial problems and treatment needs in children with disabilities and in their families. In-depth interviews focusing on orofacial function were carried out with 17 medical health care employees. Interviews were transcribed verbatim and analysed in open and focused (selective) coding processes according to grounded theory. A core category was identified and named focusing on basic needs, showing that oral health care assessment was not on the agenda of medical health care professionals, but was instead viewed as a responsibility of parents or dentists. This study shows that oral health issues are not fully integrated in the medical care of children with disabilities. The omission of oral health issues from the medical agenda implies a risk of oral health problems in children with disabilities. To put the oral cavity and oral health on the medical agenda, dentists need to influence the undergraduate training of medical professionals and to initiate co-operation with the medical care system.
Deformations of thick two-material cylinder under axially varying radial pressure
NASA Technical Reports Server (NTRS)
Patel, Y. A.
1976-01-01
Stresses and deformations in a thick, short, composite cylinder subjected to axially varying radial pressure are studied. The effect of slippage at the interface is examined. In the NASTRAN finite element model, the multipoint constraint feature is utilized. Results are compared with a theoretical analysis and with the SAP-IV computer code. Results from the NASTRAN code are in good agreement with the analytical solutions, and suggest a considerable influence of interfacial slippage on the axial bending stresses in the cylinder.
1985-09-01
Code 0: Physics (Calculus-Based) or Physical Science discipline ... an opportunity for officers with inadequate mathematical and physical science backgrounds to establish a good math foundation to be able to qualify for a technical curriculum [Ref. 5: page 36]. There is also a six-week refresher available that is designed to rapidly cover the calculus and physics
1994-09-01
650 B.C. in Asia Minor, coins were developed and used in acquiring goods and services. In France, during the eighteenth century, paper money made its... counterfeited . [INFO94, p. 23] Other weaknesses of bar code technology include limited data storage capability based on the bar code symbology used when...extremely accurate, with calculated error rates as low as 1 in 100 trillion, and are difficult to counterfeit . Strong magnetic fields cannot erase RF
He, Yi; Xiao, Yi; Liwo, Adam; Scheraga, Harold A
2009-10-01
We explored the energy-parameter space of our coarse-grained UNRES force field for large-scale ab initio simulations of protein folding, to obtain good initial approximations for hierarchical optimization of the force field with new virtual-bond-angle bending and side-chain-rotamer potentials which we recently introduced to replace the statistical potentials. 100 sets of energy-term weights were generated randomly, and good sets were selected by carrying out replica-exchange molecular dynamics simulations of two peptides with a minimal alpha-helical and a minimal beta-hairpin fold, respectively: the tryptophan cage (PDB code: 1L2Y) and tryptophan zipper (PDB code: 1LE1). Eight sets of parameters produced native-like structures of these two peptides. These eight sets were tested on two larger proteins: the engrailed homeodomain (PDB code: 1ENH) and FBP WW domain (PDB code: 1E0L); two sets were found to produce native-like conformations of these proteins. These two sets were tested further on a larger set of nine proteins with alpha or alpha + beta structure and found to locate native-like structures of most of them. These results demonstrate that, in addition to finding reasonable initial starting points for optimization, an extensive search of parameter space is a powerful method to produce a transferable force field. Copyright 2009 Wiley Periodicals, Inc.
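The parameter-space exploration is, in essence, a random search with selection: draw random weight sets, score each, and keep the good ones. The sketch below shows that pattern; the scoring function is a stand-in for the replica-exchange MD runs and native-likeness assessment actually used in the study.

```python
# Schematic random search over energy-term weights, as described above.
# folding_score() is a placeholder for the REMD-based evaluation.
import random

def random_weight_set(n_terms: int) -> list[float]:
    return [random.uniform(0.0, 2.0) for _ in range(n_terms)]

def folding_score(weights: list[float]) -> float:
    # Placeholder metric (lower is better); in the real workflow this is
    # derived from native-likeness of REMD-folded test peptides.
    return sum((w - 1.0) ** 2 for w in weights)

candidates = [random_weight_set(8) for _ in range(100)]       # 100 sets, as in the study
good_sets = sorted(candidates, key=folding_score)[:8]         # keep the best 8
print(round(folding_score(good_sets[0]), 3))
```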
How collaboration in therapy becomes therapeutic: the therapeutic collaboration coding system.
Ribeiro, Eugénia; Ribeiro, António P; Gonçalves, Miguel M; Horvath, Adam O; Stiles, William B
2013-09-01
The quality and strength of the therapeutic collaboration, the core of the alliance, is reliably associated with positive therapy outcomes. The urgent challenge for clinicians and researchers is constructing a conceptual framework to integrate the dialectical work that fosters collaboration, with a model of how clients make progress in therapy. We propose a conceptual account of how collaboration in therapy becomes therapeutic. In addition, we report on the construction of a coding system - the therapeutic collaboration coding system (TCCS) - designed to analyse and track on a moment-by-moment basis the interaction between therapist and client. Preliminary evidence is presented regarding the coding system's psychometric properties. The TCCS evaluates each speaking turn and assesses whether and how therapists are working within the client's therapeutic zone of proximal development, defined as the space between the client's actual therapeutic developmental level and their potential developmental level that can be reached in collaboration with the therapist. We applied the TCCS to five cases: a good and a poor outcome case of narrative therapy, a good and a poor outcome case of cognitive-behavioural therapy, and a dropout case of narrative therapy. The TCCS offers markers that may help researchers better understand the therapeutic collaboration on a moment-to-moment basis and may help therapists better regulate the relationship. © 2012 The British Psychological Society.
Hinds, Pamela S.; Oakes, Linda L.; Hicks, Judy; Powell, Brent; Srivastava, Deo Kumar; Spunt, Sheri L.; Harper, JoAnn; Baker, Justin N.; West, Nancy K.; Furman, Wayne L.
2009-01-01
Purpose When a child's cancer progresses beyond current treatment capability, the parents are likely to participate in noncurative treatment decision making. One factor that helps parents to make these decisions and remain satisfied with them afterward is deciding as they believe a good parent would decide. Because being a good parent to a child with incurable cancer has not been formally defined, we conducted a descriptive study to develop such a definition. Methods In face-to-face interviews, 62 parents who had made one of three decisions (enrollment on a phase I study, do not resuscitate status, or terminal care) for 58 patients responded to two open-ended questions about the definition of a good parent and about how clinicians could help them fulfill this role. For semantic content analysis of the interviews, a rater panel trained in this method independently coded all responses. Inter-rater reliability was excellent. Results Among the aspects of the definition qualitatively identified were making informed, unselfish decisions in the child's best interest, remaining at the child's side, showing the child that he is cherished, teaching the child to make good decisions, advocating for the child with the staff, and promoting the child's health. We also identified 15 clinician strategies that help parents be a part of making these decisions on behalf of a child with advanced cancer. Conclusion The definition and the strategies may be used to guide clinicians in helping parents fulfill the good parent role and take comfort afterward in having acted as a good parent. PMID:19805693
Student perceptions of a good teacher: the gender perspective.
Jules, V; Kutnick, P
1997-12-01
A large-scale survey of pupils' perceptions of a good teacher in the Caribbean republic of Trinidad and Tobago is reported. An essay-based, interpretative mode of research was used to elicit and identify constructs used by boys and girls. The study explores similarities and differences between boys and girls in their perceptions of a good teacher, in a society where girls achieve superior academic performance to boys. A total of 1756 pupils and students aged between 8 and 16 provided the sample, which was proportional, stratified, and clustered. Within these constraints, classrooms were randomly selected to be representative of primary and secondary schools across the two islands. Altogether 1539 essays and 217 interviews were content analysed, coded for age development, and compared between boys and girls. Content items identified by the pupils were logically grouped into: physical and personal characteristics of the teacher, quality of the relationship between the teacher and pupil, control of behaviour by the teacher, descriptions of the teaching process, and educational and other outcomes obtained by pupils due to teacher efforts. Female pupils identified more good-teacher concepts at all age levels than males. There was some commonality between the sexes in concepts regarding interpersonal relationships and inclusiveness in good teachers' teaching practices, and boys showed significantly greater concerns regarding teacher control and use of punishment. Males as young as 8 years stated that good teachers should be sensitive to their needs. Only among the 16-year-old males were males noted as good teachers. Consideration is given to the roles of male and female teachers, how their classroom actions may set the basis for future success (or failure) of their pupils, and the needs of pupils with regard to teacher support within developing and developed countries.
NASA Astrophysics Data System (ADS)
Kawamura, Teruo; Kishiyama, Yoshihisa; Higuchi, Kenichi; Sawahashi, Mamoru
In the Evolved UTRA (UMTS Terrestrial Radio Access) uplink, single-carrier frequency division multiple access (SC-FDMA) radio access was adopted owing to its advantageous low peak-to-average power ratio (PAPR), which leads to wide coverage-area provisioning with the limited peak transmission power of user equipment. This paper proposes orthogonal pilot channel generation using the combination of FDMA and CDMA in the SC-FDMA-based Evolved UTRA uplink. In the proposed method, we employ distributed FDMA transmission for simultaneously accessing users with different transmission bandwidths, and CDMA transmission for simultaneously accessing users with identical transmission bandwidths. Moreover, we apply a code sequence with a good auto-correlation property, such as a Constant Amplitude Zero Auto-Correlation (CAZAC) sequence, employing cyclic shifts to increase the number of sequences. Simulation results show that the average packet error rate performance using an orthogonal pilot channel with the combination of FDMA and CDMA in a six-user environment, i.e., four users each with a 1.25-MHz transmission bandwidth and two users each with a 5-MHz transmission bandwidth, employing turbo coding with a coding rate of R=1/2 and QPSK and 16QAM data modulation, coincides well with that in a single-user environment with the same transmission bandwidth. We show that the proposed orthogonal pilot channel structure, using the combination of distributed FDMA and CDMA transmissions and the application of the CAZAC sequence, is effective in the SC-FDMA-based Evolved UTRA uplink.
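The pilot construction rests on the CAZAC property: a Zadoff-Chu sequence has constant amplitude, and its cyclic autocorrelation is zero at every non-zero lag, so cyclically shifted copies are mutually orthogonal and can be assigned to different users. A minimal check, with an illustrative length rather than the actual E-UTRA numerology:

```python
# CAZAC property check for a Zadoff-Chu sequence (illustrative length).
import numpy as np

N, u = 139, 1                                   # prime length, root index (assumed)
n = np.arange(N)
zc = np.exp(-1j * np.pi * u * n * (n + 1) / N)  # Zadoff-Chu sequence for odd N

print(np.allclose(np.abs(zc), 1.0))             # True: constant amplitude
print(abs(np.vdot(zc, np.roll(zc, 7))))         # ~0: a cyclic shift is orthogonal
```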
Domain decomposition and matching for time-domain analysis of motions of ships advancing in head sea
NASA Astrophysics Data System (ADS)
Tang, Kai; Zhu, Ren-chuan; Miao, Guo-ping; Fan, Ju
2014-08-01
A domain decomposition and matching method in the time domain is outlined for simulating the motions of ships advancing in waves. The flow field is decomposed into inner and outer domains by an imaginary control surface; the Rankine source method is applied to the inner domain, while the transient Green function method is used in the outer domain. The two initial boundary value problems are matched on the control surface. The corresponding numerical codes are developed, and the added masses, wave exciting forces, and motions of ships advancing in head sea are presented and verified for the Series 60 ship and the S175 containership. Good agreement is obtained when the numerical results are compared with experimental data and other references. The present method is more efficient because panel discretization is required only in the inner domain, and good numerical stability is achieved, avoiding the divergence problem for ships with flare.
An Anisotropic A posteriori Error Estimator for CFD
NASA Astrophysics Data System (ADS)
Feijóo, Raúl A.; Padra, Claudio; Quintana, Fernando
In this article, a robust anisotropic adaptive algorithm is presented to solve compressible-flow equations using a stabilized CFD solver and automatic mesh generators. The association includes a mesh generator, a flow solver, and an a posteriori error-estimator code. The estimator was selected among several choices available (Almeida et al. (2000). Comput. Methods Appl. Mech. Engng, 182, 379-400; Borges et al. (1998). "Computational mechanics: new trends and applications". Proceedings of the 4th World Congress on Computational Mechanics, Bs.As., Argentina), giving a powerful computational tool. The main aim is to capture solution discontinuities (in this case, shocks) using the least amount of computational resources, i.e. elements, compatible with a solution of good quality. This leads to high aspect-ratio (stretched) elements. To achieve this, a directional error estimator was specifically selected. The numerical results show good behavior of the error estimator, resulting in strongly adapted meshes in a few steps, typically three or four iterations, enough to capture shocks using a moderate and well-distributed number of elements.
Indonesian journalistic competitions: tribute or threat for press practice
NASA Astrophysics Data System (ADS)
Dewi, P. A. R.; Aji, G. G.; Sukardani, P. S.
2018-01-01
This research investigates journalists' understanding of the practice of glittering generalities (the positive and favourable coverage produced by journalistic contests), the prizes that attract them, their motives to participate, and their beliefs about the ethics involved. It also examines how the chief editor, as the decision maker in the mass media, takes responsibility for the glittering news. The research uses a case study method, conducting in-depth interviews with journalists, editors, and the professional alliance to collect the data, which are analysed within a critical paradigm. The results show that journalists believe the competitions are good and see no violation as long as they stick to the press code of conduct, but the chief editors and the professional alliance are beginning to recognize the damage done by the contests. The findings of this work will be valuable for developing critical thinking among press workers and for promoting awareness in society to control media practice.
NASA Astrophysics Data System (ADS)
Amiraux, Mathieu
Rotorcraft Blade-Vortex Interaction (BVI) remains one of the most challenging flow phenomena to simulate numerically. Over the past decade, the HART-II rotor test, with its extensive experimental dataset, has been a major database for the validation of CFD codes. Its strong BVI signature, with high levels of intrusive noise and vibrations, makes it a difficult test for computational methods. The main challenge is to accurately capture and preserve the vortices which interact with the rotor, while predicting correct blade deformations and loading. This doctoral dissertation presents the application of a coupled CFD/CSD methodology to the problem of helicopter BVI and compares three levels of fidelity for aerodynamic modeling: a hybrid lifting-line/free-wake (wake coupling) method with a modified compressible unsteady model; a hybrid URANS/free-wake method; and a URANS-based wake-capturing method using multiple overset meshes to capture the entire flow field. To further increase numerical correlation, three helicopter fuselage models are implemented in the framework. The first is a high-resolution 3D GPU panel code; the second is an immersed-boundary-based method with 3D elliptic grid adaption; the last uses a body-fitted, curvilinear fuselage mesh. The main contribution of this work is the implementation and systematic comparison of multiple numerical methods for BVI modeling. The trade-offs between solution accuracy and computational cost are highlighted for the different approaches. Various improvements have been made to each code to enhance physical fidelity, while advanced technologies, such as GPU computing, have been employed to increase efficiency. The resulting numerical setup covers all aspects of the simulation, creating a truly multi-fidelity and multi-physics framework. Overall, the wake-capturing approach showed the best BVI phasing correlation and good blade deflection predictions, with slightly under-predicted aerodynamic loading magnitudes; however, it proved to be much more expensive than the other two methods. Wake coupling with the RANS solver gave very good loading magnitude predictions, and therefore good acoustic intensities, at acceptable computational cost. The lifting-line-based technique often over-predicted aerodynamic levels, due to the degree of empiricism of the model, but its very short run times, thanks to GPU technology, make it a very attractive approach.
Quantum error-correcting codes from algebraic geometry codes of Castle type
NASA Astrophysics Data System (ADS)
Munuera, Carlos; Tenório, Wanderson; Torres, Fernando
2016-10-01
We study algebraic geometry codes producing quantum error-correcting codes by the CSS construction. We pay particular attention to the family of Castle codes. We show that many of the examples known in the literature in fact belong to this family of codes. We systematize these constructions by showing the common theory that underlies all of them.
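For reference, the CSS construction takes two classical codes whose parity-check matrices satisfy Hx·Hz^T = 0 over GF(2). The toy check below uses the [7,4] Hamming code for both X and Z checks, which yields the Steane code; this is generic CSS machinery, not the Castle-code construction of the paper.

```python
# CSS validity check over GF(2), using the [7,4] Hamming code for both
# X and Z stabilizers (the Steane code). Generic illustration only.
import numpy as np

H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
Hx = Hz = H                                   # Hamming code contains its dual

print(np.all((Hx @ Hz.T) % 2 == 0))           # True -> X and Z checks commute
n, rx, rz = H.shape[1], Hx.shape[0], Hz.shape[0]
print("logical qubits:", n - rx - rz)         # 7 - 3 - 3 = 1 (Steane code)
```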
Orion Parachute Riser Cutter Development
NASA Technical Reports Server (NTRS)
Oguz, Sirri; Salazar, Frank
2011-01-01
This paper presents the tests and analytical approach used in the development of a steel riser cutter for the CEV Parachute Assembly System (CPAS) used on the Orion crew module. Figure 1 shows the riser cutter and the steel riser bundle, which consists of six individual cables. Due to the highly compressed schedule, the initial unavailability of the riser material, and the Orion Forward Bay mechanical constraints, JSC relied primarily on a combination of internal ballistics analysis and LS-DYNA simulation for this project. Various one-dimensional internal ballistics codes that use standard equations of state and conservation of energy have commonly been used in the development of CAD devices for initial first-order estimates and as an enhancement to the test program. While these codes are very accurate for propellant performance prediction, they usually lack a fully defined kinematic model for dynamic predictions. A simple piston device can easily and accurately be modeled using an equation of motion. However, the accuracy of analytical models is greatly reduced for more complicated devices with complex external loads, nonlinear trajectories, or unique unlocking features. A 3D finite element model of a CAD device with all critical features included can vastly improve the analytical ballistic predictions when it is used as a supplement to the ballistic code. During this project, an LS-DYNA structural 3D model was used to predict the riser resisting load needed for the ballistic code. A Lagrangian model with eroding elements, shown in Figure 2, was used for the blade, steel riser, and anvil. The riser material failure strain was fine-tuned by matching the dent depth on the anvil with the actual test data. The LS-DYNA model was also utilized to optimize the blade tip design for the most efficient cut. In parallel, the propellant type and amount were determined using the CADPROG internal ballistics code. Initial test results showed a good match with the LS-DYNA and CADPROG simulations. The final paper will present a detailed roadmap from initial ballistic modeling and LS-DYNA simulation to the performance testing. A blade shape optimization study will also be presented.
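The "simple piston device modeled using an equation of motion" mentioned above can be sketched as m·x'' = P(t)·A − F_resist, integrated explicitly in time. The pressure history, mass, area, and resisting load below are invented placeholders, not CADPROG or CPAS values.

```python
# Toy piston equation of motion for a cutter-like device.
# All parameters are illustrative placeholders.
import math

m, area = 0.05, 2.0e-4            # moving mass (kg), effective bore area (m^2); assumed
f_resist = 4000.0                 # assumed constant riser resisting load (N)

def pressure(t: float) -> float:  # crude exponentially decaying chamber pressure (Pa)
    return 80e6 * math.exp(-t / 5e-4)

x, v, t, dt = 0.0, 0.0, 0.0, 1e-6
while x < 0.02 and t < 0.01:      # integrate m*x'' = P(t)*A - F_resist to a 20 mm stroke
    a = (pressure(t) * area - f_resist) / m
    v += a * dt
    x += v * dt
    t += dt
print(f"stroke completed at t = {t*1e3:.2f} ms, blade speed {v:.1f} m/s")
```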
Orso, Massimiliano; Serraino, Diego; Abraha, Iosief; Fusco, Mario; Giovannini, Gianni; Casucci, Paola; Cozzolino, Francesco; Granata, Annalisa; Gobbato, Michele; Stracci, Fabrizio; Ciullo, Valerio; Vitale, Maria Francesca; Eusebi, Paolo; Orlandi, Walter; Montedori, Alessandro; Bidoli, Ettore
2018-04-20
To assess the accuracy of International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) codes in identifying subjects with melanoma. A diagnostic accuracy study comparing melanoma ICD-9-CM codes (index test) with medical chart (reference standard). Case ascertainment was based on neoplastic lesion of the skin and a histological diagnosis from a primary or metastatic site positive for melanoma. Administrative databases from Umbria Region, Azienda Sanitaria Locale (ASL) Napoli 3 Sud (NA) and Friuli Venezia Giulia (FVG) Region. 112, 130 and 130 cases (subjects with melanoma) were randomly selected from Umbria, NA and FVG, respectively; 94 non-cases (subjects without melanoma) were randomly selected from each unit. Sensitivity and specificity for ICD-9-CM code 172.x located in primary position. The most common melanoma subtype was malignant melanoma of skin of trunk, except scrotum (ICD-9-CM code: 172.5), followed by malignant melanoma of skin of lower limb, including hip (ICD-9-CM code: 172.7). The mean age of the patients ranged from 60 to 61 years. Most of the diagnoses were performed in surgical departments.The sensitivities were 100% (95% CI 96% to 100%) for Umbria, 99% (95% CI 94% to 100%) for NA and 98% (95% CI 93% to 100%) for FVG. The specificities were 88% (95% CI 80% to 93%) for Umbria, 77% (95% CI 69% to 85%) for NA and 79% (95% CI 71% to 86%) for FVG. The case definition for melanoma based on clinical or instrumental diagnosis, confirmed by histological examination, showed excellent sensitivities and good specificities in the three operative units. Administrative databases from the three operative units can be used for epidemiological and outcome research of melanoma. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
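The reported figures come from standard 2x2 accuracy arithmetic against the chart-review reference standard: sensitivity = TP/(TP+FN), specificity = TN/(TN+FP). A minimal sketch with illustrative counts (not the study's raw tables):

```python
# Sensitivity and specificity from a 2x2 table; counts are illustrative.
def sens_spec(tp: int, fn: int, tn: int, fp: int) -> tuple[float, float]:
    return tp / (tp + fn), tn / (tn + fp)

# e.g. 111 of 112 cases flagged by the code, 83 of 94 non-cases unflagged
sens, spec = sens_spec(tp=111, fn=1, tn=83, fp=11)
print(f"sensitivity {sens:.0%}, specificity {spec:.0%}")
```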
Simulations of pattern dynamics for reaction-diffusion systems via SIMULINK
2014-01-01
Background Investigation of the nonlinear pattern dynamics of a reaction-diffusion system almost always requires numerical solution of the system’s set of defining differential equations. Traditionally, this would be done by selecting an appropriate differential equation solver from a library of such solvers, then writing computer codes (in a programming language such as C or Matlab) to access the selected solver and display the integrated results as a function of space and time. This “code-based” approach is flexible and powerful, but requires a certain level of programming sophistication. A modern alternative is to use a graphical programming interface such as Simulink to construct a data-flow diagram by assembling and linking appropriate code blocks drawn from a library. The result is a visual representation of the inter-relationships between the state variables whose output can be made completely equivalent to the code-based solution. Results As a tutorial introduction, we first demonstrate application of the Simulink data-flow technique to the classical van der Pol nonlinear oscillator, and compare Matlab and Simulink coding approaches to solving the van der Pol ordinary differential equations. We then show how to introduce space (in one and two dimensions) by solving numerically the partial differential equations for two different reaction-diffusion systems: the well-known Brusselator chemical reactor, and a continuum model for a two-dimensional sheet of human cortex whose neurons are linked by both chemical and electrical (diffusive) synapses. We compare the relative performances of the Matlab and Simulink implementations. Conclusions The pattern simulations by Simulink are in good agreement with theoretical predictions. Compared with traditional coding approaches, the Simulink block-diagram paradigm reduces the time and programming burden required to implement a solution for reaction-diffusion systems of equations. Construction of the block-diagram does not require high-level programming skills, and the graphical interface lends itself to easy modification and use by non-experts. PMID:24725437
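The "code-based" route that the tutorial contrasts with Simulink looks like the following in Python, with scipy standing in for Matlab's ode45: the van der Pol equation x'' − μ(1 − x²)x' + x = 0 is rewritten as a first-order system and handed to a library solver.

```python
# Code-based solution of the van der Pol oscillator (scipy in place of
# the Matlab solver discussed in the tutorial).
import numpy as np
from scipy.integrate import solve_ivp

mu = 1.0  # damping parameter

def vdp(t, y):
    x, v = y
    return [v, mu * (1 - x**2) * v - x]

sol = solve_ivp(vdp, (0, 30), [2.0, 0.0], max_step=0.05)
print(sol.y[0, -1], sol.y[1, -1])   # state after the limit cycle settles
```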