Sample records for distributed space-time coding

  1. Connectivity Restoration in Wireless Sensor Networks via Space Network Coding.

    PubMed

    Uwitonze, Alfred; Huang, Jiaqing; Ye, Yuanqing; Cheng, Wenqing

    2017-04-20

    The problem of finding the number and optimal positions of relay nodes for restoring network connectivity in partitioned Wireless Sensor Networks (WSNs) is Non-deterministic Polynomial-time hard (NP-hard), so heuristic methods are preferred to solve it. This paper proposes a novel polynomial-time heuristic algorithm, Relay Placement using Space Network Coding (RPSNC), to solve this problem, where Space Network Coding, also called Space Information Flow (SIF), is a new research paradigm that studies network coding in Euclidean space, in which extra relay nodes can be introduced to reduce the cost of communication. Unlike contemporary schemes that are often based on a Minimum Spanning Tree (MST), a Euclidean Steiner Minimal Tree (ESMT), or a combination of the two, RPSNC is a new min-cost multicast space network coding approach that combines Delaunay triangulation and non-uniform partitioning techniques to generate a set of candidate relay nodes; linear programming is then applied to choose the optimal relay nodes and compute their connection links with terminals. Subsequently, an equilibrium method refines the locations of the optimal relay nodes by moving them to balanced positions. RPSNC can adapt to any density distribution of terminals and relay nodes. The performance and complexity of RPSNC are analyzed and its performance is validated through simulation experiments.
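
    As a rough illustration of the candidate-generation step, the sketch below triangulates a set of terminals and proposes the triangle centroids as candidate relay positions. This covers only the Delaunay part under assumed inputs; RPSNC's non-uniform partitioning, the linear program, and the equilibrium refinement are not reproduced here, and the terminal coordinates are hypothetical.

    import numpy as np
    from scipy.spatial import Delaunay

    def candidate_relays(terminals):
        """Propose each Delaunay triangle's centroid as a candidate relay
        (a sketch of the triangulation step only, not the full RPSNC)."""
        pts = np.asarray(terminals, dtype=float)
        tri = Delaunay(pts)
        return pts[tri.simplices].mean(axis=1)  # one centroid per triangle

    terminals = [(0, 0), (4, 0), (2, 3), (5, 4), (1, 5)]
    print(candidate_relays(terminals))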

  2. Compiling global name-space programs for distributed execution

    NASA Technical Reports Server (NTRS)

    Koelbel, Charles; Mehrotra, Piyush

    1990-01-01

    Distributed memory machines do not provide hardware support for a global address space. Thus programmers are forced to partition the data across the memories of the architecture and use explicit message passing to communicate data between processors. The compiler support required to allow programmers to express their algorithms using a global name-space is examined. A general method is presented for the analysis of a high-level source program and its translation to a set of independently executing tasks communicating via messages. If the compiler has enough information, this translation can be carried out at compile-time. Otherwise, run-time code is generated to implement the required data movement. The analysis required in both situations is described and the performance of the generated code on the Intel iPSC/2 is presented.
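
    The kind of analysis the abstract describes can be illustrated with a toy owner-computes example: for a block-distributed array and the one-point stencil a[i] = f(a[i-1]), determine which remote elements each process must receive. This is a minimal sketch under assumed conventions (ceiling-division block sizes, a hypothetical recv_sets helper), not the paper's compilation method.

    def block_owner(i, n, p):
        """Owner of global index i for an n-element array block-distributed
        over p processes (ceiling-division block size; last block short)."""
        b = -(-n // p)
        return i // b

    def recv_sets(n, p):
        """Remote elements each process needs for a[i] = f(a[i-1]) -- the
        communication a global-name-space compiler would turn into
        explicit messages (illustrative only)."""
        need = {q: [] for q in range(p)}
        for i in range(1, n):
            me, owner = block_owner(i, n, p), block_owner(i - 1, n, p)
            if owner != me:
                need[me].append((i - 1, owner))
        return need

    print(recv_sets(12, 3))  # -> {0: [], 1: [(3, 0)], 2: [(7, 1)]}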

  3. High-order space charge effects using automatic differentiation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reusch, Michael F.; Bruhwiler, David L. (Computer Accelerator Physics Conference, Williamsburg, Virginia, 1996)

    1997-02-01

    The Northrop Grumman TOPKARK code has been upgraded to Fortran 90, making use of operator overloading, so the same code can be used either to track an array of particles or to construct a Taylor map representation of the accelerator lattice. We review beam optics and beam dynamics simulations conducted with TOPKARK in the past and present a new method for modeling space charge forces to high order with automatic differentiation. This method generates an accurate, high-order, 6-D Taylor map of the phase space variable trajectories for a bunched, high-current beam. The spatial distribution is modeled as the product of a Taylor series times a Gaussian. The variables in the argument of the Gaussian are normalized to the respective second moments of the distribution. This form allows for accurate representation of a wide range of realistic distributions, including any asymmetries, and allows for rapid calculation of the space charge fields with free space boundary conditions. An example problem is presented to illustrate our approach.
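
    A one-dimensional analogue of the distribution ansatz described above (the paper's version is 6-D) can be written down directly; the coefficients and sigma below are purely illustrative.

    import numpy as np

    def taylor_gauss_density(x, coeffs, sigma):
        """1D illustration of the ansatz: density = (Taylor series in the
        normalized coordinate u = x/sigma) * Gaussian in u."""
        u = x / sigma
        return np.polynomial.polynomial.polyval(u, coeffs) * np.exp(-0.5 * u**2)

    x = np.linspace(-4, 4, 9)
    # coeffs [1, 0, 0.1]: a Gaussian with a small symmetric correction;
    # an odd-order coefficient would model an asymmetric bunch
    print(taylor_gauss_density(x, [1.0, 0.0, 0.1], sigma=1.0))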

  4. Fast Exact Search in Hamming Space With Multi-Index Hashing.

    PubMed

    Norouzi, Mohammad; Punjani, Ali; Fleet, David J

    2014-06-01

    There is growing interest in representing image data and feature descriptors using compact binary codes for fast near neighbor search. Although binary codes are motivated by their use as direct indices (addresses) into a hash table, codes longer than 32 bits are not being used as such, because doing so was thought to be ineffective. We introduce a rigorous way to build multiple hash tables on binary code substrings that enables exact k-nearest neighbor search in Hamming space. The approach is storage efficient and straightforward to implement. Theoretical analysis shows that the algorithm exhibits sub-linear run-time behavior for uniformly distributed codes. Empirical results show dramatic speedups over a linear scan baseline for datasets of up to one billion codes of 64, 128, or 256 bits.
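
    The pigeonhole observation behind multi-index hashing is that if two b-bit codes differ in at most r bits, then, after splitting the codes into m disjoint substrings, at least one substring pair differs in at most floor(r/m) bits. The sketch below implements exact r-neighbor search on that principle; the code length, substring count, and integer representation are illustrative choices, not the paper's implementation.

    from itertools import combinations
    from collections import defaultdict
    import random

    B, M = 64, 4              # code length in bits, number of substrings
    S = B // M                # substring length
    MASK = (1 << S) - 1

    def substr(code, j):
        """The j-th S-bit substring of a B-bit code stored as an int."""
        return (code >> (j * S)) & MASK

    def build_tables(codes):
        """One hash table per substring position: value -> list of ids."""
        tables = [defaultdict(list) for _ in range(M)]
        for i, c in enumerate(codes):
            for j in range(M):
                tables[j][substr(c, j)].append(i)
        return tables

    def flips(value, radius, nbits):
        """All values within Hamming distance <= radius of `value`."""
        yield value
        for r in range(1, radius + 1):
            for bits in combinations(range(nbits), r):
                v = value
                for b in bits:
                    v ^= 1 << b
                yield v

    def r_neighbors(query, codes, tables, r):
        """Exact r-neighbor search: probe each table to radius r // M,
        then verify candidates against the full Hamming distance."""
        cand = set()
        for j in range(M):
            for v in flips(substr(query, j), r // M, S):
                cand.update(tables[j].get(v, ()))
        return [i for i in cand if bin(codes[i] ^ query).count('1') <= r]

    random.seed(1)
    codes = [random.getrandbits(B) for _ in range(1000)]
    tables = build_tables(codes)
    q = codes[0] ^ 0b111      # a query 3 bits away from codes[0]
    print(r_neighbors(q, codes, tables, r=3))  # contains index 0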

  5. High-order space charge effects using automatic differentiation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reusch, M.F.; Bruhwiler, D.L.

    1997-02-01

    The Northrop Grumman TOPKARK code has been upgraded to Fortran 90, making use of operator overloading, so the same code can be used either to track an array of particles or to construct a Taylor map representation of the accelerator lattice. We review beam optics and beam dynamics simulations conducted with TOPKARK in the past and present a new method for modeling space charge forces to high order with automatic differentiation. This method generates an accurate, high-order, 6-D Taylor map of the phase space variable trajectories for a bunched, high-current beam. The spatial distribution is modeled as the product of a Taylor series times a Gaussian. The variables in the argument of the Gaussian are normalized to the respective second moments of the distribution. This form allows for accurate representation of a wide range of realistic distributions, including any asymmetries, and allows for rapid calculation of the space charge fields with free space boundary conditions. An example problem is presented to illustrate our approach. © 1997 American Institute of Physics.

  6. HINCOF-1: a Code for Hail Ingestion in Engine Inlets

    NASA Technical Reports Server (NTRS)

    Gopalaswamy, N.; Murthy, S. N. B.

    1995-01-01

    One of the major concerns during hail ingestion into an engine is the resulting amount and space- and time-wise distribution of hail at the engine face for a given inlet geometry and set of atmospheric and flight conditions. The appearance of hail in the capture streamtube is invariably random in space and time, with respect to size and momentum. During its motion through an inlet, a hailstone undergoes several processes: impact with other hailstones and with material surfaces of the inlet and spinner; rolling and rebound following impact; heat and mass transfer; phase change; and shattering, the latter three due to friction and impact. Taking all of these factors into account, a numerical code, designated HINCOF-1, has been developed for determining the motion of hailstones from the atmosphere, through an inlet, and up to the engine face. The numerical procedure is based on the Monte Carlo method. The report presents a description of the code, along with several illustrative cases. The code can be utilized to relate the spinner geometry - conical or, more effectively, elliptical - to the possible diversion of hail at the engine face into the bypass stream. The code is also useful for assessing the influence of various hail characteristics on the ingestion and distribution of hailstones over the engine face.

  7. Fast QC-LDPC code for free space optical communication

    NASA Astrophysics Data System (ADS)

    Wang, Jin; Zhang, Qi; Udeh, Chinonso Paschal; Wu, Rangzhong

    2017-02-01

    Free Space Optical (FSO) communication systems use the atmosphere as a propagation medium, so atmospheric turbulence introduces multiplicative noise correlated with signal intensity. In order to suppress the signal fading induced by this multiplicative noise, we propose a fast Quasi-Cyclic (QC) Low-Density Parity-Check (LDPC) code for FSO communication systems. As linear block codes based on sparse matrices, QC-LDPC codes perform extremely close to the Shannon limit. Studies of LDPC codes in FSO communications have so far focused mainly on Gaussian and Rayleigh channels; a design for the atmospheric turbulence channel, which is neither Gaussian nor Rayleigh, is closer to the practical situation. Based on the characteristics of the atmospheric channel, modeled by log-normal and K distributions, we designed a special QC-LDPC code and derived its log-likelihood ratio (LLR). An irregular, rate-variable QC-LDPC code for fast coding is proposed in this paper. The proposed code achieves the excellent performance typical of LDPC codes, with high efficiency at low rates, stability at high rates, and few decoding iterations. Belief-propagation (BP) decoding results show that the bit error rate (BER) drops markedly as the signal-to-noise ratio (SNR) increases; the post-decoding BER continues to fall with increasing SNR and exhibits no error floor. LDPC channel coding can therefore effectively improve the performance of FSO systems.

  8. Operational advances in ring current modeling using RAM-SCB

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Welling, Daniel T; Jordanova, Vania K; Zaharia, Sorin G

    The Ring current Atmosphere interaction Model with Self-Consistently calculated 3D Magnetic field (RAM-SCB) combines a kinetic model of the ring current with a force-balanced model of the magnetospheric magnetic field to create an inner magnetospheric model that is magnetically self consistent. RAM-SCB produces a wealth of outputs that are valuable to space weather applications. For example, the anisotropic particle distribution of the KeV-energy population calculated by the code is key for predicting surface charging on spacecraft. Furthermore, radiation belt codes stand to benefit substantially from RAM-SCB calculated magnetic field values and plasma wave growth rates - both important for determining the evolution of relativistic electron populations. RAM-SCB is undergoing development to bring these benefits to the space weather community. Data-model validation efforts are underway to assess the performance of the system. 'Virtual Satellite' capability has been added to yield satellite-specific particle distribution and magnetic field output. The code's outer boundary is being expanded to 10 Earth Radii to encompass previously neglected geosynchronous orbits and allow the code to be driven completely by either empirical or first-principles based inputs. These advances are culminating in a new, real-time version of the code, rtRAM-SCB, that can monitor inner magnetosphere conditions on both a global and spacecraft-specific level. This paper summarizes these new features as well as the benefits they provide the space weather community.

  9. On the Application of Time-Reversed Space-Time Block Code to Aeronautical Telemetry

    DTIC Science & Technology

    2014-06-01

    Index terms from the report: shaped-offset quadrature phase-shift keying (SOQPSK); bit error rate (BER); orthogonal frequency division multiplexing (OFDM); generalized time-reversed space-time block codes (GTR-STBC); the Alamouti code; finite-length MIMO decision feedback equalization for space-time block-coded signals over multipath-fading channels.

  10. Time Synchronization and Distribution Mechanisms for Space Networks

    NASA Technical Reports Server (NTRS)

    Woo, Simon S.; Gao, Jay L.; Clare, Loren P.; Mills, David L.

    2011-01-01

    This work discusses research on the problems of synchronizing and distributing time information between spacecraft based on the Network Time Protocol (NTP), a standard time synchronization protocol widely used in terrestrial networks. The Proximity-1 Space Link Interleaved Time Synchronization (PITS) protocol was designed and developed for synchronizing spacecraft in proximity, i.e., separated by less than 100,000 km. A particular application is synchronization between a Mars orbiter and rover; lunar scenarios as well as outer-planet deep-space mothership-probe missions may also apply. The spacecraft with the more accurate time information functions as the time-server, and the other functions as the time-client. PITS can be easily integrated into the CCSDS Proximity-1 Space Link Protocol with minor modifications. In particular, PITS can take advantage of the timestamping that the underlying link layer provides for accurate time offset calculation. The PITS algorithm achieves time synchronization with eight consecutive space network time packet exchanges between two spacecraft. PITS can detect and avoid possible errors from receiving duplicate and out-of-order packets by comparing them with the current state variables and timestamps. Further, PITS is able to detect error events and autonomously recover from unexpected events that can occur during the time synchronization and distribution process, achieving an additional level of protocol protection on top of CRC or error correction codes. PITS is a lightweight and efficient protocol, eliminating the need for explicit frame sequence numbers and long buffer storage. PITS is capable of providing time synchronization and distribution services in the more general setting where multiple entities need to achieve time synchronization over a single point-to-point link.
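
    PITS builds on NTP, whose basic offset and delay computation from a single timestamped exchange is standard; a minimal sketch of that NTP calculation (with hypothetical timestamps, not the PITS interleaved exchange itself) is:

    def ntp_offset_delay(t1, t2, t3, t4):
        """Clock offset and round-trip delay from one exchange: t1 client
        send, t2 server receive, t3 server send, t4 client receive."""
        offset = ((t2 - t1) + (t3 - t4)) / 2.0
        delay = (t4 - t1) - (t3 - t2)
        return offset, delay

    # Hypothetical exchange: the client clock runs 0.5 s behind the server
    # and the one-way light time is 0.2 s; the formula recovers both.
    print(ntp_offset_delay(10.0, 10.7, 10.8, 10.5))  # -> (0.5, 0.4)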

  11. Modular space vehicle boards, control software, reprogramming, and failure recovery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Judd, Stephen; Dallmann, Nicholas; McCabe, Kevin

    A space vehicle may have a modular board configuration in which some or all boards share common components and a common operating system. Each modular board may have its own dedicated processing, and processing loads may be distributed. The space vehicle may be reprogrammable, and may be launched without code that enables all functionality and/or components. Code errors may be detected, and the space vehicle may be reset to a working code version to prevent system failure.

  12. A 2D electrostatic PIC code for the Mark III Hypercube

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ferraro, R.D.; Liewer, P.C.; Decyk, V.K.

    We have implemented a 2D electrostatic plasma particle-in-cell (PIC) simulation code on the Caltech/JPL Mark IIIfp Hypercube. The code simulates plasma effects by evolving in time the trajectories of thousands to millions of charged particles subject to their self-consistent fields. Each particle's position and velocity is advanced in time using a leapfrog method for integrating Newton's equations of motion in electric and magnetic fields. The electric field due to these moving charged particles is calculated on a spatial grid at each time step by solving Poisson's equation in Fourier space. These two tasks represent the largest part of the computation. To obtain efficient operation on a distributed memory parallel computer, we are using the General Concurrent PIC (GCPIC) algorithm previously developed for a 1D parallel PIC code.
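
    A serial toy version of the two dominant tasks named above (particle push and FFT-based field solve) can be written compactly; the sketch below uses nearest-grid-point deposit, normalized units, and illustrative parameters, and omits the GCPIC domain decomposition entirely.

    import numpy as np

    # Minimal serial 2D electrostatic PIC sketch: NGP charge deposit,
    # Poisson solve in Fourier space, leapfrog push (illustrative sizes).
    NG, NP = 32, 4096            # grid points per side, particle count
    L = 2 * np.pi                # periodic box side length
    dx, dt = L / NG, 0.1
    q, m = -1.0, 1.0             # electron charge and mass (normalized)

    rng = np.random.default_rng(0)
    pos = rng.uniform(0, L, size=(NP, 2))
    vel = rng.normal(0.0, 1.0, size=(NP, 2))

    k = 2 * np.pi * np.fft.fftfreq(NG, d=dx)
    KX, KY = np.meshgrid(k, k, indexing='ij')
    K2 = KX**2 + KY**2
    K2[0, 0] = 1.0               # avoid 0/0; the k=0 mode is zeroed below

    for step in range(100):
        # deposit charge at the nearest grid point; ions form a fixed
        # neutralizing background (subtracting the mean enforces this)
        ix = (pos[:, 0] / dx).astype(int) % NG
        iy = (pos[:, 1] / dx).astype(int) % NG
        counts = np.zeros((NG, NG))
        np.add.at(counts, (ix, iy), 1.0)
        rho = q * (counts - counts.mean()) / dx**2

        # solve Poisson's equation in Fourier space: phi_k = rho_k / k^2
        phik = np.fft.fft2(rho) / K2
        phik[0, 0] = 0.0
        Ex = np.fft.ifft2(-1j * KX * phik).real   # E = -grad(phi)
        Ey = np.fft.ifft2(-1j * KY * phik).real

        # gather the field at each particle's cell and leapfrog-push
        vel[:, 0] += (q / m) * Ex[ix, iy] * dt
        vel[:, 1] += (q / m) * Ey[ix, iy] * dt
        pos = (pos + vel * dt) % L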

  13. Additional extensions to the NASCAP computer code, volume 1

    NASA Technical Reports Server (NTRS)

    Mandell, M. J.; Katz, I.; Stannard, P. R.

    1981-01-01

    Extensions and revisions to a computer code that comprehensively analyzes problems of spacecraft charging (NASCAP) are documented. Using a fully three-dimensional approach, it can accurately predict spacecraft potentials under a variety of conditions. Among the extensions are a multiple electron/ion gun test tank capability and the ability to model anisotropic and time-dependent space environments. Also documented are a greatly extended MATCHG program and the preliminary version of NASCAP/LEO. The interactive MATCHG code was developed into an extremely powerful tool for the study of material-environment interactions. NASCAP/LEO, a three-dimensional code to study current collection under conditions of high voltages and short Debye lengths, was distributed for preliminary testing.

  14. Novel space-time trellis codes for free-space optical communications using transmit laser selection.

    PubMed

    García-Zambrana, Antonio; Boluda-Ruiz, Rubén; Castillo-Vázquez, Carmen; Castillo-Vázquez, Beatriz

    2015-09-21

    In this paper, the deployment of novel space-time trellis codes (STTCs) with transmit laser selection (TLS) for free-space optical (FSO) communication systems using intensity modulation and direct detection (IM/DD) over atmospheric turbulence and misalignment fading channels is presented. Combining TLS and STTC with rate 1 bit/(s · Hz), a new code design criterion based on the use of the largest order statistics is here proposed for multiple-input/single-output (MISO) FSO systems in order to improve the diversity order gain by properly choosing the transmit lasers out of the L available lasers. Based on a pairwise error probability (PEP) analysis, closed-form asymptotic bit error-rate (BER) expressions in the range from low to high signal-to-noise ratio (SNR) are derived when the irradiance of the transmitted optical beam is susceptible to moderate-to-strong turbulence conditions, following a gamma-gamma (GG) distribution, and to pointing error effects, following a misalignment fading model in which the effects of beam width, detector size and jitter variance are considered. The obtained results show diversity orders of 2L and 3L when simple two-state and four-state STTCs are considered, respectively. Simulation results further confirm the analytical results.

  15. Numerical investigations of low-density nozzle flow by solving the Boltzmann equation

    NASA Technical Reports Server (NTRS)

    Deng, Zheng-Tao; Liaw, Goang-Shin; Chou, Lynn Chen

    1995-01-01

    A two-dimensional finite-difference code to solve the BGK-Boltzmann equation has been developed. The solution procedure consists of three steps: (1) transforming the BGK-Boltzmann equation into two simultaneous partial differential equations by taking moments of the distribution function with respect to the molecular velocity u(sub z), with weighting factors 1 and u(sub z)(sup 2); (2) solving the transformed equations in physical space, for a given discrete ordinate, using the time-marching technique with four-stage Runge-Kutta time integration, where Roe's second-order upwind difference scheme discretizes the convective terms and the collision terms are treated as source terms; and (3) using the newly calculated distribution functions at each point in physical space to calculate the macroscopic flow parameters by the modified Gaussian quadrature formula. Steps 2 and 3 are repeated until the convergence criterion is reached. A low-density nozzle flow field has been calculated with this newly developed code. The BGK-Boltzmann solution and experimental data show excellent agreement, demonstrating that numerical solutions of the BGK-Boltzmann equation are ready to be experimentally validated.
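
    The moment-taking in step (1) and the quadrature in step (3) can be illustrated on a toy 1D Maxwellian; Gauss-Hermite quadrature here stands in for the paper's modified Gaussian quadrature, and all values are illustrative.

    import numpy as np

    # Moments of a distribution function with weighting factors 1 and
    # uz^2, evaluated by Gauss-Hermite quadrature. f is a unit Maxwellian
    # purely for illustration: f(u) = exp(-u^2)/sqrt(pi).
    nodes, weights = np.polynomial.hermite.hermgauss(16)

    # Gauss-Hermite computes integral exp(-x^2) g(x) dx ~ sum w_i g(x_i),
    # so the Maxwellian's exp(-u^2) factor is absorbed by the rule.
    density_moment = np.sum(weights) / np.sqrt(np.pi)          # -> 1.0
    uz2_moment = np.sum(weights * nodes**2) / np.sqrt(np.pi)   # -> 0.5
    print(density_moment, uz2_moment)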

  16. Galactic Cosmic Ray Event-Based Risk Model (GERM) Code

    NASA Technical Reports Server (NTRS)

    Cucinotta, Francis A.; Plante, Ianik; Ponomarev, Artem L.; Kim, Myung-Hee Y.

    2013-01-01

    This software describes the transport and energy deposition of galactic cosmic rays passing through astronaut tissues during space travel, or of heavy ion beams in patients in cancer therapy. Space radiation risk is a probability distribution, and time-dependent biological events must be accounted for in the physical description of space radiation transport in tissues and cells. A stochastic model can calculate the probability density directly, without unverified assumptions about the shape of the probability density function. The prior art of transport codes calculates the average flux and dose of particles behind spacecraft and tissue shielding. Because of the signaling times for activation and relaxation in the cell and tissue, a transport code must describe the temporal and microspatial density functions needed to correlate DNA and oxidative damage with non-targeted effects such as signaling and bystander effects; these are ignored by, or impossible in, the prior art. The GERM code provides scientists with data interpretation of experiments; modeling of the beam line, shielding of target samples, and sample holders; and estimation of basic physical and biological outputs of their experiments. For mono-energetic ion beams, basic physical and biological properties are calculated for a selected ion type, such as kinetic energy, mass, charge number, absorbed dose, or fluence. Evaluated quantities are linear energy transfer (LET), range (R), absorption and fragmentation cross-sections, and the probability of nuclear interactions after 1 or 5 cm of water equivalent material. In addition, a set of biophysical properties is evaluated, such as the Poisson distribution of particle hits for a specified cellular area, cell survival curves, and DNA damage yields per cell. The GERM code also calculates the radiation transport of the beam line for either a fixed number of user-specified depths or at multiple positions along the Bragg curve of the particle in a selected material. The GERM code makes numerical estimates of basic physical and biophysical quantities of high-energy protons and heavy ions that have been studied at the NASA Space Radiation Laboratory (NSRL) for the purpose of simulating space radiation biological effects. In the first option, properties of monoenergetic beams are treated. In the second option, the transport of beams in different materials is treated; similar biophysical properties as in the first option are evaluated for the primary ion and its secondary particles, along with additional properties related to the nuclear fragmentation of the beam. The GERM code is a computationally efficient Monte-Carlo heavy-ion-beam model. It includes accurate models of LET, range, residual energy, and straggling, and the quantum multiple scattering fragmentation (QMSGRG) nuclear database.
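
    One of the quoted outputs, the Poisson distribution of traversals for a specified cellular area, is simple to reproduce in principle; the fluence and area below are hypothetical, and the units merely need to agree.

    from math import exp, factorial

    def hit_probabilities(fluence, area, nmax=5):
        """Probability of n particle traversals of a cell of the given
        area at the given fluence, assuming Poisson statistics as in the
        GERM outputs (e.g. fluence in particles/um^2, area in um^2)."""
        mean = fluence * area
        return [mean**n * exp(-mean) / factorial(n) for n in range(nmax + 1)]

    # e.g. 0.01 ions/um^2 over a 100 um^2 nucleus -> mean of 1 hit per cell
    print(hit_probabilities(0.01, 100.0))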

  17. Operational Advances in Ring Current Modeling Using RAM-SCB

    NASA Astrophysics Data System (ADS)

    Morley, S.; Welling, D. T.; Zaharia, S. G.; Jordanova, V. K.

    2010-12-01

    The Ring current Atmosphere interaction Model with Self-Consistently calculated 3D Magnetic field (RAM-SCB) combines a kinetic model of the ring current with a force-balanced model of the magnetospheric magnetic field to create an inner magnetospheric model that is magnetically self consistent. RAM-SCB produces a wealth of outputs that are valuable to space weather applications. For example, the anisotropic particle distribution of the KeV-energy population calculated by the code is key for predicting surface charging on spacecraft. Furthermore, radiation belt codes stand to benefit substantially from RAM-SCB calculated magnetic field values and plasma wave growth rates - both important for determining the evolution of relativistic electron populations. RAM-SCB is undergoing development to bring these benefits to the space weather community. Data-model validation efforts are underway to assess the performance of the system. “Virtual Satellite” capability has been added to yield satellite-specific particle distribution and magnetic field output. The code’s outer boundary is being expanded to 10 Earth Radii to encompass previously neglected geosynchronous orbits and allow the code to be driven completely by either empirical or first-principles based inputs. These advances are culminating in a new, real-time version of the code, rtRAM-SCB, that can monitor inner magnetosphere conditions on both a global and spacecraft-specific level. This paper summarizes these new features as well as the benefits they provide the space weather community.

  18. An Improved Neutron Transport Algorithm for HZETRN2006

    NASA Astrophysics Data System (ADS)

    Slaba, Tony

    NASA's new space exploration initiative includes plans for a long-term human presence in space, thereby placing new emphasis on space radiation analyses. In particular, a systematic effort of verification, validation and uncertainty quantification of the tools commonly used for radiation analysis for vehicle design and mission planning has begun. In this paper, the numerical error associated with energy discretization in HZETRN2006 is addressed; large errors in the low-energy portion of the neutron fluence spectrum are produced by a numerical truncation error in the transport algorithm. It is shown that the truncation error results from the narrow energy domain of the neutron elastic spectral distributions, and that an extremely fine energy grid is required to adequately resolve the problem under the current formulation. Since adding a sufficient number of energy points would render the code computationally inefficient, we revisit the light-ion transport theory developed for HZETRN2006 and focus on neutron elastic interactions. The new approach integrates numerically with adequate resolution in the energy domain without affecting the run-time of the code, and it is easily incorporated into the current code. Efforts were also made to optimize the computational efficiency of the light-ion propagator; a brief discussion of these efforts is given along with run-time comparisons between the original and updated codes. Convergence testing is then completed by running the code for various environments and shielding materials with many different energy grids to ensure stability of the proposed method.

  19. Robust Modulo Remaindering and Applications in Radar and Sensor Signal Processing

    DTIC Science & Technology

    2015-08-27

    Indexed citation fragments: ...Chinese Remainder Theorem in FDD Systems, Science China -- Information Sciences, vol. 55, no. 7, pp. 1605-1616, July 2012; Y. Liu, X.-G. Xia, and H. L. Zhang, Distributed Space-Time Coding for Full-Duplex Asynchronous...

  20. BODYFIT-1FE: a computer code for three-dimensional steady-state/transient single-phase rod-bundle thermal-hydraulic analysis. Draft report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, B.C.J.; Sha, W.T.; Doria, M.L.

    1980-11-01

    The governing equations, i.e., conservation equations for mass, momentum, and energy, are solved as a boundary-value problem in space and an initial-value problem in time. The BODYFIT-1FE code uses the technique of boundary-fitted coordinate systems, in which all the physical boundaries are transformed to be coincident with constant coordinate lines in the transformed space. By using this technique, one can prescribe boundary conditions accurately without interpolation. The transformed governing equations in terms of the boundary-fitted coordinates are then solved using an implicit cell-by-cell procedure with a choice of either central or upwind convective derivatives. It is a true benchmark rod-bundle code without invoking any assumptions in the case of laminar flow. However, for turbulent flow, some empiricism must be employed due to the closure problem of turbulence modeling. The detailed velocity and temperature distributions calculated by the code can be used to benchmark and calibrate empirical coefficients employed in subchannel codes and porous-medium analyses.

  1. Simulating Coupling Complexity in Space Plasmas: First Results from a new code

    NASA Astrophysics Data System (ADS)

    Kryukov, I.; Zank, G. P.; Pogorelov, N. V.; Raeder, J.; Ciardo, G.; Florinski, V. A.; Heerikhuisen, J.; Li, G.; Petrini, F.; Shematovich, V. I.; Winske, D.; Shaikh, D.; Webb, G. M.; Yee, H. M.

    2005-12-01

    The development of codes that embrace 'coupling complexity' via the self-consistent incorporation of multiple physical scales and multiple physical processes in models has been identified by the NRC Decadal Survey in Solar and Space Physics as a crucial necessary development in simulation/modeling technology for the coming decade. The National Science Foundation, through its Information Technology Research (ITR) Program, is supporting our efforts to develop a new class of computational code for plasmas and neutral gases that integrates multiple scales and multiple physical processes and descriptions. We are developing a highly modular, parallelized, scalable code that incorporates multiple scales by synthesizing 3 simulation technologies: 1) Computational fluid dynamics (hydrodynamics or magneto-hydrodynamics-MHD) for the large-scale plasma; 2) direct Monte Carlo simulation of atoms/neutral gas, and 3) transport code solvers to model highly energetic particle distributions. We are constructing the code so that a fourth simulation technology, hybrid simulations for microscale structures and particle distributions, can be incorporated in future work, but for the present, this aspect will be addressed at a test-particle level. This synthesis will provide a computational tool that will advance our understanding of the physics of neutral and charged gases enormously. Besides making major advances in basic plasma physics and neutral gas problems, this project will address 3 Grand Challenge space physics problems that reflect our research interests: 1) To develop a temporal global heliospheric model which includes the interaction of solar and interstellar plasma with neutral populations (hydrogen, helium, etc., and dust), test-particle kinetic pickup ion acceleration at the termination shock, anomalous cosmic ray production, interaction with galactic cosmic rays, while incorporating the time variability of the solar wind and the solar cycle. 2) To develop a coronal mass ejection and interplanetary shock propagation model for the inner and outer heliosphere, including, at a test-particle level, wave-particle interactions and particle acceleration at traveling shock waves and compression regions. 3) To develop an advanced Geospace General Circulation Model (GGCM) capable of realistically modeling space weather events, in particular the interaction with CMEs and geomagnetic storms. Furthermore, by implementing scalable run-time supports and sophisticated off- and on-line prediction algorithms, we anticipate important advances in the development of automatic and intelligent system software to optimize a wide variety of 'embedded' computations on parallel computers. Finally, public domain MHD and hydrodynamic codes had a transforming effect on space and astrophysics. We expect that our new generation, open source, public domain multi-scale code will have a similar transformational effect in a variety of disciplines, opening up new classes of problems to physicists and engineers alike.

  2. Transverse Phase Space Reconstruction and Emittance Measurement of Intense Electron Beams using a Tomography Technique

    NASA Astrophysics Data System (ADS)

    Stratakis, D.; Kishek, R. A.; Li, H.; Bernal, S.; Walter, M.; Tobin, J.; Quinn, B.; Reiser, M.; O'Shea, P. G.

    2006-11-01

    Tomography is the technique of reconstructing an image from its projections. It is widely used in the medical community to observe the interior of the human body by processing multiple x-ray images taken at different angles. A few pioneering researchers have adapted tomography to reconstruct detailed phase space maps of charged particle beams. Some questions arise regarding the limitations of the tomography technique for space-charge-dominated beams. For instance, is the linear space charge force a valid approximation? Does tomography equally reproduce phase space for complex, experimentally observed, initial particle distributions? Does tomography make any assumptions about the initial distribution? This study explores the use of accurate modeling with the particle-in-cell code WARP to address these questions, using a wide range of different initial distributions in the code. The study also includes a number of experimental results on tomographic phase space mapping performed on the University of Maryland Electron Ring (UMER).

  3. Self-consistent simulation of radio frequency multipactor on micro-grooved dielectric surface

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cai, Libing; Wang, Jianguo (Northwest Institute of Nuclear Technology, Xi'an, Shaanxi 710024; E-mail: wanguiuc@mail.xjtu.edu.cn)

    2015-02-07

    The multipactor plays a key role in the surface breakdown of the feed dielectric window irradiated by high power microwave. To study the suppression of multipactor, a 2D electrostatic PIC-MCC simulation code was developed. The space charge field, including the field of the surface deposited charge and of the multipactor electrons, is obtained by solving the 2D Poisson equation at each time step. The simulation is therefore self-consistent and does not require presetting a fixed space charge field. Using this code, a self-consistent simulation of RF multipactor on a periodic micro-grooved dielectric surface is realized. The 2D spatial distributions of the multipactor electrons and of the space charge field are presented. The simulation results show that only half of the groove slopes sustain multipactor discharge when the slope angle exceeds a certain value, and that the grooves have a pronounced suppression effect on the multipactor.

  4. Unitals and ovals of symmetric block designs in LDPC and space-time coding

    NASA Astrophysics Data System (ADS)

    Andriamanalimanana, Bruno R.

    2004-08-01

    An approach to the design of LDPC (low density parity check) error-correction and space-time modulation codes involves starting with known mathematical and combinatorial structures, and deriving code properties from structure properties. This paper reports on an investigation of unital and oval configurations within generic symmetric combinatorial designs, not just classical projective planes, as the underlying structure for classes of space-time LDPC outer codes. Of particular interest are the encoding and iterative (sum-product) decoding gains that these codes may provide. Various small-length cases have been numerically implemented in Java and Matlab for a number of channel models.

  5. The impacts of marijuana dispensary density and neighborhood ecology on marijuana abuse and dependence

    PubMed Central

    Mair, Christina; Freisthler, Bridget; Ponicki, William R.; Gaidus, Andrew

    2015-01-01

    Background: As an increasing number of states liberalize cannabis use and develop laws and local policies, it is essential to better understand the impacts of neighborhood ecology and marijuana dispensary density on marijuana use, abuse, and dependence. We investigated associations between marijuana abuse/dependence hospitalizations and community demographic and environmental conditions from 2001-2012 in California, as well as cross-sectional associations between local and adjacent marijuana dispensary densities and marijuana hospitalizations. Methods: We analyzed panel population data relating hospitalizations coded for marijuana abuse or dependence and assigned to residential ZIP codes in California from 2001 through 2012 (20,219 space-time units) to ZIP code demographic and ecological characteristics. Bayesian space-time misalignment models were used to account for spatial variations in geographic unit definitions over time, while also accounting for spatial autocorrelation using conditional autoregressive priors. We also analyzed cross-sectional associations between marijuana abuse/dependence and the density of dispensaries in local and spatially adjacent ZIP codes in 2012. Results: An additional one dispensary per square mile in a ZIP code was cross-sectionally associated with a 6.8% increase in the number of marijuana hospitalizations (95% credible interval 1.033, 1.105) with a marijuana abuse/dependence code. Other local characteristics, such as the median household income and age and racial/ethnic distributions, were associated with marijuana hospitalizations in cross-sectional and panel analyses. Conclusions: Prevention and intervention programs for marijuana abuse and dependence may be particularly essential in areas of concentrated disadvantage. Policy makers may want to consider regulations that limit the density of dispensaries. PMID:26154479

  6. Numerical simulation of electromagnetic waves in Schwarzschild space-time by finite difference time domain method and Green function method

    NASA Astrophysics Data System (ADS)

    Jia, Shouqing; La, Dongsheng; Ma, Xuelian

    2018-04-01

    The finite difference time domain (FDTD) algorithm and Green function algorithm are implemented into the numerical simulation of electromagnetic waves in Schwarzschild space-time. FDTD method in curved space-time is developed by filling the flat space-time with an equivalent medium. Green function in curved space-time is obtained by solving transport equations. Simulation results validate both the FDTD code and Green function code. The methods developed in this paper offer a tool to solve electromagnetic scattering problems.
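
    For the equivalent-medium idea mentioned above, a commonly used form assigns flat space a radially varying refractive index derived from the metric in isotropic coordinates; the sketch below uses that standard textbook form, which may differ in detail from the paper's construction.

    from math import inf

    def schwarzschild_index(rho, rs):
        """Equivalent-medium refractive index for the Schwarzschild metric
        in isotropic coordinates: n = (1 + rs/(4*rho))**3 / (1 - rs/(4*rho)),
        rs being the Schwarzschild radius (standard form; illustrative)."""
        a = rs / (4.0 * rho)
        return inf if a >= 1.0 else (1.0 + a) ** 3 / (1.0 - a)

    # index profile approaching the horizon (rho = rs/4 in these coordinates)
    for rho in (100.0, 10.0, 1.0, 0.5):
        print(rho, schwarzschild_index(rho, rs=1.0))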

  7. TEAMBLOCKS: HYBRID ABSTRACTIONS FOR PROVABLE MULTI-AGENT AUTONOMY

    DTIC Science & Technology

    2017-07-28

    Excerpt fragments: a Raspberry Pi can easily satisfy control loop periods on the order of 10⁻³ s, so we assume that the time to execute a piece of control code, ∆t... The state space of the aircraft is X = R³ × SO(3) × R³ × R³; states consist of p_I ∈ R³, the... the tbdemo_ghaexecution script uses this operation to make a feedback system out of the product of the linear system and the PI controller.

  8. Comparison of heavy-ion transport simulations: Collision integral in a box

    NASA Astrophysics Data System (ADS)

    Zhang, Ying-Xun; Wang, Yong-Jia; Colonna, Maria; Danielewicz, Pawel; Ono, Akira; Tsang, Manyee Betty; Wolter, Hermann; Xu, Jun; Chen, Lie-Wen; Cozma, Dan; Feng, Zhao-Qing; Das Gupta, Subal; Ikeno, Natsumi; Ko, Che-Ming; Li, Bao-An; Li, Qing-Feng; Li, Zhu-Xia; Mallik, Swagata; Nara, Yasushi; Ogawa, Tatsuhiko; Ohnishi, Akira; Oliinychenko, Dmytro; Papa, Massimo; Petersen, Hannah; Su, Jun; Song, Taesoo; Weil, Janus; Wang, Ning; Zhang, Feng-Shou; Zhang, Zhen

    2018-03-01

    Simulations by transport codes are indispensable to extract valuable physical information from heavy-ion collisions. In order to understand the origins of discrepancies among different widely used transport codes, we compare 15 such codes under controlled conditions of a system confined to a box with periodic boundary, initialized with Fermi-Dirac distributions at saturation density and temperatures of either 0 or 5 MeV. In such calculations, one is able to check separately the different ingredients of a transport code. In this second publication of the code evaluation project, we only consider the two-body collision term; i.e., we perform cascade calculations. When the Pauli blocking is artificially suppressed, the collision rates are found to be consistent for most codes (to within 1% or better) with analytical results, or completely controlled results of a basic cascade code. In order to reach that goal, it was necessary to eliminate correlations within the same pair of colliding particles that can be present depending on the adopted collision prescription. In calculations with active Pauli blocking, the blocking probability was found to deviate from the expected reference values. The reason is found in substantial phase-space fluctuations and smearing tied to numerical algorithms and model assumptions in the representation of phase space. This results in the reduction of the blocking probability in most transport codes, so that the simulated system gradually evolves away from the Fermi-Dirac toward a Boltzmann distribution. Since the numerical fluctuations are weaker in the Boltzmann-Uehling-Uhlenbeck codes, the Fermi-Dirac statistics is maintained there for a longer time than in the quantum molecular dynamics codes. As a result of this investigation, we are able to make judgements about the most effective strategies in transport simulations for determining the collision probabilities and the Pauli blocking. Investigation in a similar vein of other ingredients in transport calculations, like the mean-field propagation or the production of nucleon resonances and mesons, will be discussed in future publications.
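
    The Pauli blocking being compared can be stated compactly: an attempted collision into final states with occupancies f1' and f2' is accepted with probability (1 - f1')(1 - f2'), and the paper traces the code-to-code spread to fluctuations in the sampled occupancies. A minimal sketch with illustrative (not the paper's) parameters:

    import numpy as np

    def pauli_blocking_factor(f1p, f2p):
        """Uehling-Uhlenbeck acceptance probability for a collision whose
        final states have occupancies f1p and f2p."""
        return (1.0 - f1p) * (1.0 - f2p)

    def fermi_dirac(e, mu, T):
        """Reference occupancy at energy e, chemical potential mu,
        temperature T (all in MeV; illustrative values)."""
        return 1.0 / (1.0 + np.exp((e - mu) / T))

    # near the Fermi surface at T = 5 MeV, blocking is strong:
    f1, f2 = fermi_dirac(30.0, 36.0, 5.0), fermi_dirac(34.0, 36.0, 5.0)
    print(pauli_blocking_factor(f1, f2))  # acceptance well below 1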

  9. Fully kinetic 3D simulations of the Hermean magnetosphere under realistic conditions: a new approach

    NASA Astrophysics Data System (ADS)

    Amaya, Jorge; Gonzalez-Herrero, Diego; Lembège, Bertrand; Lapenta, Giovanni

    2017-04-01

    Simulations of the magnetosphere of planets are usually performed using the MHD and the hybrid approaches. However, these two methods still rely on approximations for the computation of the pressure tensor, and require the neutrality of the plasma at every point of the domain by construction. These approximations undermine the role of electrons in the emergence of plasma features in the magnetosphere of planets. The high mobility of electrons, their characteristic time and space scales, and the lack of perfect neutrality are the source of many observed phenomena in magnetospheres, including the turbulence energy cascade, magnetic reconnection, particle acceleration in the shock front and the formation of current systems around the magnetosphere. Fully kinetic codes are extremely demanding of computing time, and have been unable to perform simulations of the full magnetosphere at the real scales of a planet with realistic plasma conditions. This is caused by two main reasons: 1) explicit codes must resolve the electron scales, limiting the time and space discretisation, and 2) current versions of semi-implicit codes are unstable for cell sizes larger than a few Debye lengths. In this work we present new simulations performed with ECsim, an Energy Conserving semi-implicit method [1], that can overcome these two barriers. We compare the solutions obtained with ECsim with the solutions obtained by the classic semi-implicit code iPic3D [2]. The new simulations with ECsim demand a larger computational effort, but the time and space discretisations are larger than those in iPic3D, allowing for a faster simulation time of the full planetary environment. The new code, ECsim, can reach a resolution allowing the capture of significant large scale physics without losing kinetic electron information, such as wave-electron interaction and non-Maxwellian electron velocity distributions [3]. The code is able to better capture the thickness of the different boundary layers of the magnetosphere of Mercury. Electron kinetics are consistent with the spatial and temporal scale resolutions. Simulations are compared with measurements from the MESSENGER spacecraft, showing a better fit when compared against the classic fully kinetic code iPic3D. These results show that the new generation of Energy Conserving semi-implicit codes can be used for an accurate analysis and interpretation of particle data from magnetospheric missions like BepiColombo and MMS, including electron velocity distributions and electron temperature anisotropies. [1] Lapenta, G. (2016). Exactly Energy Conserving Implicit Moment Particle in Cell Formulation. arXiv preprint arXiv:1602.06326. [2] Markidis, S., & Lapenta, G. (2010). Multi-scale simulations of plasma with iPIC3D. Mathematics and Computers in Simulation, 80(7), 1509-1519. [3] Lapenta, G., Gonzalez-Herrero, D., & Boella, E. (2016). Multiple scale kinetic simulations with the energy conserving semi implicit particle in cell (PIC) method. arXiv preprint arXiv:1612.08289.

  10. Area, speed and power measurements of FPGA-based complex orthogonal space-time block code channel encoders

    NASA Astrophysics Data System (ADS)

    Passas, Georgios; Freear, Steven; Fawcett, Darren

    2010-01-01

    Space-time coding (STC) is an important milestone in modern wireless communications. In this technique, multiple copies of the same signal are transmitted through different antennas (space) and different symbol periods (time) to improve the robustness of a wireless system by increasing its diversity gain. STCs are channel coding algorithms that can be readily implemented on a field programmable gate array (FPGA) device. This work provides figures for the amount of required FPGA hardware resources, the speed at which the algorithms can operate, and the power consumption requirements of a space-time block code (STBC) encoder. Seven very high-speed integrated circuit hardware description language (VHDL) encoder designs have been coded, synthesised and tested. Each design realises a complex orthogonal space-time block code with a different transmission matrix. All VHDL designs are parameterisable in terms of sample precision. Precisions ranging from 4 bits to 32 bits have been synthesised. Alamouti's STBC encoder design [Alamouti, S.M. (1998), 'A Simple Transmit Diversity Technique for Wireless Communications', IEEE Journal on Selected Areas in Communications, 16(8):1451-1458] proved to be the best trade-off, since it is on average 3.2 times smaller, 1.5 times faster and requires slightly less power than the next best trade-off in the comparison, which is a 3/4-rate full-diversity 3Tx-antenna STBC.
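
    For reference, the Alamouti transmission matrix realized by the winning design is small enough to sketch directly; the QPSK symbol values below are an illustrative example, not the paper's test vectors.

    import numpy as np

    def alamouti_encode(symbols):
        """Map complex symbols onto the 2x2 Alamouti matrix: over two
        symbol periods, antenna 1 sends (s1, -conj(s2)) and antenna 2
        sends (s2, conj(s1)). Returns an array of shape (2, T)."""
        s = np.asarray(symbols, dtype=complex).reshape(-1, 2)
        out = np.empty((2, 2 * s.shape[0]), dtype=complex)
        out[0, 0::2], out[1, 0::2] = s[:, 0], s[:, 1]                    # period 1
        out[0, 1::2], out[1, 1::2] = -np.conj(s[:, 1]), np.conj(s[:, 0]) # period 2
        return out

    qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
    print(alamouti_encode(qpsk))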

  11. A distributed code for color in natural scenes derived from center-surround filtered cone signals

    PubMed Central

    Kellner, Christian J.; Wachtler, Thomas

    2013-01-01

    In the retina of trichromatic primates, chromatic information is encoded in an opponent fashion and transmitted to the lateral geniculate nucleus (LGN) and visual cortex via parallel pathways. Chromatic selectivities of neurons in the LGN form two separate clusters, corresponding to two classes of cone opponency. In the visual cortex, however, the chromatic selectivities are more distributed, which is in accordance with a population code for color. Previous studies of cone signals in natural scenes typically found opponent codes with chromatic selectivities corresponding to two directions in color space. Here we investigated how the non-linear spatio-chromatic filtering in the retina influences the encoding of color signals. Cone signals were derived from hyper-spectral images of natural scenes and preprocessed by center-surround filtering and rectification, resulting in parallel ON and OFF channels. Independent Component Analysis (ICA) on these signals yielded a highly sparse code with basis functions that showed spatio-chromatic selectivities. In contrast to previous analyses of linear transformations of cone signals, chromatic selectivities were not restricted to two main chromatic axes, but were more continuously distributed in color space, similar to the population code of color in the early visual cortex. Our results indicate that spatio-chromatic processing in the retina leads to a more distributed and more efficient code for natural scenes. PMID:24098289

  12. Diversity-optimal power loading for intensity modulated MIMO optical wireless communications.

    PubMed

    Zhang, Yan-Yu; Yu, Hong-Yi; Zhang, Jian-Kang; Zhu, Yi-Jun

    2016-04-18

    In this paper, we consider the design of space codes for an intensity modulated direct detection multi-input multi-output optical wireless communication (IM/DD MIMO-OWC) system, in which the channel coefficients are independent and non-identically distributed log-normal variables, with variances and means known at the transmitter and channel state information available at the receiver. Utilizing the existing space code design criterion for IM/DD MIMO-OWC with a maximum likelihood (ML) detector, we design a diversity-optimal space code (DOSC) that maximizes both large-scale and small-scale diversity gains, and we prove that the spatial repetition code (RC) with a diversity-optimized power allocation is diversity-optimal among all high-dimensional nonnegative space code schemes under a commonly used optical power constraint. In addition, we show that one of the significant advantages of the DOSC is that it allows low-complexity ML detection. Simulation results indicate that in high signal-to-noise ratio (SNR) regimes, our proposed DOSC significantly outperforms RC, which is the best space code previously available for such systems.

  13. Modeling and simulation of RF photoinjectors for coherent light sources

    NASA Astrophysics Data System (ADS)

    Chen, Y.; Krasilnikov, M.; Stephan, F.; Gjonaj, E.; Weiland, T.; Dohlus, M.

    2018-05-01

    We propose a three-dimensional fully electromagnetic numerical approach for the simulation of RF photoinjectors for coherent light sources. The basic idea consists in incorporating a self-consistent photoemission model within a particle tracking code. The generation of electron beams in the injector is determined by the quantum efficiency (QE) of the cathode, the intensity profile of the driving laser as well as by the accelerating field and magnetic focusing conditions in the gun. The total charge emitted during an emission cycle can be limited by the space charge field at the cathode. Furthermore, the time and space dependent electromagnetic field at the cathode may induce a transient modulation of the QE due to surface barrier reduction of the emitting layer. In our modeling approach, all these effects are taken into account. The beam particles are generated dynamically according to the local QE of the cathode and the time dependent laser intensity profile. For the beam dynamics, a tracking code based on the Lienard-Wiechert retarded field formalism is employed. This code provides the single particle trajectories as well as the transient space charge field distribution at the cathode. As an application, the PITZ injector is considered. Extensive electron bunch emission simulations are carried out for different operation conditions of the injector, in the source limited as well as in the space charge limited emission regime. In both cases, fairly good agreement between measurements and simulations is obtained.

  14. A new free and open source tool for space plasma modeling.

    NASA Astrophysics Data System (ADS)

    Honkonen, I. J.

    2014-12-01

    I will present a new distributed-memory parallel, free and open source computational model for studying space plasma. The model is written in C++ with emphasis on good software development practices and code readability, without sacrificing serial or parallel performance. As such, the model could be especially useful for education, for learning both (magneto)hydrodynamics (MHD) and computational model development. By using the latest features of the C++ standard (2011) it has been possible to develop a very modular program, which improves not only the readability of the code but also the testability of the model, and decreases the effort required to make changes to various parts of the program. Major parts of the model, i.e., functionality not directly related to (M)HD, have been outsourced to other freely available libraries, which has reduced the development time of the model significantly. I will present an overview of the code architecture as well as details of different parts of the model, and will show examples of using the model, including preparing input files and plotting results. A multitude of 1-, 2- and 3-dimensional test cases are included in the software distribution, and the results of, for example, Kelvin-Helmholtz, bow shock, blast wave and reconnection tests will be presented.

  15. Colour cyclic code for Brillouin distributed sensors

    NASA Astrophysics Data System (ADS)

    Le Floch, Sébastien; Sauser, Florian; Llera, Miguel; Rochat, Etienne

    2015-09-01

    For the first time, colour cyclic coding (CCC) is theoretically and experimentally demonstrated for Brillouin optical time-domain analysis (BOTDA) distributed sensors. Compared to traditional intensity-modulated cyclic codes, the code provides an additional gain of √2 while keeping the same number of sequences as a colour coding. A comparison with a standard BOTDA sensor is performed and validates the theoretical coding gain.
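
    As a worked check of the quoted gains: the standard SNR gain of an intensity-modulated cyclic (Simplex-type) code of length L over single-pulse operation is (L + 1) / (2 * sqrt(L)), and the abstract reports a further factor of sqrt(2) for CCC. The code length below is an arbitrary example.

    from math import sqrt

    def cyclic_gain(L):
        """Standard coding gain of a length-L intensity-modulated cyclic
        (Simplex) code over single-pulse operation."""
        return (L + 1) / (2 * sqrt(L))

    L = 127
    g = cyclic_gain(L)
    print(g, g * sqrt(2))  # conventional cyclic gain vs. CCC's extra sqrt(2)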

  16. The impacts of marijuana dispensary density and neighborhood ecology on marijuana abuse and dependence.

    PubMed

    Mair, Christina; Freisthler, Bridget; Ponicki, William R; Gaidus, Andrew

    2015-09-01

    As an increasing number of states liberalize cannabis use and develop laws and local policies, it is essential to better understand the impacts of neighborhood ecology and marijuana dispensary density on marijuana use, abuse, and dependence. We investigated associations between marijuana abuse/dependence hospitalizations and community demographic and environmental conditions from 2001 to 2012 in California, as well as cross-sectional associations between local and adjacent marijuana dispensary densities and marijuana hospitalizations. We analyzed panel population data relating hospitalizations coded for marijuana abuse or dependence and assigned to residential ZIP codes in California from 2001 through 2012 (20,219 space-time units) to ZIP code demographic and ecological characteristics. Bayesian space-time misalignment models were used to account for spatial variations in geographic unit definitions over time, while also accounting for spatial autocorrelation using conditional autoregressive priors. We also analyzed cross-sectional associations between marijuana abuse/dependence and the density of dispensaries in local and spatially adjacent ZIP codes in 2012. An additional one dispensary per square mile in a ZIP code was cross-sectionally associated with a 6.8% increase in the number of marijuana hospitalizations (95% credible interval 1.033, 1.105) with a marijuana abuse/dependence code. Other local characteristics, such as the median household income and age and racial/ethnic distributions, were associated with marijuana hospitalizations in cross-sectional and panel analyses. Prevention and intervention programs for marijuana abuse and dependence may be particularly essential in areas of concentrated disadvantage. Policy makers may want to consider regulations that limit the density of dispensaries. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  17. HPSim: a GPU-accelerated online multi-particle beam dynamics simulation tool for ion linacs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pang, Xiaoying; Rybarcyk, Larry

    HPSim is a GPU-accelerated online multi-particle beam dynamics simulation tool for ion linacs. It was originally developed for use on the Los Alamos 800-MeV proton linac. It is a “z-code” that contains typical linac beam transport elements. The linac RF-gap transformation utilizes transit-time factors to calculate the beam acceleration therein. The space-charge effects are computed using the 2D SCHEFF (Space CHarge EFFect) algorithm, which calculates the radial and longitudinal space charge forces for cylindrically symmetric beam distributions. Other space-charge routines to be incorporated include the 3D PICNIC and a 3D Poisson solver. HPSim can simulate beam dynamics in drift tube linacs (DTLs) and coupled cavity linacs (CCLs). Elliptical superconducting cavity (SC) structures will also be incorporated into the code. The computational core of the code is written in C++ and accelerated using the NVIDIA CUDA technology. Users access the core code, which is wrapped in Python/C APIs, via Python scripts that enable ease-of-use and automation of the simulations. The overall linac description, including the EPICS PV machine control parameters, is kept in an SQLite database that also contains the calibration and conversion factors required to transform machine set points into model values used in the simulation.
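
    The transit-time-factor gap transformation mentioned above reduces, for the energy gain, to the textbook Panofsky relation dW = q * E0 * T * L * cos(phi); the sketch below uses illustrative parameters, not the Los Alamos linac's.

    from math import cos, radians

    def gap_energy_gain(q, E0, T, L, phi_deg):
        """RF-gap energy gain: q in units of e, average axial field E0 in
        MV/m, transit-time factor T, gap length L in m -> dW in MeV."""
        return q * E0 * T * L * cos(radians(phi_deg))

    # 1 MV/m field, T = 0.8, 5 cm gap, -30 deg synchronous phase
    print(gap_energy_gain(1, 1.0, 0.8, 0.05, -30.0), "MeV")  # ~0.0346 MeV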

  18. Utilizing Spectrum Efficiently (USE)

    DTIC Science & Technology

    2011-02-28

    Excerpt: 4.8 Space-Time Coded Asynchronous DS-CDMA with Decentralized MAI Suppression: Performance and Spectral Efficiency. In [60] multiple... supported at a given signal-to-interference ratio in asynchronous direct-sequence code-division multiple-access (DS-CDMA) systems was examined. It was...

  19. A new hybrid-Lagrangian numerical scheme for gyrokinetic simulation of tokamak edge plasma

    DOE PAGES

    Ku, S.; Hager, R.; Chang, C. S.; ...

    2016-04-01

    In order to enable kinetic simulation of non-thermal edge plasmas at a reduced computational cost, a new hybrid-Lagrangian δf scheme has been developed that utilizes a phase space grid in addition to the usual marker particles, taking advantage of the computational strengths of both. The new scheme splits the particle distribution function of a kinetic equation into two parts: marker particles contain the fast space-time varying, δf, part of the distribution function, and the coarse-grained phase-space grid contains the slow space-time varying part. The coarse-grained phase-space grid reduces the memory requirement and the computing cost, while the marker particles provide scalable computing ability for the fine-grained physics. Weights of the marker particles are determined by a direct weight evolution equation instead of the differential-form weight evolution equations that conventional delta-f schemes use. The particle weight can be slowly transferred to the phase space grid, thereby reducing the growth of the particle weights. The non-Lagrangian part of the kinetic equation – e.g., collision operation, ionization, charge exchange, heat source, radiative cooling, and others – can be operated directly on the phase space grid. Deviation of the particle distribution function on the velocity grid from a Maxwellian distribution function – driven by ionization, charge exchange and wall loss – is allowed to be arbitrarily large. The numerical scheme is implemented in the gyrokinetic particle code XGC1, which specializes in simulating the tokamak edge plasma that crosses the magnetic separatrix and is in contact with the material wall.
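
    A toy illustration of the weight-transfer idea described above (this is not XGC1 code): the distribution is split into a slowly varying grid part and marker-particle δf weights, and a fraction of each weight is periodically deposited onto the velocity grid so the marker weights stay small. The 1D velocity-space setting, function names, and the 10% transfer fraction are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Coarse velocity grid holding the slow part f0 (a Maxwellian here).
v_grid = np.linspace(-5.0, 5.0, 64)
f0 = np.exp(-v_grid**2 / 2.0) / np.sqrt(2.0 * np.pi)

# Marker particles carrying the fast delta-f part as weights.
v_p = rng.normal(size=10_000)
w_p = 0.01 * np.sin(v_p)          # stand-in for dynamically evolved weights

def transfer_weight_to_grid(v_p, w_p, v_grid, f0, fraction=0.1):
    """Deposit a fraction of each marker weight onto the nearest grid cell,
    mimicking the slow particle-to-grid weight transfer described above."""
    idx = np.clip(np.searchsorted(v_grid, v_p), 0, len(v_grid) - 1)
    dv = v_grid[1] - v_grid[0]
    deposited = np.bincount(idx, weights=fraction * w_p, minlength=len(v_grid))
    f0 = f0 + deposited / (len(v_p) * dv)   # grid part absorbs the weight
    w_p = (1.0 - fraction) * w_p            # marker weights shrink accordingly
    return f0, w_p

f0, w_p = transfer_weight_to_grid(v_p, w_p, v_grid, f0)
print("max |marker weight| after transfer:", np.abs(w_p).max())
```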

  1. A new hybrid-Lagrangian numerical scheme for gyrokinetic simulation of tokamak edge plasma

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ku, S., E-mail: sku@pppl.gov; Hager, R.; Chang, C.S.

    In order to enable kinetic simulation of non-thermal edge plasmas at a reduced computational cost, a new hybrid-Lagrangian δf scheme has been developed that utilizes the phase space grid in addition to the usual marker particles, taking advantage of the computational strengths from both sides. The new scheme splits the particle distribution function of a kinetic equation into two parts. Marker particles contain the fast space-time varying, δf, part of the distribution function and the coarse-grained phase-space grid contains the slow space-time varying part. The coarse-grained phase-space grid reduces the memory-requirement and the computing cost, while the marker particles provide scalable computing ability for the fine-grained physics. Weights of the marker particles are determined by a direct weight evolution equation instead of the differential form weight evolution equations that the conventional delta-f schemes use. The particle weight can be slowly transferred to the phase space grid, thereby reducing the growth of the particle weights. The non-Lagrangian part of the kinetic equation – e.g., collision operation, ionization, charge exchange, heat-source, radiative cooling, and others – can be operated directly on the phase space grid. Deviation of the particle distribution function on the velocity grid from a Maxwellian distribution function – driven by ionization, charge exchange and wall loss – is allowed to be arbitrarily large. The numerical scheme is implemented in the gyrokinetic particle code XGC1, which specializes in simulating the tokamak edge plasma that crosses the magnetic separatrix and is in contact with the material wall.

  2. An object oriented code for simulating supersymmetric Yang-Mills theories

    NASA Astrophysics Data System (ADS)

    Catterall, Simon; Joseph, Anosh

    2012-06-01

    We present SUSY_LATTICE, a C++ program that can be used to simulate certain classes of supersymmetric Yang-Mills (SYM) theories, including the well-known N=4 SYM in four dimensions, on a flat Euclidean space-time lattice. Discretization of SYM theories is an old problem in lattice field theory. It resisted solution until recently, when new ideas drawn from orbifold constructions and topological field theories were brought to bear on the question. The result has been the creation of a new class of lattice gauge theories in which the lattice action is invariant under one or more supersymmetries. The resultant theories are local, free of doublers, and possess exact gauge invariance. In principle they form the basis for a truly non-perturbative definition of the continuum SYM theories. In the continuum limit they reproduce versions of the SYM theories formulated in terms of twisted fields, which on a flat space-time is just a change of the field variables. In this paper, we briefly review these ideas and then go on to provide the details of the C++ code. We sketch the design of the code, with particular emphasis placed on SYM theories with N=(2,2) in two dimensions and N=4 in three and four dimensions, making one-to-one comparisons between the essential components of the SYM theories and their corresponding counterparts appearing in the simulation code. The code may be used to compute several quantities associated with the SYM theories such as the Polyakov loop, mean energy, and the width of the scalar eigenvalue distributions. Program summary. Program title: SUSY_LATTICE Catalogue identifier: AELS_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AELS_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC license, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 9315 No. of bytes in distributed program, including test data, etc.: 95 371 Distribution format: tar.gz Programming language: C++ Computer: PCs and Workstations Operating system: Any, tested on Linux machines Classification: 11.6 Nature of problem: To compute some of the observables of supersymmetric Yang-Mills theories such as the supersymmetric action, Polyakov/Wilson loops, scalar eigenvalues and Pfaffian phases. Solution method: We use the Rational Hybrid Monte Carlo algorithm followed by a Leapfrog evolution and a Metropolis test. The input parameters of the model are read in from a parameter file. Restrictions: This code applies only to supersymmetric gauge theories with extended supersymmetry, which undergo the process of maximal twisting. (See Section 2 of the manuscript for details.) Running time: From a few minutes to several hours depending on the amount of statistics needed.
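
    The solution method quoted above (molecular-dynamics evolution by leapfrog plus a Metropolis accept/reject test) is the skeleton of hybrid Monte Carlo; the rational approximation that makes it RHMC is omitted here. A minimal sketch for a single-site quartic toy action, with all names and parameter values invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

def action(phi):            # toy quartic action, stand-in for the SYM action
    return 0.5 * phi**2 + 0.25 * phi**4

def grad_action(phi):
    return phi + phi**3

def hmc_step(phi, n_steps=20, eps=0.05):
    """One leapfrog trajectory followed by a Metropolis accept/reject test."""
    p = rng.normal()
    h_old = 0.5 * p**2 + action(phi)
    phi_new, p_new = phi, p
    p_new -= 0.5 * eps * grad_action(phi_new)      # initial half kick
    for _ in range(n_steps - 1):
        phi_new += eps * p_new                     # drift
        p_new -= eps * grad_action(phi_new)        # full kick
    phi_new += eps * p_new
    p_new -= 0.5 * eps * grad_action(phi_new)      # final half kick
    h_new = 0.5 * p_new**2 + action(phi_new)
    if rng.random() < np.exp(h_old - h_new):       # Metropolis test
        return phi_new, True
    return phi, False

phi, acc = 0.1, 0
for _ in range(1000):
    phi, accepted = hmc_step(phi)
    acc += accepted
print("acceptance rate:", acc / 1000)
```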

  3. System performance predictions for Space Station Freedom's electric power system

    NASA Technical Reports Server (NTRS)

    Kerslake, Thomas W.; Hojnicki, Jeffrey S.; Green, Robert D.; Follo, Jeffrey C.

    1993-01-01

    Space Station Freedom Electric Power System (EPS) capability to effectively deliver power to housekeeping and user loads continues to strongly influence Freedom's design and planned approaches for assembly and operations. The EPS design consists of silicon photovoltaic (PV) arrays, nickel-hydrogen batteries, and direct-current power management and distribution hardware and cabling. To properly characterize the inherent EPS design capability, detailed system performance analyses must be performed for early stages as well as for the fully assembled station up to 15 years after beginning of life. Such analyses were repeatedly performed using the FORTRAN code SPACE (Station Power Analysis for Capability Evaluation), developed at the NASA Lewis Research Center over a 10-year period. SPACE combines orbital mechanics routines, station orientation/pointing routines, PV array and battery performance models, and a distribution system load-flow analysis to predict EPS performance. Time-dependent performance degradation, low-Earth-orbit environmental interactions, and EPS architecture build-up are incorporated in SPACE. Results from two typical SPACE analytical cases are presented: (1) an electric-load-driven case and (2) a maximum EPS capability case.

  4. FEWZ 2.0: A code for hadronic Z production at next-to-next-to-leading order

    NASA Astrophysics Data System (ADS)

    Gavin, Ryan; Li, Ye; Petriello, Frank; Quackenbush, Seth

    2011-11-01

    We introduce an improved version of the simulation code FEWZ (Fully Exclusive W and Z Production) for hadron collider production of lepton pairs through the Drell-Yan process at next-to-next-to-leading order (NNLO) in the strong coupling constant. The program is fully differential in the phase space of leptons and additional hadronic radiation. The new version offers users significantly more options for customization. FEWZ now bins multiple, user-selectable histograms during a single run, and produces parton distribution function (PDF) errors automatically. It also features a significantly improved integration routine, and can take advantage of multiple processor cores locally or on the Condor distributed computing system. We illustrate the new features of FEWZ by presenting numerous phenomenological results for LHC physics. We compare NNLO QCD with initial ATLAS and CMS results, and discuss in detail the effects of detector acceptance on the measurement of angular quantities associated with Z-boson production. We address the issue of technical precision in the presence of severe phase-space cuts. Program summary. Program title: FEWZ Catalogue identifier: AEJP_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEJP_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 6 280 771 No. of bytes in distributed program, including test data, etc.: 173 027 645 Distribution format: tar.gz Programming language: Fortran 77, C++, Python Computer: Mac, PC Operating system: Mac OSX, Unix/Linux Has the code been vectorized or parallelized?: Yes. User-selectable, 1 to 219 RAM: 200 Mbytes for common parton distribution functions Classification: 11.1 External routines: CUBA numerical integration library, numerous parton distribution sets (see text); these are provided with the code. Nature of problem: Determination of the Drell-Yan Z/photon production cross section and decay into leptons, with kinematic distributions of leptons and jets including full spin correlations, at next-to-next-to-leading order in the strong coupling constant. Solution method: Virtual loop integrals are decomposed into master integrals using automated techniques. Singularities are extracted from real radiation terms via sector decomposition, which separates singularities and maps onto suitable phase space variables. The result is convoluted with parton distribution functions. Each piece is numerically integrated over phase space, which allows arbitrary cuts on the observed particles. Each sample point may be binned during numerical integration, providing histograms, and reweighted by parton distribution function error eigenvectors, which provides PDF errors. Restrictions: Output does not correspond to unweighted events, and cannot be interfaced with a shower Monte Carlo. Additional comments: The distribution file for this program is over 170 Mbytes and therefore is not delivered directly when download or e-mail is requested; instead, an HTML file giving details of how the program can be obtained is sent. Running time: One day for total cross sections with 0.1% integration errors assuming typical cuts, up to 1 week for smooth kinematic distributions with sub-percent integration errors for each bin.
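
    The automatic PDF errors mentioned above follow the standard Hessian eigenvector prescription: each observable is re-evaluated (or reweighted) with the paired error sets, and the symmetric uncertainty is dX = 0.5 * sqrt(sum_i (X_i+ - X_i-)^2). A hedged sketch with made-up numbers and no LHAPDF calls:

```python
import numpy as np

def hessian_pdf_error(obs_plus, obs_minus):
    """Symmetric Hessian PDF uncertainty from paired eigenvector sets:
    dX = 0.5 * sqrt( sum_i (X_i^+ - X_i^-)^2 ).  Inputs are the observable
    evaluated with each +/- error set."""
    obs_plus = np.asarray(obs_plus, dtype=float)
    obs_minus = np.asarray(obs_minus, dtype=float)
    return 0.5 * np.sqrt(np.sum((obs_plus - obs_minus) ** 2))

# Illustrative numbers for a single cross-section bin (picobarns, made up):
central = 120.0
plus  = [120.8, 119.7, 120.3, 121.1]
minus = [119.3, 120.2, 119.8, 119.1]
print(f"sigma = {central:.1f} +/- {hessian_pdf_error(plus, minus):.2f} pb")
```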

  5. SU-C-201-03: Coded Aperture Gamma-Ray Imaging Using Pixelated Semiconductor Detectors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Joshi, S; Kaye, W; Jaworski, J

    2015-06-15

    Purpose: Improved localization of gamma-ray emissions from radiotracers is essential to the progress of nuclear medicine. Polaris is a portable, room-temperature-operated gamma-ray imaging spectrometer composed of two 3×3 arrays of thick CdZnTe (CZT) detectors, which detect gammas between 30keV and 3MeV with energy resolution of <1% FWHM at 662keV. Compton imaging is used to map out source distributions in 4-pi space; however, it is only effective above 300keV, where Compton scatter is dominant. This work extends imaging to photoelectric energies (<300keV) using coded aperture imaging (CAI), which is essential for localization of Tc-99m (140keV). Methods: CAI, similar to the pinhole camera, relies on an attenuating mask, with open/closed elements, placed between the source and position-sensitive detectors. Partial attenuation of the source results in a “shadow” or count distribution that closely matches a portion of the mask pattern. Ideally, each source direction corresponds to a unique count distribution. Using backprojection reconstruction, the source direction is determined within the field of view. Knowledge of the 3D position of interaction results in improved image quality. Results: Using a single array of detectors, a coded aperture mask, and multiple Co-57 (122keV) point sources, image reconstruction is performed in real time, on an event-by-event basis, resulting in images with an angular resolution of ∼6 degrees. Although material nonuniformities contribute to image degradation, the superposition of images from individual detectors results in improved SNR. CAI was integrated with Compton imaging for a seamless transition between energy regimes. Conclusion: For the first time, CAI has been applied to thick, 3D position-sensitive CZT detectors. Real-time, combined CAI and Compton imaging is performed using two 3×3 detector arrays, resulting in a source distribution in space. This system has been commercialized by H3D, Inc. and is being acquired for various applications worldwide, including proton therapy imaging R&D.
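
    Coded-aperture backprojection is often implemented as a cross-correlation: the detector count distribution is correlated against a decoding pattern derived from the mask, and peaks indicate source directions. A minimal 1D sketch with an invented mask (not the Polaris mask or H3D's reconstruction code):

```python
import numpy as np

rng = np.random.default_rng(2)

# Invented 1D mask: 1 = open element, 0 = closed element.
mask = np.array([1, 0, 1, 1, 0, 1, 0, 0, 0, 1, 1, 0, 1], dtype=float)

# A source at shift s casts a shadow = mask rolled by s (plus counting noise).
true_shift = 4
shadow = np.roll(mask, true_shift) * 200.0
counts = rng.poisson(shadow + 10.0)            # 10 counts/bin flat background

# Balanced decoding array: open -> +1, closed -> -1 suppresses the background.
decoder = 2.0 * mask - 1.0

# Correlate counts against every cyclic shift of the decoder (backprojection).
corr = np.array([np.dot(counts, np.roll(decoder, s)) for s in range(len(mask))])
print("estimated source shift:", int(np.argmax(corr)), "(true:", true_shift, ")")
```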

  6. NLSEmagic: Nonlinear Schrödinger equation multi-dimensional Matlab-based GPU-accelerated integrators using compact high-order schemes

    NASA Astrophysics Data System (ADS)

    Caplan, R. M.

    2013-04-01

    We present a simple-to-use yet powerful code package called NLSEmagic to numerically integrate the nonlinear Schrödinger equation in one, two, and three dimensions. NLSEmagic is a high-order finite-difference code package which utilizes graphics processing unit (GPU) parallel architectures. The codes running on the GPU are many times faster than their serial counterparts, and are much cheaper to run than on standard parallel clusters. The codes are developed with usability and portability in mind, and therefore are written to interface with MATLAB utilizing custom GPU-enabled C codes with the MEX-compiler interface. The packages are freely distributed, including user manuals and set-up files. Catalogue identifier: AEOJ_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEOJ_v1_0.html Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 124453 No. of bytes in distributed program, including test data, etc.: 4728604 Distribution format: tar.gz Programming language: C, CUDA, MATLAB. Computer: PC, MAC. Operating system: Windows, MacOS, Linux. Has the code been vectorized or parallelized?: Yes. Number of processors used: Single CPU, number of GPU processors dependent on chosen GPU card (max is currently 3072 cores on GeForce GTX 690). Supplementary material: Setup guide, Installation guide. RAM: Highly dependent on dimensionality and grid size. For a typical medium-large problem size in three dimensions, 4GB is sufficient. Keywords: Nonlinear Schrödinger Equation, GPU, high-order finite difference, Bose-Einstein condensates. Classification: 4.3, 7.7. Nature of problem: Integrate solutions of the time-dependent one-, two-, and three-dimensional cubic nonlinear Schrödinger equation. Solution method: The integrators utilize a fully explicit fourth-order Runge-Kutta scheme in time and both second- and fourth-order differencing in space. The integrators are written to run on NVIDIA GPUs and are interfaced with MATLAB, including built-in visualization and analysis tools. Restrictions: The main restriction for the GPU integrators is the amount of RAM on the GPU, as the code is currently designed to run only on a single GPU. Unusual features: Ability to visualize real-time simulations through the interaction of MATLAB and the compiled GPU integrators. Additional comments: Setup guide and Installation guide provided. Program has a dedicated web site at www.nlsemagic.com. Running time: A three-dimensional run with a grid dimension of 87×87×203 for 3360 time steps (100 non-dimensional time units) takes about one and a half minutes on a GeForce GTX 580 GPU card.
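
    The integration strategy described above (explicit fourth-order Runge-Kutta in time, central differences in space) is easy to prototype in serial. A 1D sketch using the convention i ψ_t + ψ_xx + |ψ|² ψ = 0 with periodic boundaries; the grid sizes and time step are illustrative assumptions, and this is not the CUDA/MEX code itself:

```python
import numpy as np

# Grid (illustrative sizes; NLSEmagic's defaults may differ).
N, L = 256, 40.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]
dt = 0.1 * dx**2          # conservative explicit time step

def rhs(psi):
    """psi_t = i (psi_xx + |psi|^2 psi): second-order central differences,
    periodic boundaries via np.roll."""
    lap = (np.roll(psi, 1) - 2.0 * psi + np.roll(psi, -1)) / dx**2
    return 1j * (lap + np.abs(psi) ** 2 * psi)

def rk4_step(psi, dt):
    k1 = rhs(psi)
    k2 = rhs(psi + 0.5 * dt * k1)
    k3 = rhs(psi + 0.5 * dt * k2)
    k4 = rhs(psi + dt * k3)
    return psi + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

psi = (1.0 / np.cosh(x)).astype(complex)   # bright-soliton-like initial data
norm0 = np.sum(np.abs(psi) ** 2) * dx
for _ in range(2000):
    psi = rk4_step(psi, dt)
print("relative norm drift:", abs(np.sum(np.abs(psi)**2) * dx - norm0) / norm0)
```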

  7. Constrained coding for the deep-space optical channel

    NASA Technical Reports Server (NTRS)

    Moision, B. E.; Hamkins, J.

    2002-01-01

    We investigate methods of coding for a channel subject to a large dead-time constraint, i.e., a constraint on the minimum spacing between transmitted pulses, with the deep-space optical channel as the motivating example.
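
    A dead-time constraint is a runlength constraint: after each pulse (a "1"), at least d slot times (zeros) must pass before the next pulse. The noiseless capacity of such a constrained channel is log2 of the spectral radius of the constraint graph's adjacency matrix (a classical result going back to Shannon); the value of d below is an illustrative parameter, not one from the paper.

```python
import numpy as np

def dead_time_capacity(d):
    """Capacity (bits/slot) of binary sequences in which consecutive
    pulses ('1's) are separated by at least d '0's.  States 0..d count
    slots since the last pulse; a pulse is allowed only in state d."""
    n = d + 1
    A = np.zeros((n, n))
    for s in range(n):
        A[s, min(s + 1, d)] = 1.0      # emit '0': move toward state d
    A[d, 0] = 1.0                      # emit '1' from state d: reset counter
    lam = max(abs(np.linalg.eigvals(A)))
    return np.log2(lam)

for d in (1, 2, 4):
    print(f"d = {d}: capacity = {dead_time_capacity(d):.4f} bits/slot")
```

    For d = 1 this reproduces the familiar (1, infinity) runlength-limited capacity, log2 of the golden ratio, about 0.6942 bits per slot.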

  8. Spatial information outflow from the hippocampal circuit: distributed spatial coding and phase precession in the subiculum.

    PubMed

    Kim, Steve M; Ganguli, Surya; Frank, Loren M

    2012-08-22

    Hippocampal place cells convey spatial information through a combination of spatially selective firing and theta phase precession. The way in which this information influences regions like the subiculum that receive input from the hippocampus remains unclear. The subiculum receives direct inputs from area CA1 of the hippocampus and sends divergent output projections to many other parts of the brain, so we examined the firing patterns of rat subicular neurons. We found a substantial transformation in the subicular code for space from sparse to dense firing rate representations along a proximal-distal anatomical gradient: neurons in the proximal subiculum are more similar to canonical, sparsely firing hippocampal place cells, whereas neurons in the distal subiculum have higher firing rates and more distributed spatial firing patterns. Using information theory, we found that the more distributed spatial representation in the subiculum carries, on average, more information about spatial location and context than the sparse spatial representation in CA1. Remarkably, despite the disparate firing rate properties of subicular neurons, we found that neurons at all proximal-distal locations exhibit robust theta phase precession, with similar spiking oscillation frequencies as neurons in area CA1. Our findings suggest that the subiculum is specialized to compress sparse hippocampal spatial codes into highly informative distributed codes suitable for efficient communication to other brain regions. Moreover, despite this substantial compression, the subiculum maintains finer scale temporal properties that may allow it to participate in oscillatory phase coding and spike timing-dependent plasticity in coordination with other regions of the hippocampal circuit.
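
    The information-theoretic comparison described above is commonly made with the Skaggs spatial-information measure; whether the authors used exactly this estimator is an assumption here. A sketch of the standard bits-per-spike form:

```python
import numpy as np

def spatial_info_bits_per_spike(occupancy_p, rates):
    """Skaggs-style spatial information:
    I = sum_i p_i * (r_i / rbar) * log2(r_i / rbar)  [bits/spike],
    with rbar = sum_i p_i * r_i.  Zero-rate bins contribute nothing."""
    p = np.asarray(occupancy_p, dtype=float)
    r = np.asarray(rates, dtype=float)
    rbar = np.sum(p * r)
    mask = r > 0
    ratio = r[mask] / rbar
    return float(np.sum(p[mask] * ratio * np.log2(ratio)))

# A sparse 'place cell' vs. a dense, distributed cell over 8 spatial bins
# (firing rates in Hz, made up for illustration):
p_occ = np.full(8, 1 / 8)
sparse = [0, 0, 0, 8.0, 0, 0, 0, 0]       # fires in one bin only
dense  = [1.5, 2.0, 1.0, 3.0, 2.5, 1.0, 2.0, 1.5]
print("sparse:", spatial_info_bits_per_spike(p_occ, sparse), "bits/spike")
print("dense: ", spatial_info_bits_per_spike(p_occ, dense), "bits/spike")
```

    Note that per-spike and per-second information can rank codes differently; the paper's claim about the distal subiculum concerns average information carried by the population, which this toy example does not attempt to reproduce.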

  9. Arbitrary numbers counter fair decisions: trails of markedness in card distribution.

    PubMed

    Schroeder, Philipp A; Pfister, Roland

    2015-01-01

    Converging evidence from controlled experiments suggests that the mere processing of a number and its attributes, such as value or parity, might affect free-choice decisions between different actions. For example, the spatial-numerical association of response codes (SNARC) effect indicates that the magnitude of a digit is associated with a spatial representation and might therefore affect spatial response choices (i.e., decisions between a "left" and a "right" option). At the same time, other (linguistic) features of a number, such as parity, are embedded into space and might likewise prime left or right responses through the feature words odd and even, respectively (the markedness association of response codes, MARC, effect). In this experiment we aimed to document such influences in a natural setting. We therefore assessed number-space and parity-space association effects by exposing participants to a fair distribution task in a card-playing scenario. Participants drew cards, read their number values out loud, and announced their response choice, i.e., dealing the card to a left vs. right player, indicated by Playmobil characters. Not only did participants prefer to deal more cards to the right player, the cards' digits also affected response choices and led to a slightly but systematically unfair distribution, supported by a regular SNARC effect and counteracted by a reversed MARC effect. The experiment demonstrates the impact of SNARC- and MARC-like biases on free-choice behavior through verbal and visual numerical information processing, even in a setting with high external validity.

  10. LENSED: a code for the forward reconstruction of lenses and sources from strong lensing observations

    NASA Astrophysics Data System (ADS)

    Tessore, Nicolas; Bellagamba, Fabio; Metcalf, R. Benton

    2016-12-01

    Robust modelling of strong lensing systems is fundamental to exploit the information they contain about the distribution of matter in galaxies and clusters. In this work, we present LENSED, a new code which performs forward parametric modelling of strong lenses. LENSED takes advantage of a massively parallel ray-tracing kernel to perform the necessary calculations on a modern graphics processing unit (GPU). This makes the precise rendering of the background lensed sources much faster, and allows the simultaneous optimization of tens of parameters for the selected model. With a single run, the code is able to obtain the full posterior probability distribution for the lens light, the mass distribution and the background source at the same time. LENSED is first tested on mock images which reproduce realistic space-based observations of lensing systems. In this way, we show that it is able to recover unbiased estimates of the lens parameters, even when the sources do not follow exactly the assumed model. Then, we apply it to a subsample of the Sloan Lens ACS Survey lenses, in order to demonstrate its use on real data. The results generally agree with the literature, and highlight the flexibility and robustness of the algorithm.

  11. CoCoNuT: General relativistic hydrodynamics code with dynamical space-time evolution

    NASA Astrophysics Data System (ADS)

    Dimmelmeier, Harald; Novak, Jérôme; Cerdá-Durán, Pablo

    2012-02-01

    CoCoNuT is a general relativistic hydrodynamics code with dynamical space-time evolution. The main aim of this numerical code is the study of several astrophysical scenarios in which general relativity can play an important role, namely the collapse of rapidly rotating stellar cores and the evolution of isolated neutron stars. The code has two flavors: CoCoA, the axisymmetric (2D) magnetized version, and CoCoNuT, the 3D non-magnetized version.

  12. Entropic Lattice Boltzmann Simulations of Turbulence

    NASA Astrophysics Data System (ADS)

    Keating, Brian; Vahala, George; Vahala, Linda; Soe, Min; Yepez, Jeffrey

    2006-10-01

    Because of its simplicity and nearly perfect parallelization and vectorization on supercomputer platforms, lattice Boltzmann (LB) methods hold great promise for simulations of nonlinear physics. Indeed, our MHD-LB code has the best sustained performance/PE of any code on the Earth Simulator. By projecting into the higher-dimensional kinetic phase space, the solution trajectory becomes simpler and much easier to compute than in the standard CFD approach. However, simple LB, with its simple advection and local BGK collisional relaxation, does not impose positive definiteness of the distribution functions during the time evolution. This leads to numerical instabilities for very low transport coefficients. In entropic LB (ELB) one determines a discrete H-theorem and the equilibrium distribution functions subject to the collisional invariants. The ELB algorithm is unconditionally stable for arbitrarily small transport coefficients. Various choices of velocity discretization are examined: 15-, 19-, and 27-bit ELB models. The connection between Tsallis and Boltzmann entropies is clarified.
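
    For context, the standard (non-entropic) LB update the abstract contrasts against is a stream-and-BGK-relax step. A minimal D2Q9 sketch on a periodic grid; the grid size and relaxation time are illustrative assumptions, and none of the entropic stabilization is included:

```python
import numpy as np

# D2Q9 lattice: discrete velocities and weights.
c = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],[1,1],[-1,1],[-1,-1],[1,-1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)
tau, nx, ny = 0.8, 32, 32                      # illustrative relaxation time/grid

f = np.ones((9, nx, ny)) * w[:, None, None]    # start at rest, unit density

def equilibrium(rho, ux, uy):
    """Second-order Maxwell-Boltzmann expansion used by BGK collisions."""
    cu = 3.0 * (c[:, 0, None, None] * ux + c[:, 1, None, None] * uy)
    usq = 1.5 * (ux**2 + uy**2)
    return w[:, None, None] * rho * (1.0 + cu + 0.5 * cu**2 - usq)

def lb_step(f):
    # Streaming: shift each population along its lattice velocity (periodic).
    f = np.stack([np.roll(np.roll(f[i], c[i, 0], axis=0), c[i, 1], axis=1)
                  for i in range(9)])
    # Moments, then BGK relaxation toward the local equilibrium.
    rho = f.sum(axis=0)
    ux = (f * c[:, 0, None, None]).sum(axis=0) / rho
    uy = (f * c[:, 1, None, None]).sum(axis=0) / rho
    return f - (f - equilibrium(rho, ux, uy)) / tau

for _ in range(100):
    f = lb_step(f)
print("mass conserved:", np.isclose(f.sum(), nx * ny))
```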

  13. Performance analysis of an OAM multiplexing-based MIMO FSO system over atmospheric turbulence using space-time coding with channel estimation.

    PubMed

    Zhang, Yan; Wang, Ping; Guo, Lixin; Wang, Wei; Tian, Hongxin

    2017-08-21

    The average bit error rate (ABER) performance of an orbital angular momentum (OAM) multiplexing-based free-space optical (FSO) system with multiple-input multiple-output (MIMO) architecture has been investigated over atmospheric turbulence, considering channel estimation and space-time coding. The impact of different types of space-time coding, modulation orders, turbulence strengths, and numbers of receive antennas on the transmission performance of this OAM-FSO system is also taken into account. On the basis of the proposed system model, the analytical expressions of the received signals carried by the k-th OAM mode of the n-th receive antenna are derived for the vertical Bell Labs layered space-time (V-BLAST) and space-time block code (STBC) schemes, respectively. With channel estimation carried out by the least-squares (LS) algorithm, the zero-forcing with ordered successive interference cancellation (ZF-OSIC) equalizer of the V-BLAST scheme and the Alamouti decoder of the STBC scheme are adopted to mitigate the performance degradation induced by atmospheric turbulence. The results show that the ABERs obtained with channel estimation agree well with those of turbulence phase-screen simulations. The ABERs of this OAM multiplexing-based MIMO system deteriorate with increasing turbulence strength. Both the V-BLAST and STBC schemes significantly improve system performance by mitigating the distortions of atmospheric turbulence as well as additive white Gaussian noise (AWGN). In addition, the ABER performance of both space-time coding schemes can be further enhanced by increasing the number of receive antennas for diversity gain, and STBC outperforms V-BLAST in this system for data recovery. This work is beneficial to OAM FSO system design.
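
    Of the two schemes compared above, the Alamouti STBC is compact enough to sketch end to end. A noise-free 2x1 example (the paper's system adds OAM multiplexing, turbulence, LS channel estimation, and noise on top of this basic structure):

```python
import numpy as np

rng = np.random.default_rng(3)

def alamouti_encode(s1, s2):
    """Alamouti STBC over two antennas and two symbol periods:
    antenna 1 sends (s1, -s2*), antenna 2 sends (s2, s1*)."""
    return np.array([[s1, -np.conj(s2)],
                     [s2,  np.conj(s1)]])

def alamouti_decode(r1, r2, h1, h2):
    """Linear combining for a 2x1 channel assumed constant over the block."""
    g = abs(h1)**2 + abs(h2)**2
    s1_hat = (np.conj(h1) * r1 + h2 * np.conj(r2)) / g
    s2_hat = (np.conj(h2) * r1 - h1 * np.conj(r2)) / g
    return s1_hat, s2_hat

# QPSK symbols and a random (Rayleigh-like) 2x1 channel; noise-free here.
s1, s2 = (1 + 1j) / np.sqrt(2), (1 - 1j) / np.sqrt(2)
h1, h2 = rng.normal(size=2) + 1j * rng.normal(size=2)
X = alamouti_encode(s1, s2)
r1 = h1 * X[0, 0] + h2 * X[1, 0]       # received in symbol period 1
r2 = h1 * X[0, 1] + h2 * X[1, 1]       # received in symbol period 2
print(np.round(alamouti_decode(r1, r2, h1, h2), 6))   # recovers (s1, s2)
```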

  14. Comparison of viscous-shock-layer solutions by time-asymptotic and steady-state methods. [flow distribution around a Jupiter entry probe

    NASA Technical Reports Server (NTRS)

    Gupta, R. N.; Moss, J. N.; Simmonds, A. L.

    1982-01-01

    Two flow-field codes employing time-marching and space-marching numerical techniques were evaluated. Both methods were used to analyze the flow field around a massively blown Jupiter entry probe under perfect-gas conditions. In order to obtain a direct point-by-point comparison, the computations were made using identical grids and turbulence models. For the same degree of accuracy, the space-marching scheme takes much less time than the time-marching method and would appear to provide accurate results for problems with nonequilibrium chemistry, free from the effect of local differences in time on the final solution that is inherent in time-marching methods. With the time-marching method, however, solutions are obtainable for realistic entry probe shapes with massive or uniform surface blowing rates, whereas with the space-marching technique it is difficult to obtain converged solutions for such flow conditions. The choice of numerical method is therefore problem dependent. Both methods give equally good results for the cases where results are compared with experimental data.

  15. Adaptive Grid Refinement for Atmospheric Boundary Layer Simulations

    NASA Astrophysics Data System (ADS)

    van Hooft, Antoon; van Heerwaarden, Chiel; Popinet, Stephane; van der linden, Steven; de Roode, Stephan; van de Wiel, Bas

    2017-04-01

    We validate and benchmark an adaptive mesh refinement (AMR) algorithm for numerical simulations of the atmospheric boundary layer (ABL). The AMR technique aims to distribute computational resources efficiently over a domain by refining and coarsening the numerical grid locally and in time. This can be beneficial for studying cases in which length scales vary significantly in time and space. We present results for a case describing the growth and decay of a convective boundary layer. The AMR results are benchmarked against two runs using a fixed, finely meshed grid: first with the same numerical formulation as the AMR code, and second with a code dedicated to ABL studies. Compared to the fixed, isotropic-grid runs, the AMR algorithm can coarsen and refine the grid such that accurate results are obtained while using only a fraction of the grid cells. Performance-wise, the AMR run was cheaper than the fixed, isotropic-grid run with a similar numerical formulation. However, for this specific case, the dedicated code outperformed both of the aforementioned runs.

  16. 14 CFR 234.10 - Voluntary disclosure of on-time performance codes.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 14 Aeronautics and Space 4 2010-01-01 2010-01-01 false Voluntary disclosure of on-time performance codes. 234.10 Section 234.10 Aeronautics and Space OFFICE OF THE SECRETARY, DEPARTMENT OF TRANSPORTATION (AVIATION PROCEEDINGS) ECONOMIC REGULATIONS AIRLINE SERVICE QUALITY PERFORMANCE REPORTS § 234.10 Voluntary...

  17. 14 CFR 234.9 - Reporting of on-time performance codes.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 14 Aeronautics and Space 4 2010-01-01 2010-01-01 false Reporting of on-time performance codes. 234.9 Section 234.9 Aeronautics and Space OFFICE OF THE SECRETARY, DEPARTMENT OF TRANSPORTATION (AVIATION PROCEEDINGS) ECONOMIC REGULATIONS AIRLINE SERVICE QUALITY PERFORMANCE REPORTS § 234.9 Reporting of...

  18. 14 CFR 234.8 - Calculation of on-time performance codes.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 14 Aeronautics and Space 4 2010-01-01 2010-01-01 false Calculation of on-time performance codes. 234.8 Section 234.8 Aeronautics and Space OFFICE OF THE SECRETARY, DEPARTMENT OF TRANSPORTATION (AVIATION PROCEEDINGS) ECONOMIC REGULATIONS AIRLINE SERVICE QUALITY PERFORMANCE REPORTS § 234.8 Calculation...

  19. A high-performance Fortran code to calculate spin- and parity-dependent nuclear level densities

    NASA Astrophysics Data System (ADS)

    Sen'kov, R. A.; Horoi, M.; Zelevinsky, V. G.

    2013-01-01

    A high-performance Fortran code is developed to calculate the spin- and parity-dependent shell model nuclear level densities. The algorithm is based on the extension of methods of statistical spectroscopy and implies exact calculation of the first and second Hamiltonian moments for different configurations at fixed spin and parity. The proton-neutron formalism is used. We have applied the method to calculating the level densities for a set of nuclei in the sd, pf, and pf+g model spaces. Examples of the calculations for 28Si (in the sd model space) and 64Ge (in the pf+g model space) are presented. To illustrate the power of the method, we estimate the ground state energy of 64Ge in the larger model space pf+g, which is not accessible to direct shell model diagonalization due to the prohibitively large dimension, by comparing with the nuclear level densities at low excitation energy calculated in the smaller model space pf. Program summary. Program title: MM Catalogue identifier: AENM_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AENM_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 193181 No. of bytes in distributed program, including test data, etc.: 1298585 Distribution format: tar.gz Programming language: Fortran 90, MPI. Computer: Any architecture with a Fortran 90 compiler and MPI. Operating system: Linux. RAM: Proportional to the system size; in our examples, up to 75 MB Classification: 17.15. External routines: MPICH2 (http://www.mcs.anl.gov/research/projects/mpich2/) Nature of problem: Calculation of the spin- and parity-dependent nuclear level density. Solution method: The algorithm implies exact calculation of the first and second Hamiltonian moments for different configurations at fixed spin and parity. The code is parallelized using the Message Passing Interface and a master-slaves dynamical load-balancing approach. Restrictions: The program uses a two-body interaction in a restricted single-level basis, for example, GXPF1A in the pf valence space. Running time: Depends on the system size and the number of processors used (from 1 min to several hours).
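
    The spirit of the moments method is that each configuration contributes a bell-shaped partial density set by its exact first and second moments and its dimension. A simplified sketch summing plain Gaussians (the production method uses more refined finite-range forms, and all numbers below are invented):

```python
import numpy as np

def moments_level_density(E, centroids, widths, dims):
    """Level density approximated as a sum of Gaussians, one per
    configuration: rho(E) = sum_c d_c * N(E; E_c, sigma_c), where E_c and
    sigma_c come from the first/second Hamiltonian moments and d_c is the
    configuration dimension (all illustrative here)."""
    E = np.atleast_1d(E)[:, None]
    g = np.exp(-0.5 * ((E - centroids) / widths) ** 2) \
        / (np.sqrt(2.0 * np.pi) * widths)
    return (dims * g).sum(axis=1)

# Three made-up configurations (energies in MeV):
centroids = np.array([12.0, 18.0, 25.0])
widths = np.array([3.0, 4.0, 5.0])
dims = np.array([200.0, 1500.0, 9000.0])
for E in (5.0, 10.0, 15.0):
    rho = moments_level_density(E, centroids, widths, dims)[0]
    print(f"rho({E:.0f} MeV) = {rho:.2f} per MeV")
```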

  20. Maxwell: A semi-analytic 4D code for earthquake cycle modeling of transform fault systems

    NASA Astrophysics Data System (ADS)

    Sandwell, David; Smith-Konter, Bridget

    2018-05-01

    We have developed a semi-analytic approach (and computational code) for rapidly calculating 3D time-dependent deformation and stress caused by screw dislocations embedded within an elastic layer overlying a Maxwell viscoelastic half-space. The Maxwell model is developed in the Fourier domain to exploit the computational advantages of the convolution theorem, substantially reducing the computational burden associated with an arbitrarily complex distribution of force couples necessary for fault modeling. The new aspect of this development is the ability to model lateral variations in shear modulus. Ten benchmark examples are provided for testing and verification of the algorithms and code. One final example simulates interseismic deformation along the San Andreas Fault System, where lateral variations in shear modulus are included to simulate lateral variations in lithospheric structure.
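
    The convolution-theorem trick the abstract leans on: a spatial convolution of a source distribution with a Green's function becomes a pointwise product in the Fourier domain. A generic 2D sketch; the Gaussian kernel below is an invented stand-in, not the layered viscoelastic Green's function the actual code uses.

```python
import numpy as np

def fourier_convolve(source, kernel):
    """Convolve two equal-shape 2D grids via the convolution theorem:
    conv = IFFT( FFT(source) * FFT(kernel) ).  Periodic boundaries implied."""
    return np.real(np.fft.ifft2(np.fft.fft2(source) * np.fft.fft2(kernel)))

n = 128
x = np.fft.fftfreq(n) * n                 # grid coordinates in FFT ordering
X, Y = np.meshgrid(x, x, indexing="ij")

# Invented smoothing kernel centered at the origin (NOT the real Green's fn).
kernel = np.exp(-(X**2 + Y**2) / (2.0 * 4.0**2))
kernel /= kernel.sum()

source = np.zeros((n, n))
source[10, 20] = 1.0                      # a single 'force couple'
field = fourier_convolve(source, kernel)
print("peak response at grid index:", np.unravel_index(field.argmax(), field.shape))
```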

  1. Comparison of space radiation calculations for deterministic and Monte Carlo transport codes

    NASA Astrophysics Data System (ADS)

    Lin, Zi-Wei; Adams, James; Barghouty, Abdulnasser; Randeniya, Sharmalee; Tripathi, Ram; Watts, John; Yepes, Pablo

    For space radiation protection of astronauts and electronic equipment, it is necessary to develop and use accurate radiation transport codes. Radiation transport codes include deterministic codes, such as HZETRN from NASA and UPROP from the Naval Research Laboratory, and Monte Carlo codes, such as FLUKA, the Geant4 toolkit, and HETC-HEDS. The deterministic and Monte Carlo codes complement each other in that deterministic codes are very fast while Monte Carlo codes are more elaborate. It is therefore important to investigate how well the results of deterministic codes compare with those of Monte Carlo transport codes and where they differ. In this study we evaluate these codes in their space radiation applications by comparing their outputs for the same space radiation environments, shielding geometry, and materials. Typical space radiation environments, such as the 1977 solar minimum galactic cosmic ray environment, are used as well-defined inputs, and simple geometries made of aluminum, water, and/or polyethylene represent the shielding material. We then compare various outputs of these codes, such as dose-depth curves and the flux spectra of different fragments and other secondary particles. These comparisons enable us to learn more about the main differences between these space radiation transport codes. At the same time, they help us to identify the qualitative and quantitative features that these transport codes have in common.

  2. Intermittent Fermi-Pasta-Ulam Dynamics at Equilibrium

    NASA Astrophysics Data System (ADS)

    Campbell, David; Danieli, Carlo; Flach, Sergej

    The equilibrium value of an observable defines a manifold in the phase space of an ergodic and equipartitioned many-body system. A typical trajectory pierces that manifold infinitely often as time goes to infinity. We use these piercings to measure both the relaxation time of the lowest-frequency eigenmode of the Fermi-Pasta-Ulam chain and the fluctuations of the subsequent dynamics in equilibrium. We show that previously obtained scaling laws for equipartition times are modified at low energy density due to an unexpected slowing down of the relaxation. The dynamics in equilibrium is characterized by a power-law distribution of excursion times far off equilibrium, with diverging variance. The long excursions arise from sticky dynamics close to regular orbits in the phase space. Our method is generalizable to large classes of many-body systems. The authors acknowledge financial support from IBS (Project Code IBS-R024-D1).

  3. Bandwidth efficient CCSDS coding standard proposals

    NASA Technical Reports Server (NTRS)

    Costello, Daniel J., Jr.; Perez, Lance C.; Wang, Fu-Quan

    1992-01-01

    The basic concatenated coding system for the space telemetry channel consists of a Reed-Solomon (RS) outer code, a symbol interleaver/deinterleaver, and a bandwidth-efficient trellis inner code. A block diagram of this configuration is shown. The system may operate with or without the outer code and interleaver. In this recommendation, the outer code remains the (255,223) RS code over GF(2^8) with an error-correcting capability of t = 16 eight-bit symbols. This code's excellent performance and the existence of fast, cost-effective decoders justify its continued use. The purpose of the interleaver/deinterleaver is to distribute burst errors out of the inner decoder over multiple codewords of the outer code. This utilizes the error-correcting capability of the outer code more efficiently and reduces the probability of an RS decoder failure. Since the space telemetry channel is not considered bursty, the required interleaving depth is primarily a function of the inner decoding method. A diagram of an interleaver with depth 4 that is compatible with the (255,223) RS code is shown. Specific interleaver requirements are discussed after the inner code recommendations.
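
    The interleaver's purpose described above is easy to demonstrate: with depth 4, a channel burst of up to 4 symbols is spread so that each RS codeword sees at most one extra symbol error. A toy sketch with short codewords (a real (255,223) codeword has 255 symbols):

```python
import numpy as np

def interleave(symbols, depth):
    """Block interleaver: write 'depth' codewords row-wise, read column-wise,
    so a burst of channel errors is spread across codewords."""
    return np.asarray(symbols).reshape(depth, -1).T.ravel()

def deinterleave(symbols, depth):
    n = len(symbols) // depth
    return np.asarray(symbols).reshape(n, depth).T.ravel()

# Toy example: depth 4, codeword length 8.
data = np.arange(32)                       # 4 codewords of 8 symbols each
tx = interleave(data, 4)
rx = tx.copy()
rx[5:9] = -1                               # a 4-symbol channel burst
out = deinterleave(rx, 4)
for cw in range(4):                        # burst lands 1 symbol per codeword
    hits = int((out[cw * 8:(cw + 1) * 8] == -1).sum())
    print(f"codeword {cw}: {hits} corrupted symbol(s)")
```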

  4. Representing Color Ensembles.

    PubMed

    Chetverikov, Andrey; Campana, Gianluca; Kristjánsson, Árni

    2017-10-01

    Colors are rarely uniform, yet little is known about how people represent color distributions. We introduce a new method for studying color ensembles based on intertrial learning in visual search. Participants looked for an oddly colored diamond among diamonds with colors taken from either uniform or Gaussian color distributions. On test trials, the targets had various distances in feature space from the mean of the preceding distractor color distribution. Targets on test trials therefore served as probes into probabilistic representations of distractor colors. Test-trial response times revealed a striking similarity between the physical distribution of colors and their internal representations. The results demonstrate that the visual system represents color ensembles in a more detailed way than previously thought, coding not only mean and variance but, most surprisingly, the actual shape (uniform or Gaussian) of the distribution of colors in the environment.

  5. Monte Carlo simulation of electron beams from an accelerator head using PENELOPE.

    PubMed

    Sempau, J; Sánchez-Reyes, A; Salvat, F; ben Tahar, H O; Jiang, S B; Fernández-Varea, J M

    2001-04-01

    The Monte Carlo code PENELOPE has been used to simulate electron beams from a Siemens Mevatron KDS linac with nominal energies of 6, 12 and 18 MeV. Owing to its accuracy, which stems from that of the underlying physical interaction models, PENELOPE is suitable for simulating problems of interest to the medical physics community. It includes a geometry package that allows the definition of complex quadric geometries, such as those of irradiation instruments, in a straightforward manner. Dose distributions in water simulated with PENELOPE agree well with experimental measurements using a silicon detector and a monitoring ionization chamber. Insertion of a lead slab in the incident beam at the surface of the water phantom produces sharp variations in the dose distributions, which are correctly reproduced by the simulation code. Results from PENELOPE are also compared with those of equivalent simulations with the EGS4-based user codes BEAM and DOSXYZ. Angular and energy distributions of electrons and photons in the phase-space plane (at the downstream end of the applicator) obtained from both simulation codes are similar, although significant differences do appear in some cases. These differences, however, are shown to have a negligible effect on the calculated dose distributions. Various practical aspects of the simulations, such as the calculation of statistical uncertainties and the effect of the 'latent' variance in the phase-space file, are discussed in detail.

  6. Modeling solvation effects in real-space and real-time within density functional approaches

    NASA Astrophysics Data System (ADS)

    Delgado, Alain; Corni, Stefano; Pittalis, Stefano; Rozzi, Carlo Andrea

    2015-10-01

    The Polarizable Continuum Model (PCM) can be used in conjunction with Density Functional Theory (DFT) and its time-dependent extension (TDDFT) to simulate the electronic and optical properties of molecules and nanoparticles immersed in a dielectric environment, typically liquid solvents. In this contribution, we develop a methodology to account for solvation effects in real-space (and real-time) (TD)DFT calculations. The boundary elements method is used to calculate the solvent reaction potential in terms of the apparent charges that spread over the van der Waals solute surface. In a real-space representation, this potential may exhibit a Coulomb singularity at grid points that are close to the cavity surface. We propose a simple approach to regularize such singularity by using a set of spherical Gaussian functions to distribute the apparent charges. We have implemented the proposed method in the Octopus code and present results for the solvation free energies and solvatochromic shifts for a representative set of organic molecules in water.
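
    The regularization described above replaces each point-like apparent charge with a spherical Gaussian, whose potential is finite everywhere: V(r) = q erf(r / (sqrt(2) sigma)) / r, tending to q sqrt(2/pi) / sigma as r goes to 0. A sketch; the sigma value is an illustrative assumption, not a parameter from the Octopus implementation.

```python
import numpy as np
from scipy.special import erf

def gaussian_charge_potential(q, r, sigma):
    """Electrostatic potential (atomic units) of a charge q spread as a
    spherical Gaussian of width sigma: V(r) = q * erf(r/(sqrt(2)*sigma)) / r.
    Finite at r = 0, where V -> q * sqrt(2/pi) / sigma."""
    r = np.asarray(r, dtype=float)
    near_zero = r <= 1e-12
    safe_r = np.where(near_zero, 1.0, r)   # avoid 0/0 in the vector branch
    v = q * erf(safe_r / (np.sqrt(2.0) * sigma)) / safe_r
    return np.where(near_zero, q * np.sqrt(2.0 / np.pi) / sigma, v)

r = np.array([0.0, 0.1, 0.5, 1.0, 5.0])
print(gaussian_charge_potential(1.0, r, sigma=0.5))   # smooth, no singularity
```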

  8. On a model of three-dimensional bursting and its parallel implementation

    NASA Astrophysics Data System (ADS)

    Tabik, S.; Romero, L. F.; Garzón, E. M.; Ramos, J. I.

    2008-04-01

    A mathematical model for the simulation of three-dimensional bursting phenomena and its parallel implementation are presented. The model consists of four nonlinearly coupled partial differential equations that include fast and slow variables and exhibits bursting in the absence of diffusion. The differential equations have been discretized by means of a linearly implicit finite difference method, second-order accurate in both space and time, on equally spaced grids. The resulting system of linear algebraic equations at each time level has been solved by means of the Preconditioned Conjugate Gradient (PCG) method. Three different parallel implementations of the proposed mathematical model have been developed; two of these implementations, i.e., the MPI and the PETSc codes, are based on a message-passing paradigm, while the third one, i.e., the OpenMP code, is based on a shared-address-space paradigm. These three implementations are evaluated on two current high-performance parallel architectures, i.e., a dual-processor cluster and a Shared Distributed Memory (SDM) system. A novel representation of the results that emphasizes the most relevant factors affecting the performance of the parallel implementations is proposed. The comparative analysis of the computational results shows that the MPI and OpenMP implementations are about twice as efficient as the PETSc code on the SDM system. It is also shown that, for the conditions reported here, the nonlinear dynamics of the three-dimensional bursting phenomena exhibits three stages characterized by asynchronous, synchronous, and then asynchronous oscillations before a quiescent state is reached. It is also shown that the fast system reaches steady state in much less time than the slow variables.

  9. Local active information storage as a tool to understand distributed neural information processing

    PubMed Central

    Wibral, Michael; Lizier, Joseph T.; Vögler, Sebastian; Priesemann, Viola; Galuske, Ralf

    2013-01-01

    Every act of information processing can in principle be decomposed into the component operations of information storage, transfer, and modification. Yet, while this is easily done for today's digital computers, the application of these concepts to neural information processing has been hampered by the lack of proper mathematical definitions of these operations on information. Recently, definitions were given for the dynamics of these information processing operations on a local scale in space and time in a distributed system, and the specific concept of local active information storage (LAIS) was successfully applied to the analysis and optimization of artificial neural systems. However, no attempt to measure the space-time dynamics of local active information storage in neural data had been made to date. Here we measure local active information storage on a local scale in time and space in voltage-sensitive dye imaging data from area 18 of the cat. We show that storage reflects neural properties such as stimulus preferences and surprise upon unexpected stimulus change, and in area 18 reflects the abstract concept of an ongoing stimulus despite the locally random nature of this stimulus. We suggest that LAIS will be a useful quantity for testing theories of cortical function, such as predictive coding. PMID:24501593
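
    Local active information storage at time n is the local log-ratio a(n) = log2 [ p(x_n | past of length k) / p(x_n) ]. A plug-in estimator for a short discrete series is small enough to sketch; real analyses of imaging data need far more careful estimation (and continuous-valued estimators) than this toy.

```python
import numpy as np
from collections import Counter

def local_active_info_storage(x, k=2):
    """Plug-in estimate of local active information storage for a discrete
    series: a(n) = log2 p(x_n | x_{n-k..n-1}) - log2 p(x_n).
    Returns one value per predictable time point."""
    x = list(x)
    pasts = [tuple(x[i - k:i]) for i in range(k, len(x))]
    nexts = x[k:]
    joint = Counter(zip(pasts, nexts))
    past_c = Counter(pasts)
    next_c = Counter(nexts)
    n = len(nexts)
    out = []
    for past, sym in zip(pasts, nexts):
        p_cond = joint[(past, sym)] / past_c[past]
        p_marg = next_c[sym] / n
        out.append(np.log2(p_cond / p_marg))
    return np.array(out)

# A perfectly periodic signal stores information: LAIS is 1 bit at each point.
signal = [0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1]
print(np.round(local_active_info_storage(signal, k=2), 3))
```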

  10. Power optimization of wireless media systems with space-time block codes.

    PubMed

    Yousefi'zadeh, Homayoun; Jafarkhani, Hamid; Moshfeghi, Mehran

    2004-07-01

    We present analytical and numerical solutions to the problem of power control in wireless media systems with multiple antennas. We formulate a set of optimization problems aimed at minimizing the total power consumption of wireless media systems subject to a given level of QoS and an available bit rate. Our formulation takes into consideration the power consumption related to source coding, channel coding, and the transmission of multiple transmit antennas. In our study, we consider Gauss-Markov and video source models, Rayleigh fading channels along with the Bernoulli/Gilbert-Elliott loss models, and space-time block codes.

  11. Fokker-Planck simulation of runaway electron generation in disruptions with the hot-tail effect

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nuga, H., E-mail: nuga@p-grp.nucleng.kyoto-u.ac.jp; Fukuyama, A.; Yagi, M.

    2016-06-15

    To study runaway electron generation in disruptions, we have extended the three-dimensional (two-dimensional in momentum space, one-dimensional in the radial direction) Fokker-Planck code, which describes the evolution of the relativistic momentum distribution function of electrons and the induced toroidal electric field in a self-consistent manner. A particular focus is placed on the hot-tail effect in two-dimensional momentum space. The effect appears if the drop of the background plasma temperature is sufficiently rapid compared with the electron-electron slowing-down time at a few times the pre-quench thermal velocity. It contributes not only to the enhancement of primary runaway electron generation but also to the broadening of the runaway electron distribution in the pitch-angle direction. If the thermal energy loss during the major disruption is assumed to be isotropic, there are hot-tail electrons that have sufficiently large perpendicular momentum, and the runaway electron distribution becomes broader in the pitch-angle direction. In addition, pitch-angle scattering also contributes to the broadening. Since the electric field is reduced by the burst of runaway electron generation, the time required to accelerate electrons to the runaway region becomes longer, and the longer acceleration period makes the pitch-angle scattering more effective.

  12. Stepwise inference of likely dynamic flux distributions from metabolic time series data.

    PubMed

    Faraji, Mojdeh; Voit, Eberhard O

    2017-07-15

    Most metabolic pathways contain more reactions than metabolites and therefore have a wide stoichiometric matrix that corresponds to infinitely many possible flux distributions that are perfectly compatible with the dynamics of the metabolites in a given dataset. This under-determinedness poses a challenge for the quantitative characterization of flux distributions from time series data and thus for the design of adequate, predictive models. Here we propose a method that reduces the degrees of freedom in a stepwise manner and leads to a dynamic flux distribution that is, in a statistical sense, likely to be close to the true distribution. We applied the proposed method to the lignin biosynthesis pathway in switchgrass. The system consists of 16 metabolites and 23 enzymatic reactions. It has seven degrees of freedom and therefore admits a large space of dynamic flux distributions that all fit a set of metabolic time series data equally well. The proposed method reduces this space in a systematic and biologically reasonable manner and converges to a likely dynamic flux distribution in just a few iterations. The estimated solution and the true flux distribution, which is known in this case, show excellent agreement and thereby lend support to the method. The computational model was implemented in MATLAB (version R2014a, The MathWorks, Natick, MA). The source code is available at https://github.gatech.edu/VoitLab/Stepwise-Inference-of-Likely-Dynamic-Flux-Distributions and www.bst.bme.gatech.edu/research.php. Contact: mojdeh@gatech.edu or eberhard.voit@bme.gatech.edu. Supplementary data are available at Bioinformatics online.
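
    The under-determinedness the authors describe is concrete: with more reactions than metabolites, S v = dm/dt has infinitely many solutions v. A common starting point (not the authors' stepwise method) is the minimum-norm solution via the pseudoinverse, with the null space of S quantifying the remaining freedom. All matrices and values below are invented for illustration.

```python
import numpy as np
from scipy.linalg import null_space

# Toy stoichiometric matrix: 2 metabolites, 4 reactions (wide, rank 2).
S = np.array([[ 1.0, -1.0,  0.0, -1.0],
              [ 0.0,  1.0, -1.0,  1.0]])

dmdt = np.array([0.2, -0.1])          # measured metabolite derivatives

# Minimum-norm flux distribution consistent with the dynamics: v = S^+ dm/dt.
v_min = np.linalg.pinv(S) @ dmdt
print("min-norm fluxes:", np.round(v_min, 4))
print("residual:", np.linalg.norm(S @ v_min - dmdt))

# Any null-space vector can be added without changing dm/dt; this is
# exactly the freedom a stepwise reduction method has to resolve.
N = null_space(S)
print("null-space dimension:", N.shape[1])   # 4 - rank(S) = 2 degrees of freedom
```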

  13. High-Fidelity Coupled Monte-Carlo/Thermal-Hydraulics Calculations

    NASA Astrophysics Data System (ADS)

    Ivanov, Aleksandar; Sanchez, Victor; Ivanov, Kostadin

    2014-06-01

    Monte Carlo methods have been used as reference reactor physics calculation tools worldwide. Advances in computer technology allow the calculation of detailed flux distributions in both space and energy. In most cases, however, those calculations are done under the assumption of homogeneous material density and temperature distributions. The aim of this work is to develop a consistent methodology for providing realistic three-dimensional thermal-hydraulic distributions by coupling the in-house developed sub-channel code SUBCHANFLOW with the standard Monte Carlo transport code MCNP. In addition to the innovative technique of on-the-fly material definition, a flux-based weight-window technique has been introduced to improve both the magnitude and the distribution of the relative errors. Finally, a coupled code system for the simulation of steady-state reactor physics problems has been developed. Besides the problem of effective feedback data interchange between the codes, the treatment of the temperature dependence of the continuous-energy nuclear data has been investigated.

  14. Consistent Adjoint Driven Importance Sampling using Space, Energy and Angle

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peplow, Douglas E.; Mosher, Scott W; Evans, Thomas M

    2012-08-01

    For challenging radiation transport problems, hybrid methods combine the accuracy of Monte Carlo methods with the global information present in deterministic methods. One of the most successful hybrid methods is CADIS (Consistent Adjoint Driven Importance Sampling). This method uses a deterministic adjoint solution to construct a biased source distribution and consistent weight windows to optimize a specific tally in a Monte Carlo calculation. The method has been implemented into transport codes using just the spatial and energy information from the deterministic adjoint and has been used in many applications to compute tallies with much higher figures of merit than analog calculations. CADIS also outperforms user-supplied importance values, which usually take long periods of user time to develop. This work extends CADIS to develop weight windows that are a function of the position, energy, and direction of the Monte Carlo particle. Two types of consistent source biasing are presented: one method that biases the source in space and energy while preserving the original directional distribution, and one method that biases the source in space, energy, and direction. Seven simple example problems are presented which compare the use of the standard space/energy CADIS with the new space/energy/angle treatments.
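
    The standard CADIS construction is compact enough to sketch: given a source density q and a deterministic adjoint (importance) flux phi+, the biased source is q_hat = q*phi+ / <q, phi+> and the weight-window centers are w = <q, phi+> / phi+, so source particles are born with weights that match their windows. The arrays below are invented one-group, 1D stand-ins, not output from a real adjoint solver.

```python
import numpy as np

def cadis(q, adjoint):
    """Space-only CADIS: biased source and weight-window centers from a
    deterministic adjoint flux (importance map)."""
    R = np.sum(q * adjoint)            # response estimate <q, phi+>
    q_biased = q * adjoint / R         # sample source from importance-weighted pdf
    ww_center = R / adjoint            # born weight equals the window center
    return q_biased, ww_center

# Invented 1D problem: uniform source, importance rising toward a detector.
cells = 10
q = np.full(cells, 1.0 / cells)
adjoint = np.exp(np.linspace(-3.0, 0.0, cells))   # made-up adjoint solution
q_b, ww = cadis(q, adjoint)
print("biased source pdf:", np.round(q_b, 3))
print("weight windows:   ", np.round(ww, 3))
```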

  15. Modeling the propagation of nonlinear three-dimensional acoustic beams in inhomogeneous media.

    PubMed

    Jing, Yuan; Cleveland, Robin O

    2007-09-01

    A three-dimensional model of the forward propagation of nonlinear sound beams in inhomogeneous media, a generalized Khokhlov-Zabolotskaya-Kuznetsov equation, is described. The Texas time-domain code (which accounts for paraxial diffraction, nonlinearity, thermoviscous absorption, and absorption and dispersion associated with multiple relaxation processes) was extended to solve for the propagation of nonlinear beams for the case where all medium properties vary in space. The code was validated with measurements of the nonlinear acoustic field generated by a phased array transducer operating at 2.5 MHz in water. A nonuniform layer of gel was employed to create an inhomogeneous medium. There was good agreement between the code and measurements in capturing the shift in the pressure distribution of both the fundamental and second harmonic due to the gel layer. The results indicate that the numerical tool described here is appropriate for propagation of nonlinear sound beams through weakly inhomogeneous media.

  16. An Initial Study of the Sensitivity of Aircraft Vortex Spacing System (AVOSS) Spacing Sensitivity to Weather and Configuration Input Parameters

    NASA Technical Reports Server (NTRS)

    Riddick, Stephen E.; Hinton, David A.

    2000-01-01

    A study has been performed on a computer code modeling an aircraft wake vortex spacing system during final approach. This code represents an initial engineering model of a system to calculate the reduced approach separation criteria needed to increase airport productivity. This report evaluates the model's sensitivity to various weather conditions (crosswind, crosswind variance, turbulent kinetic energy, and thermal gradient), code configurations (approach corridor option and wake demise definition), and post-processing techniques (rounding of provided spacing values and controller time variance).

  17. GASFLOW: A Computational Fluid Dynamics Code for Gases, Aerosols, and Combustion, Volume 3: Assessment Manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Müller, C.; Hughes, E. D.; Niederauer, G. F.

    1998-10-01

    Los Alamos National Laboratory (LANL) and Forschungszentrum Karlsruhe (FzK) are developing GASFLOW, a three-dimensional (3D) fluid dynamics field code, as a best-estimate tool to characterize local phenomena within a flow field. Examples of 3D phenomena include circulation patterns; flow stratification; hydrogen distribution mixing and stratification; combustion and flame propagation; effects of noncondensable gas distribution on local condensation and evaporation; and aerosol entrainment, transport, and deposition. An analysis with GASFLOW will result in a prediction of the gas composition and discrete particle distribution in space and time throughout the facility, and the resulting pressure and temperature loadings on the walls and internal structures, with or without combustion. A major application of GASFLOW is for predicting the transport, mixing, and combustion of hydrogen and other gases in nuclear reactor containment and other facilities. It has been applied to situations involving transporting and distributing combustible gas mixtures. It has been used to study gas dynamic behavior in low-speed, buoyancy-driven flows, as well as sonic flows or diffusion-dominated flows, and during chemically reacting flows, including deflagrations. The effects of controlling such mixtures by safety systems can be analyzed. The code version described in this manual is designated GASFLOW 2.1, which combines previous versions of the United States Nuclear Regulatory Commission code HMS (for Hydrogen Mixing Studies) and the Department of Energy and FzK versions of GASFLOW. The code was written in standard Fortran 90. This manual comprises three volumes. Volume I describes the governing physical equations and computational model. Volume II describes how to use the code to set up a model geometry, specify gas species and material properties, define initial and boundary conditions, and specify different outputs, especially graphical displays. Sample problems are included. Volume III contains some of the assessments performed by LANL and FzK.

  18. Mapping the Milky Way Galaxy with LISA

    NASA Technical Reports Server (NTRS)

    McKinnon, Jose A.; Littenberg, Tyson

    2012-01-01

    Gravitational wave detectors in the mHz band (such as the Laser Interferometer Space Antenna, or LISA) will observe thousands of compact binaries in the galaxy, which can be used to better understand the structure of the Milky Way. To test the effectiveness of LISA in measuring the distribution of the galaxy, we simulated the Close White Dwarf Binary (CWDB) gravitational wave sky using different models for the Milky Way. To do so, we have developed a galaxy density distribution modeling code based on the Markov Chain Monte Carlo method. The code uses different distributions to construct realizations of the galaxy. We then use the Fisher Information Matrix to estimate the variance and covariance of the recovered parameters for each detected CWDB. This is the first step toward characterizing the capabilities of space-based gravitational wave detectors to constrain models for galactic structure, such as the size and orientation of the bar in the center of the Milky Way.

  19. Visualization tool for three-dimensional plasma velocity distributions (ISEE_3D) as a plug-in for SPEDAS

    NASA Astrophysics Data System (ADS)

    Keika, Kunihiro; Miyoshi, Yoshizumi; Machida, Shinobu; Ieda, Akimasa; Seki, Kanako; Hori, Tomoaki; Miyashita, Yukinaga; Shoji, Masafumi; Shinohara, Iku; Angelopoulos, Vassilis; Lewis, Jim W.; Flores, Aaron

    2017-12-01

    This paper introduces ISEE_3D, an interactive visualization tool for three-dimensional plasma velocity distribution functions, developed by the Institute for Space-Earth Environmental Research, Nagoya University, Japan. The tool provides a variety of methods to visualize the distribution function of space plasma: scatter, volume, and isosurface modes. The tool also has a wide range of functions, such as displaying magnetic field vectors and two-dimensional slices of distributions, to facilitate extensive analysis. A coordinate transformation to magnetic field coordinates is also implemented in the tool. The source code of the tool is written as scripts in Interactive Data Language, a widely used data analysis software language in the fields of space physics and solar physics. The current version of the tool can be used for data files of the plasma distribution function from the Geotail satellite mission, which are publicly accessible through the Data Archives and Transmission System of the Institute of Space and Astronautical Science (ISAS)/Japan Aerospace Exploration Agency (JAXA). The tool is also available in the Space Physics Environment Data Analysis Software to visualize plasma data from the Magnetospheric Multiscale and the Time History of Events and Macroscale Interactions during Substorms missions. The tool is planned to be applied to data from other missions, such as Arase (ERG) and the Van Allen Probes, after replacing or adding data-loading plug-ins. This visualization tool helps scientists better understand the dynamics of space plasma, particularly in regions where the magnetohydrodynamic approximation is not valid, for example, the Earth's inner magnetosphere, magnetopause, bow shock, and plasma sheet.

  20. Multi-Aperture Digital Coherent Combining for Free-Space Optical Communication Receivers

    DTIC Science & Technology

    2016-04-21

    Distribution A: Public Release; unlimited distribution. OCIS codes: (060.1660) Coherent communications; (070.2025) Discrete... Multi-aperture coherent combining enables using many discrete apertures together to create a large effective aperture.

  1. Time-dependent transport of energetic particles in magnetic turbulence: computer simulations versus analytical theory

    NASA Astrophysics Data System (ADS)

    Arendt, V.; Shalchi, A.

    2018-06-01

    We explore numerically the transport of energetic particles in a turbulent magnetic field configuration. A test-particle code is employed to compute running diffusion coefficients as well as particle distribution functions in the different directions of space. Our numerical findings are compared with models commonly used in diffusion theory such as Gaussian distribution functions and solutions of the cosmic ray Fokker-Planck equation. Furthermore, we compare the running diffusion coefficients across the mean magnetic field with solutions obtained from the time-dependent version of the unified non-linear transport theory. In most cases we find that particle distribution functions are indeed of Gaussian form as long as a two-component turbulence model is employed. For turbulence setups with reduced dimensionality, however, the Gaussian distribution can no longer be obtained. It is also shown that the unified non-linear transport theory agrees with simulated perpendicular diffusion coefficients as long as the pure two-dimensional model is excluded.
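
    The running diffusion coefficients mentioned in the record are defined as d_xx(t) = <(Δx)²>/(2t). A minimal sketch of that estimator follows, using a hypothetical random-walk ensemble in place of test-particle orbits integrated in a turbulent magnetic field:

```python
# Sketch: running diffusion coefficient d_xx(t) = <(x - x0)^2> / (2 t)
# from an ensemble of particles; a plain random walk stands in for
# orbits integrated in a turbulent magnetic field.
import numpy as np

rng = np.random.default_rng(1)
n_particles, n_steps, dt = 5000, 2000, 0.01
steps = rng.normal(0.0, np.sqrt(dt), size=(n_particles, n_steps))
x = np.cumsum(steps, axis=1)                  # trajectories x(t), x(0) = 0

t = dt * np.arange(1, n_steps + 1)
d_xx = np.mean(x**2, axis=0) / (2.0 * t)      # running diffusion coefficient

print(d_xx[-1])   # ~0.5 for this walk; the diffusive regime is the plateau
```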

  2. Transport calculations and accelerator experiments needed for radiation risk assessment in space.

    PubMed

    Sihver, Lembit

    2008-01-01

    The major uncertainties in space radiation risk estimates for humans are associated with the poor knowledge of the biological effects of low- and high-LET radiation, with a smaller contribution coming from the characterization of the space radiation field and its primary interactions with the shielding and the human body. However, to decrease the uncertainties on the biological effects and increase the accuracy of the risk coefficients for charged-particle radiation, the initial charged-particle spectra from the Galactic Cosmic Rays (GCRs) and the Solar Particle Events (SPEs), and the radiation transport through the shielding material of the space vehicle and the human body, must be better estimated. Since it is practically impossible to measure all primary and secondary particles from all possible position-projectile-target-energy combinations needed for a correct risk assessment in space, accurate particle and heavy ion transport codes must be used. These codes are also needed when estimating the risk of radiation-induced failures in advanced microelectronics, such as single-event effects, and the efficiency of different shielding materials. It is therefore important that the models and transport codes be carefully benchmarked and validated to make sure they fulfill preset accuracy criteria, e.g. to be able to predict particle fluence, dose and energy distributions within a certain accuracy. When validating the accuracy of the transport codes, both space- and ground-based accelerator experiments are needed. The efficiency of passive shielding and protection of electronic devices should also be tested in accelerator experiments and compared to simulations using different transport codes. In this paper different multipurpose particle and heavy ion transport codes are presented, different concepts of shielding and protection are discussed, as well as future accelerator experiments needed for testing and validating codes and shielding materials.

  3. Composite load spectra for select space propulsion structural components

    NASA Technical Reports Server (NTRS)

    Newell, J. F.; Kurth, R. E.; Ho, H.

    1986-01-01

    A multiyear program is being performed with the objective of developing generic load models, with multiple levels of progressive sophistication, to simulate the composite (combined) load spectra that are induced in space propulsion system components representative of Space Shuttle Main Engines (SSME), such as transfer ducts, turbine blades, and liquid oxygen (LOX) posts. Progress of the first year's effort includes completion of a sufficient portion of each task -- probabilistic models, code development, validation, and an initial operational code. From its inception, this code has had an expert-system philosophy that can be extended throughout the program and in the future. The initial operational code is applicable only to turbine blade type loadings. The probabilistic model included in the operational code has fitting routines for loads that utilize a modified discrete probabilistic distribution termed RASCAL, a barrier crossing method, and a Monte Carlo method. An initial load model was developed by Battelle that is currently used for the slowly varying duty-cycle type loading. The intent is to use the model and related codes essentially in their current form for all loads that are based on measured or calculated data that follow a slowly varying profile.

  4. The Simpsons program 6-D phase space tracking with acceleration

    NASA Astrophysics Data System (ADS)

    Machida, S.

    1993-12-01

    A particle tracking code, Simpsons, covering 6-D phase space including energy ramping, has been developed to model proton synchrotrons and storage rings. We take time as the independent variable to change machine parameters and diagnose beam quality in much the same way as real machines, unlike existing tracking codes for synchrotrons, which advance a particle element by element. Arbitrary energy ramping and rf voltage curves as a function of time are read from an input file defining a machine cycle. The code is used to study beam dynamics with time-dependent parameters. Some examples from simulations of the Superconducting Super Collider (SSC) boosters are shown.

  5. Interference between Space and Time Estimations: From Behavior to Neurons.

    PubMed

    Marcos, Encarni; Genovesio, Aldo

    2017-01-01

    Influences between time and space can be found in our daily life, in which we are surrounded by numerous spatial metaphors for time. For instance, when we move files from one folder to another on our computer, a horizontal line that grows from left to right informs us about the elapsed and remaining time of the procedure; similarly, in our communication we use several spatial terms to refer to time. Although with some differences in the degree of interference, not only does space influence time but both magnitudes influence each other. Indeed, since childhood our estimations of time are influenced by space even when space should be irrelevant, and the same occurs when estimating space with time as a distractor. Such interference between magnitudes has also been observed in monkeys, even though they do not use language or computers, suggesting that the two magnitudes are tightly coupled beyond communication and technology. Imaging and lesion studies have indicated that the same brain areas are involved in the processing of both magnitudes and have suggested that, rather than coding the specific magnitude itself, the brain represents them as abstract concepts. Recent neurophysiological studies in prefrontal cortex, however, have shown that the coding of absolute and relative space and time in this area is realized by independent groups of neurons. Interestingly, a high overlap was instead observed in this same area in the coding of goal choices across tasks. These results suggest that the interference between the two magnitudes might occur not during the perception or estimation of space and time but, at least in the prefrontal cortex, in a subsequent phase in which the goal has to be chosen or the response provided.

  6. QRAP: A numerical code for projected (Q)uasiparticle (RA)ndom (P)hase approximation

    NASA Astrophysics Data System (ADS)

    Samana, A. R.; Krmpotić, F.; Bertulani, C. A.

    2010-06-01

    A computer code for the quasiparticle random phase approximation (QRPA) and projected quasiparticle random phase approximation (PQRPA) models of nuclear structure is explained in detail. The residual interaction is approximated by a simple δ-force. An important application of the code consists in evaluating nuclear matrix elements involved in neutrino-nucleus reactions. As an example, cross sections for 56Fe and 12C are calculated and the code output is explained. The application to other nuclei and the description of other nuclear and weak decay processes are also discussed. Program summary - Title of program: QRAP (Quasiparticle RAndom Phase approximation). Computers: The code has been created on a PC, but also runs on UNIX or LINUX machines. Operating systems: WINDOWS or UNIX. Program language used: Fortran-77. Memory required to execute with typical data: 16 MB of RAM and 2 MB of hard disk space. No. of lines in distributed program, including test data, etc.: ~8000. No. of bytes in distributed program, including test data, etc.: ~256 kB. Distribution format: tar.gz. Nature of physical problem: The program calculates neutrino- and antineutrino-nucleus cross sections as a function of the incident neutrino energy, and muon capture rates, using the QRPA or PQRPA as nuclear structure models. Method of solution: The QRPA, or PQRPA, equations are solved in a self-consistent way for even-even nuclei. The nuclear matrix elements for the neutrino-nucleus interaction are treated as the beta inverse reaction of odd-odd nuclei as a function of the transfer momentum. Typical running time: ≈5 min on a 3 GHz processor for Data set 1.

  7. A computational and theoretical analysis of falling frequency VLF emissions

    NASA Astrophysics Data System (ADS)

    Nunn, David; Omura, Yoshiharu

    2012-08-01

    Recently much progress has been made in the simulation and theoretical understanding of rising frequency triggered emissions and rising chorus. Both PIC and Vlasov VHS codes produce risers in the region downstream from the equator toward which the VLF waves are traveling. The VHS code only produces fallers or downward hooks with difficulty due to the coherent nature of wave particle interaction across the equator. With the VHS code we now confine the interaction region to be the region upstream from the equator, where inhomogeneity factor S is positive. This suppresses correlated wave particle interaction effects across the equator and the tendency of the code to trigger risers, and permits the formation of a proper falling tone generation region. The VHS code now easily and reproducibly triggers falling tones. The evolution of resonant particle current JE in space and time shows a generation point at -5224 km and the wavefield undergoes amplification of some 25 dB in traversing the nonlinear generation region. The current component parallel to wave magnetic field (JB) is positive, whereas it is negative for risers. The resonant particle trap shows an enhanced distribution function or `hill', whereas risers have a `hole'. According to recent theory (Omura et al., 2008, 2009) sweeping frequency is due primarily to the advective term. The nonlinear frequency shift term is now negative (˜-12 Hz) and the sweep rate of -800 Hz/s is approximately nonlinear frequency shift divided by TN, the transition time, of the order of a trapping time.

  8. Modeling of dynamic effects of a low power laser beam

    NASA Technical Reports Server (NTRS)

    Lawrence, George N.; Scholl, Marija S.; Khatib, AL

    1988-01-01

    Methods of modeling some of the dynamic effects involved in laser beam propagation through the atmosphere are addressed with emphasis on the development of simple but accurate models which are readily implemented in a physical optics code. A space relay system with a ground based laser facility is considered as an example. The modeling of such characteristic phenomena as laser output distribution, flat and curved mirrors, diffraction propagation, atmospheric effects (aberration and wind shear), adaptive mirrors, jitter, and time integration of power on target, is discussed.

  9. Squeezed Back-to-Back Correlation of $D^0\bar{D}^0$ in Relativistic Heavy-Ion Collisions

    NASA Astrophysics Data System (ADS)

    Yang, Ai-Geng; Zhang, Yong; Cheng, Luan; Sun, Hao; Zhang, Wei-Ning

    2018-05-01

    We investigate the squeezed back-to-back correlation (BBC) of $D^0\bar{D}^0$ in relativistic heavy-ion collisions, using the in-medium mass modification calculated with a self-energy in hot pion gas and the source space-time distributions provided by the viscous hydrodynamic code VISH2+1. It is found that the BBC of $D^0\bar{D}^0$ is significant in peripheral Au+Au collisions at the RHIC energy. A possible way to detect the BBC in experiment is presented.

  10. Analysis of Delays in Transmitting Time Code Using an Automated Computer Time Distribution System

    DTIC Science & Technology

    1999-12-01

    jlevine@clock.bldrdoc.gov Abstract: An automated computer time distribution system broadcasts standard time to users using computers and modems via... contributed to delays - software platform (50% of the delay), transmission speed of time codes (25%), telephone network (15%), modem and others (10%). The... modems, and telephone lines. Users dial the ACTS server to receive time traceable to the national time scale of Singapore, UTC(PSB).

  11. High-Order Local Pooling and Encoding Gaussians Over a Dictionary of Gaussians.

    PubMed

    Li, Peihua; Zeng, Hui; Wang, Qilong; Shiu, Simon C K; Zhang, Lei

    2017-07-01

    Local pooling (LP) in configuration (feature) space proposed by Boureau et al. explicitly restricts similar features to be aggregated, which can preserve as much discriminative information as possible. At the time it appeared, this method combined with sparse coding achieved competitive classification results with only a small dictionary. However, its performance lags far behind the state-of-the-art results as only the zero-order information is exploited. Inspired by the success of high-order statistical information in existing advanced feature coding or pooling methods, we make an attempt to address the limitation of LP. To this end, we present a novel method called high-order LP (HO-LP) to leverage the information higher than the zero-order one. Our idea is intuitively simple: we compute the first- and second-order statistics per configuration bin and model them as a Gaussian. Accordingly, we employ a collection of Gaussians as visual words to represent the universal probability distribution of features from all classes. Our problem is naturally formulated as encoding Gaussians over a dictionary of Gaussians as visual words. This problem, however, is challenging since the space of Gaussians is not a Euclidean space but forms a Riemannian manifold. We address this challenge by mapping Gaussians into the Euclidean space, which enables us to perform coding with common Euclidean operations rather than complex and often expensive Riemannian operations. Our HO-LP preserves the advantages of the original LP: pooling only similar features and using a small dictionary. Meanwhile, it achieves very promising performance on standard benchmarks, with either conventional, hand-engineered features or deep learning-based features.
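
    A minimal sketch of the per-bin Gaussian modeling step follows, together with one standard way of mapping a Gaussian into Euclidean space (the matrix logarithm of the augmented covariance); the exact embedding used by HO-LP may differ:

```python
# Sketch: model local features in a configuration bin as a Gaussian
# (mean + covariance) and map it into Euclidean space with a matrix
# logarithm so ordinary Euclidean coding operations apply.
# The exact embedding in HO-LP may differ; this is illustrative.
import numpy as np
from scipy.linalg import logm

def gaussian_embedding(features, eps=1e-6):
    """features: (n, d) local descriptors assigned to one bin."""
    mu = features.mean(axis=0)
    sigma = np.cov(features, rowvar=False) + eps * np.eye(features.shape[1])
    # Embed N(mu, sigma) via the log of the (d+1)x(d+1) augmented covariance.
    P = np.block([[sigma + np.outer(mu, mu), mu[:, None]],
                  [mu[None, :],              np.ones((1, 1))]])
    L = logm(P).real
    iu = np.triu_indices_from(L)
    return L[iu]                       # vectorized symmetric matrix

rng = np.random.default_rng(0)
emb = gaussian_embedding(rng.normal(size=(200, 8)))
print(emb.shape)                       # (d+1)(d+2)/2 = 45 Euclidean coordinates
```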

  12. Three dimensional δf simulations of beams in the SSC

    NASA Astrophysics Data System (ADS)

    Koga, J.; Tajima, T.; Machida, S.

    1993-12-01

    A three dimensional δf strong-strong algorithm has been developed and applied to the study of effects such as space charge and beam-beam interaction phenomena in the Superconducting Super Collider (SSC). The algorithm is obtained by merging the particle tracking code Simpsons, used for three-dimensional space charge effects, with a δf code. The δf method is used to follow the evolution of the non-Gaussian part of the beam distribution. The advantages of this method are twofold. First, the Simpsons code utilizes a realistic accelerator model including synchrotron oscillations and energy ramping in 6-dimensional phase space, with the electromagnetic fields of the beams calculated using a realistic three-dimensional field solver. Second, the beams evolve in the fully self-consistent strong-strong sense, with finite-particle fluctuation noise greatly reduced, as opposed to weak-strong models where one beam is fixed.

  13. Experimental identification of closely spaced modes using NExT-ERA

    NASA Astrophysics Data System (ADS)

    Hosseini Kordkheili, S. A.; Momeni Massouleh, S. H.; Hajirezayi, S.; Bahai, H.

    2018-01-01

    This article presents a study on the capability of the time-domain OMA method NExT-ERA to identify closely spaced structural dynamic modes. A survey of the literature reveals that few experimental studies have been conducted on the effectiveness of the NExT-ERA methodology specifically for closely spaced modes. In this paper we present the formulation of NExT-ERA. This formulation is then implemented in an algorithm and a code, developed in-house, to identify the modal parameters of different systems using their generated time history data. Numerical models are first investigated to validate the code. Two case studies are presented, involving a plate with closely spaced modes and a pulley ring with a greater degree of closeness in repeated modes. Both structures are excited by random impulses under laboratory conditions. The resulting time-response acceleration data are then used as input to the developed code to extract the modal parameters of the structures. The accuracy of the results is checked against those obtained from experimental tests.
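
    A compact sketch of the NExT-ERA pipeline on synthetic data with two closely spaced modes (all parameters illustrative, not taken from the experiments in the record):

```python
# Sketch of the NExT-ERA pipeline on synthetic data. NExT treats output
# correlations under broadband excitation as free-decay functions; ERA then
# fits a discrete state-space model to them via a Hankel-matrix SVD.
import numpy as np
from scipy.signal import correlate

dt, n = 0.01, 20000
rng = np.random.default_rng(0)
t = dt * np.arange(n)
# Synthetic "measurement": two closely spaced decaying modes plus noise.
y = (np.exp(-0.2 * t) * np.sin(2 * np.pi * 5.0 * t)
     + np.exp(-0.25 * t) * np.sin(2 * np.pi * 5.4 * t)
     + 0.02 * rng.normal(size=n))

# NExT: correlation of the response, kept for positive lags.
m = 600
r = correlate(y, y, mode="full", method="fft")[n - 1:n - 1 + m]

# ERA: SVD of shifted Hankel matrices built from the correlation sequence.
rows = cols = 200
H0 = np.array([r[i:i + cols] for i in range(rows)])
H1 = np.array([r[i + 1:i + 1 + cols] for i in range(rows)])
U, s, Vt = np.linalg.svd(H0)
k = 4                                          # model order: two modes
S = np.diag(1.0 / np.sqrt(s[:k]))
A = S @ U[:, :k].T @ H1 @ Vt[:k].T @ S         # discrete system matrix

lam = np.log(np.linalg.eigvals(A)) / dt        # continuous-time poles
freqs = np.abs(lam.imag) / (2 * np.pi)
print(np.unique(freqs.round(2)))               # approximately 5.0 and 5.4 Hz
```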

  14. A unified radiative magnetohydrodynamics code for lightning-like discharge simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Qiang, E-mail: cq0405@126.com; Chen, Bin, E-mail: emcchen@163.com; Xiong, Run

    2014-03-15

    A two-dimensional Eulerian finite difference code is developed for solving the non-ideal magnetohydrodynamic (MHD) equations including the effects of self-consistent magnetic field, thermal conduction, resistivity, gravity, and radiation transfer, which, when combined with specified pulse current models and plasma equations of state, can be used as a unified lightning return stroke solver. The differential equations are written in covariant form in cylindrical geometry and kept in conservative form, which enables high-accuracy shock-capturing schemes to be applied naturally in the lightning channel configuration. In this code, the fifth-order weighted essentially non-oscillatory (WENO) scheme combined with the Lax-Friedrichs flux splitting method is introduced for computing the convection terms of the MHD equations. The third-order total variation diminishing (TVD) Runge-Kutta integral operator is also employed to keep the time-space accuracy consistent. The numerical algorithms for non-ideal terms, e.g., artificial viscosity, resistivity, and thermal conduction, are introduced in the code via an operator splitting method. This code assumes the radiation is in local thermodynamic equilibrium with the plasma components, and the flux-limited diffusion algorithm with grey opacities is implemented for computing the radiation transfer. The transport coefficients and equation of state in this code are obtained from detailed particle population distribution calculations, which makes the numerical model self-consistent. The code is systematically validated via the Sedov blast solutions and then used for lightning return stroke simulations with peak currents of 20 kA, 30 kA, and 40 kA, respectively. The results show that this numerical model is consistent with observations and previous numerical results. Population distribution evolution and energy conservation are also discussed.
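
    The convection-term treatment named in the record, Lax-Friedrichs flux splitting advanced with a third-order TVD Runge-Kutta integrator, can be sketched in one dimension; first-order upwind reconstruction stands in for WENO5 here for brevity:

```python
# Sketch: Lax-Friedrichs flux splitting with a 3rd-order TVD Runge-Kutta
# integrator on 1-D linear advection u_t + f(u)_x = 0, f(u) = a u.
# First-order upwind reconstruction replaces WENO5 for brevity.
import numpy as np

nx, a, cfl = 200, 1.0, 0.5
dx = 1.0 / nx
x = dx * (np.arange(nx) + 0.5)
u = np.exp(-100 * (x - 0.5) ** 2)
dt = cfl * dx / abs(a)

def rhs(u):
    alpha = abs(a)                       # global LF splitting speed
    fp = 0.5 * (a * u + alpha * u)       # f+ : right-going part
    fm = 0.5 * (a * u - alpha * u)       # f- : left-going part
    # Interface flux at i+1/2: upwind f+ from the left, f- from the right.
    f_int = fp + np.roll(fm, -1)
    return -(f_int - np.roll(f_int, 1)) / dx

for _ in range(200):                     # TVD RK3 (Shu-Osher)
    u1 = u + dt * rhs(u)
    u2 = 0.75 * u + 0.25 * (u1 + dt * rhs(u1))
    u  = u / 3.0 + 2.0 / 3.0 * (u2 + dt * rhs(u2))

print(u.max())                           # pulse advected (and diffused) to t = 0.5
```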

  15. Analysis of solar receiver flux distributions for US/Russian solar dynamic system demonstration on the MIR Space Station

    NASA Technical Reports Server (NTRS)

    Kerslake, Thomas W.; Fincannon, James

    1995-01-01

    The United States and Russia have agreed to jointly develop a solar dynamic (SD) system for flight demonstration on the Russian MIR space station starting in late 1997. Two important components of this SD system are the solar concentrator and heat receiver provided by Russia and the U.S., respectively. This paper describes optical analysis of the concentrator and solar flux predictions on target receiver surfaces. The optical analysis is performed using the code CIRCE2. These analyses account for finite sun size with limb darkening, concentrator surface slope and position errors, concentrator petal thermal deformation, gaps between petals, and the shading effect of the receiver support struts. The receiver spatial flux distributions are then combined with concentrator shadowing predictions. Geometric shadowing patterns are traced from the concentrator to the target receiver surfaces. These patterns vary with time depending on the chosen MIR flight attitude and orbital mechanics of the MIR spacecraft. The resulting predictions provide spatial and temporal receiver flux distributions for any specified mission profile. The impact these flux distributions have on receiver design and control of the Brayton engine are discussed.

  16. Optimal patch code design via device characterization

    NASA Astrophysics Data System (ADS)

    Wu, Wencheng; Dalal, Edul N.

    2012-01-01

    In many color measurement applications, such as those for color calibration and profiling, a "patch code" has been used successfully for job identification and automation to reduce operator errors. A patch code is similar to a barcode, but is intended primarily for use in measurement devices that cannot read barcodes due to limited spatial resolution, such as spectrophotometers. There is an inherent tradeoff between decoding robustness and the number of code levels available for encoding. Previous methods have attempted to address this tradeoff, but those solutions have been sub-optimal. In this paper, we propose a method to design optimal patch codes via device characterization. The tradeoff between decoding robustness and the number of available code levels is optimized in terms of printing and measurement effort, and decoding robustness against noise from the printing and measurement devices. Effort is drastically reduced relative to previous methods because print-and-measure is minimized through modeling and the use of existing printer profiles. Decoding robustness is improved by distributing the code levels in CIE Lab space rather than in CMYK space.

  17. The orbital PDF: general inference of the gravitational potential from steady-state tracers

    NASA Astrophysics Data System (ADS)

    Han, Jiaxin; Wang, Wenting; Cole, Shaun; Frenk, Carlos S.

    2016-02-01

    We develop two general methods to infer the gravitational potential of a system using steady-state tracers, i.e. tracers with a time-independent phase-space distribution. Combined with the phase-space continuity equation, the time independence implies a universal orbital probability density function (oPDF) dP(λ|orbit) ∝ dt, where λ is the coordinate of the particle along the orbit. The oPDF is equivalent to Jeans theorem and is the key physical ingredient behind most dynamical modelling of steady-state tracers. In the case of a spherical potential, we develop a likelihood estimator that fits analytical potentials to the system and a non-parametric method (`phase-mark') that reconstructs the potential profile, both assuming only the oPDF. The methods involve no extra assumptions about the tracer distribution function and can be applied to tracers with any arbitrary distribution of orbits, with possible extension to non-spherical potentials. The methods are tested on Monte Carlo samples of steady-state tracers in dark matter haloes and shown to be unbiased as well as efficient. A fully documented C/Python code implementing our method is freely available at a GitHub repository linked from http://icc.dur.ac.uk/data/#oPDF.
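
    The oPDF itself, dP(λ|orbit) ∝ dt, reduces to an explicit density for a radial orbit in a spherical potential: the time spent in [r, r+dr] is dr/|v_r|. A minimal sketch with a hypothetical point-mass potential (not the authors' code):

```python
# Sketch: oPDF weighting dP ∝ dt along an orbit. For a radial orbit of
# energy E in a spherical potential Phi(r), the time spent in [r, r+dr]
# is dt = dr / |v_r|, with v_r = sqrt(2 (E - Phi(r))).
# Hypothetical point-mass potential; illustrative only.
import numpy as np

GM = 1.0
phi = lambda r: -GM / r
E = -0.5                                  # bound orbit, apocenter at r = 2
r = np.linspace(1e-3, 2.0 - 1e-6, 10000)
vr = np.sqrt(2.0 * (E - phi(r)))

p = 1.0 / vr                              # un-normalized oPDF: dP/dr ∝ 1/|v_r|
p /= np.trapz(p, r)                       # normalize to a probability density

print(np.trapz(p, r))                     # 1.0; the density peaks near apocenter
```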

  18. Simulation of Galactic Cosmic Rays and Dose-Rate Effects in RITRACKS

    NASA Technical Reports Server (NTRS)

    Plante, Ianik; Ponomarev, Artem; Slaba, Tony; Blattnig, Steve; Hada, Megumi

    2017-01-01

    The NASA Space Radiation Laboratory (NSRL) facility has been used successfully for many years to generate ion beams for radiation research experiments by NASA investigators. Recently, modifications were made to the beam lines to allow rapid switching between different types of ions and energies, with the aim of simulating the Galactic Cosmic Ray (GCR) environment. As this will be a focus of space radiation research in upcoming years, the stochastic radiation track structure code RITRACKS (Relativistic Ion Tracks) was modified to simulate beams of various ion types and energies during time intervals specified by the user at the microscopic and nanoscopic scales. For example, particle distributions of a mixed 344.1-MeV proton (18.04 cGy) and 950-MeV/n iron (5.64 cGy) beam behind a 20 g/cm^2 aluminum shield followed by a 10 g/cm^2 polyethylene shield, as calculated by the code GEANT4, were used as an input field in RITRACKS. Similarly, modifications were also made to simulate a realistic radiation environment in a spacecraft exposed to GCR by sampling the ion types and energies from particle spectra pre-calculated by the code HZETRN. The newly implemented features allow RITRACKS to generate time-dependent differential and cumulative 3D dose voxel maps. These new capabilities of RITRACKS will be used to investigate dose-rate effects and synergistic interactions of various types of radiation for many end points at the microscopic and nanoscopic scales, such as DNA damage and chromosome aberrations.

  19. Varying impacts of alcohol outlet densities on violent assaults: explaining differences across neighborhoods.

    PubMed

    Mair, Christina; Gruenewald, Paul J; Ponicki, William R; Remer, Lillian

    2013-01-01

    Groups of potentially violent drinkers may frequent areas of communities with large numbers of alcohol outlets, especially bars, leading to greater rates of alcohol-related assaults. This study assessed direct and moderating effects of bar densities on assaults across neighborhoods. We analyzed longitudinal population data relating alcohol outlet densities (total outlet density, proportion bars/pubs, proportion off-premise outlets) to hospitalizations for assault injuries in California across residential ZIP code areas from 1995 through 2008 (23,213 space-time units). Because few ZIP codes were consistently defined over 14 years and these units are not independent, corrections for unit misalignment and spatial autocorrelation were implemented using Bayesian space-time conditional autoregressive models. Assaults were related to outlet densities in local and surrounding areas, the mix of outlet types, and neighborhood characteristics. The addition of one outlet per square mile was related to a small 0.23% increase in assaults. A 10% greater proportion of bars in a ZIP code was related to 7.5% greater assaults, whereas a 10% greater proportion of bars in surrounding areas was related to 6.2% greater assaults. The impacts of bars were much greater in areas with low incomes and dense populations. The effect of bar density on assault injuries was well supported and positive, and the magnitude of the effect varied by neighborhood characteristics. Posterior distributions from these models enabled the identification of locations most vulnerable to problems related to alcohol outlets.

  20. The cleanroom case study in the Software Engineering Laboratory: Project description and early analysis

    NASA Technical Reports Server (NTRS)

    Green, Scott; Kouchakdjian, Ara; Basili, Victor; Weidow, David

    1990-01-01

    This case study analyzes the application of the cleanroom software development methodology to the development of production software at the NASA/Goddard Space Flight Center. The cleanroom methodology emphasizes human discipline in program verification to produce reliable software products that are right the first time. Preliminary analysis of the cleanroom case study shows that the method can be applied successfully in the FDD environment and may increase staff productivity and product quality. Compared to typical Software Engineering Laboratory (SEL) activities, there is evidence of lower failure rates, a more complete and consistent set of inline code documentation, a different distribution of phase effort activity, and a different growth profile in terms of lines of code developed. The major goals of the study were to: (1) assess the process used in the SEL cleanroom model with respect to team structure, team activities, and effort distribution; (2) analyze the products of the SEL cleanroom model and determine the impact on measures of interest, including reliability, productivity, overall life-cycle cost, and software quality; and (3) analyze the residual products in the application of the SEL cleanroom model, such as fault distribution, error characteristics, system growth, and computer usage.

  1. Development of a Grid-Based Gyro-Kinetic Simulation Code

    NASA Astrophysics Data System (ADS)

    Lapillonne, Xavier; Brunetti, Maura; Tran, Trach-Minh; Brunner, Stephan

    2006-10-01

    A grid-based semi-Lagrangian code using cubic spline interpolation is being developed at CRPP for solving the electrostatic drift-kinetic equations [M. Brunetti et al., Comp. Phys. Comm. 163, 1 (2004)] in a cylindrical system. This 4-dimensional code, CYGNE, is part of a project with the long-term aim of studying microturbulence in toroidal fusion devices, in the more general frame of gyro-kinetic equations. Toward their non-linear phase, the simulations from this code are subject to significant overshoot problems, reflected in the development of regions where the distribution function takes negative values, which leads to poor energy conservation. This has motivated the study of alternative schemes. On the one hand, new time integration algorithms are considered in the semi-Lagrangian frame. On the other hand, fully Eulerian schemes, which separate time and space discretization (method of lines), are investigated. In particular, the Essentially Non-Oscillatory (ENO) approach, constructed so as to minimize the overshoot problem, has been considered. All these methods have first been tested in the simpler case of the 2-dimensional guiding-center model for the Kelvin-Helmholtz instability, which makes it possible to address the specific issue of the E×B drift also met in the more complex gyrokinetic-type equations. Based on these preliminary studies, the most promising methods are being implemented and tested in CYGNE.
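
    A one-dimensional sketch of the semi-Lagrangian step with cubic spline interpolation (trace characteristics back, interpolate the old field); this illustrates the scheme, not CYGNE itself:

```python
# Sketch of one semi-Lagrangian step for 1-D advection f_t + a f_x = 0:
# trace characteristics back by a*dt and interpolate the old field with
# cubic splines on a periodic domain. Illustrative of the scheme only.
import numpy as np
from scipy.interpolate import CubicSpline

nx, a, dt = 128, 1.0, 0.01
x = np.linspace(0.0, 1.0, nx, endpoint=False)
f = np.exp(-200 * (x - 0.5) ** 2)

def semi_lagrangian_step(f):
    spline = CubicSpline(np.append(x, 1.0), np.append(f, f[0]),
                         bc_type="periodic")
    x_dep = (x - a * dt) % 1.0            # departure points of characteristics
    return spline(x_dep)

for _ in range(50):
    f = semi_lagrangian_step(f)

print(f.min())   # overshoot shows up as small negative values of f
```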

  2. Social Media: New Spaces for Contention In Authoritarian Systems

    DTIC Science & Technology

    2015-12-01

    ...restrictions and controls. Social media became the means by which protesters mobilized, as opposed to the traditional manner of word of mouth or... What role has social media played in Bahraini political movements?... Approved for public release; distribution is unlimited.

  3. Development of high performance particle in cell code for the exascale age

    NASA Astrophysics Data System (ADS)

    Lapenta, Giovanni; Amaya, Jorge; Gonzalez, Diego; Deep-Est H2020 Consortium Collaboration

    2017-10-01

    Magnetized plasmas are most effectively described by magnetohydrodynamics (MHD), a fluid theory based on fields defined in space: the electromagnetic fields and the density, velocity, and temperature of the plasma. Microphysics processes, however, need kinetic theory, where statistical distributions of particles are governed by the Boltzmann equation. While fluid models are based on ordinary space and time, kinetic models require a six-dimensional space, called phase space, besides time. The two methods are not separate but rather interact to determine the system evolution. Arriving at a single self-consistent model is the goal of our research. We present a new approach developed with the goal of extending the reach of kinetic models to the fluid scales. Kinetic models are a higher-order description, and all fluid effects are included in them. However, the cost in terms of computing power is much higher, and it has so far been prohibitively expensive to treat space weather events fully kinetically. We have now designed a new method capable of reducing that cost by several orders of magnitude, making it possible for kinetic models to study macroscopic systems. H2020 Deep-EST consortium (European Commission).

  4. Transformations based on continuous piecewise-affine velocity fields

    DOE PAGES

    Freifeld, Oren; Hauberg, Soren; Batmanghelich, Kayhan; ...

    2017-01-11

    Here, we propose novel finite-dimensional spaces of well-behaved Rn → Rn transformations. The latter are obtained by (fast and highly-accurate) integration of continuous piecewise-affine velocity fields. The proposed method is simple yet highly expressive, effortlessly handles optional constraints (e.g., volume preservation and/or boundary conditions), and supports convenient modeling choices such as smoothing priors and coarse-to-fine analysis. Importantly, the proposed approach, partly due to its rapid likelihood evaluations and partly due to its other properties, facilitates tractable inference over rich transformation spaces, including using Markov-Chain Monte-Carlo methods. Its applications include, but are not limited to: monotonic regression (more generally, optimization over monotonic functions); modeling cumulative distribution functions or histograms; time-warping; image warping; image registration; real-time diffeomorphic image editing; data augmentation for image classifiers. Our GPU-based code is publicly available.

  5. Transformations Based on Continuous Piecewise-Affine Velocity Fields

    PubMed Central

    Freifeld, Oren; Hauberg, Søren; Batmanghelich, Kayhan; Fisher, John W.

    2018-01-01

    We propose novel finite-dimensional spaces of well-behaved ℝn → ℝn transformations. The latter are obtained by (fast and highly-accurate) integration of continuous piecewise-affine velocity fields. The proposed method is simple yet highly expressive, effortlessly handles optional constraints (e.g., volume preservation and/or boundary conditions), and supports convenient modeling choices such as smoothing priors and coarse-to-fine analysis. Importantly, the proposed approach, partly due to its rapid likelihood evaluations and partly due to its other properties, facilitates tractable inference over rich transformation spaces, including using Markov-Chain Monte-Carlo methods. Its applications include, but are not limited to: monotonic regression (more generally, optimization over monotonic functions); modeling cumulative distribution functions or histograms; time-warping; image warping; image registration; real-time diffeomorphic image editing; data augmentation for image classifiers. Our GPU-based code is publicly available. PMID:28092517
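
    The core construction in this record and the previous one, a transformation obtained by integrating a continuous piecewise-affine velocity field, can be sketched in one dimension with a generic ODE solver (the authors' CPAB method integrates each affine cell in closed form, which this sketch does not attempt):

```python
# Sketch: a 1-D transformation obtained by integrating a continuous
# piecewise-affine velocity field v(x) (two cells on [0, 1]). A generic
# ODE solver suffices to illustrate the idea.
import numpy as np
from scipy.integrate import solve_ivp

# Continuous piecewise-affine v(x): v(0)=0, v(0.5)=0.4, v(1)=0 (zero boundary).
knots_x = np.array([0.0, 0.5, 1.0])
knots_v = np.array([0.0, 0.4, 0.0])
v = lambda t, x: np.interp(x, knots_x, knots_v)

def transform(x0, T=1.0):
    """phi(x0): flow of dx/dt = v(x) from t = 0 to t = T."""
    sol = solve_ivp(v, (0.0, T), np.atleast_1d(x0))
    return sol.y[:, -1]

x = np.linspace(0, 1, 5)
print(transform(x))   # a monotone warp of [0, 1]; the endpoints stay fixed
```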

  6. Distributed computing environments for future space control systems

    NASA Technical Reports Server (NTRS)

    Viallefont, Pierre

    1993-01-01

    The aim of this paper is to present the results of a CNES research project on distributed computing systems. The purpose of this research was to study the impact of the use of new computer technologies in the design and development of future space applications. The first part of this study was a state-of-the-art review of distributed computing systems. One of the interesting ideas arising from this review is the concept of a 'virtual computer' allowing the distributed hardware architecture to be hidden from a software application. The 'virtual computer' can improve system performance by adapting the best architecture (addition of computers) to the software application without having to modify its source code. This concept can also decrease the cost and obsolescence of the hardware architecture. In order to verify the feasibility of the 'virtual computer' concept, a prototype representative of a distributed space application is being developed independently of the hardware architecture.

  7. Space Communications Emulation Facility

    NASA Technical Reports Server (NTRS)

    Hill, Chante A.

    2004-01-01

    Establishing space communication between ground facilities and satellites is a painstaking task that requires many precise calculations dealing with relay time, atmospheric conditions, and satellite positions, to name a few. The Space Communications Emulation Facility (SCEF) team at NASA is developing a facility that will approximately emulate the conditions in space that impact space communication. The emulation facility is comprised of a 32-node distributed cluster of computers, each node representing a satellite or ground station. The objective of the satellites is to observe the topography of the Earth (water, vegetation, land, and ice) and relay this information back to the ground stations. Software originally designed by the University of Kansas, called the Emulation Manager, controls the interaction of the satellites and ground stations, as well as the recording of data. The Emulation Manager is installed on a Linux operating system and employs both Java and C++ code. The emulation scenarios are written in eXtensible Markup Language (XML). XML documents are designed to store, carry, and exchange data. With XML, data can be exchanged between incompatible systems, which makes it ideal for this project because Linux, Mac, and Windows operating systems are all used. XML documents, however, cannot display data like HTML documents. Therefore, the SCEF team uses an XML Schema Definition (XSD), or simply a schema, to describe the structure of an XML document. Schemas are important because they can validate the correctness of data, define restrictions on data, define data formats, and convert data between different data types, among other things. At this time, in order for the Emulation Manager to open and run an XML emulation scenario file, the user must first establish a link between the schema file and the directory under which the XML scenario files are saved. This procedure takes place on the command line of the Linux operating system. Once this link has been established, the Emulation Manager validates all the XML files in that directory against the schema file before the actual scenario is run. Using sophisticated commercial software called the Satellite Tool Kit (STK) installed on the Linux box, the Emulation Manager is able to display the data and graphics generated by the execution of an XML emulation scenario file. The Emulation Manager software is written in Java. Since the SCEF project is in the developmental stage, the source code for this software is being modified to better fit the requirements of the SCEF project. Some parameters for the emulation are hard-coded, set at fixed values. Members of the SCEF team are altering the code to allow the user to choose the values of these hard-coded parameters by adding a toolbar to the preexisting GUI.
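
    The schema-validation step described above is straightforward to illustrate. A hedged sketch using Python's lxml library (file names hypothetical; the Emulation Manager itself is written in Java):

```python
# Sketch: validating an XML emulation-scenario file against an XSD schema,
# analogous to what the Emulation Manager does before running a scenario.
# File names are hypothetical; lxml is one common way to do this in Python.
from lxml import etree

schema = etree.XMLSchema(etree.parse("scenario.xsd"))
doc = etree.parse("scenario.xml")

if schema.validate(doc):
    print("scenario.xml is valid; safe to run")
else:
    # The error log reports each violation of the schema's structure rules.
    for err in schema.error_log:
        print(f"line {err.line}: {err.message}")
```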

  8. Practical Implementation of Prestack Kirchhoff Time Migration on a General Purpose Graphics Processing Unit

    NASA Astrophysics Data System (ADS)

    Liu, Guofeng; Li, Chun

    2016-08-01

    In this study, we present a practical implementation of prestack Kirchhoff time migration (PSTM) on a general purpose graphics processing unit. First, we consider the three main optimizations of the PSTM GPU code: designing a reasonable execution configuration, using texture memory for velocity interpolation, and applying an intrinsic function in device code. This approach can achieve a speedup of nearly 45 times on an NVIDIA GTX 680 GPU compared with CPU code when a larger imaging space is used, where the PSTM output is a common reflection point gather stored as I[nx][ny][nh][nt] in matrix format. However, this method requires more memory, so the limited imaging space cannot fully exploit the GPU resources. To overcome this problem, we designed a PSTM scheme with multiple GPUs that images different seismic data on different GPUs using an offset value. This process achieves the peak speedup of the GPU PSTM code and greatly increases the efficiency of the calculations without changing the imaging result.

  9. The Numerical Electromagnetics Code (NEC) - A Brief History

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burke, G J; Miller, E K; Poggio, A J

    The Numerical Electromagnetics Code, NEC as it is commonly known, continues to be one of the more widely used antenna modeling codes in existence. With several versions in use that reflect different levels of capability and availability, there are now 450 copies of NEC4 and 250 copies of NEC3 that have been distributed by Lawrence Livermore National Laboratory to a limited class of qualified recipients, and several hundred copies of NEC2 that had a recorded distribution by LLNL. These numbers do not account for numerous copies (perhaps 1000s) that were acquired through other means, capitalizing on the open source code, the absence of distribution controls prior to NEC3, and the availability of versions on the Internet. In this paper we briefly review the history of the code that is concisely displayed in Figure 1. We will show how it capitalized on the research of prominent contributors in the early days of computational electromagnetics, how a combination of events led to the tri-service-supported code development program that ultimately led to NEC, and how it evolved to the present day product. The authors apologize that space limitations do not allow us to provide a list of references or to acknowledge the numerous contributors to the code, both of which can be found in the code documents.

  10. Validation of the technique for absolute total electron content and differential code biases estimation

    NASA Astrophysics Data System (ADS)

    Mylnikova, Anna; Yasyukevich, Yury; Yasyukevich, Anna

    2017-04-01

    We have developed a technique for vertical total electron content (TEC) and differential code bias (DCB) estimation using data from a single GPS/GLONASS station. The algorithm is based on TEC expansion into a Taylor series in space and time (TayAbsTEC). We validate the technique using Global Ionospheric Maps (GIM) computed by the Center for Orbit Determination in Europe (CODE) and the Jet Propulsion Laboratory (JPL). We compared differences between absolute vertical TEC (VTEC) from GIM and VTEC evaluated by TayAbsTEC for 2009 (solar activity minimum, sunspot number about 0) and for 2014 (solar activity maximum, sunspot number 110). Since there is a difference between VTEC from CODE and VTEC from JPL, we compare TayAbsTEC VTEC with both of them. We found that TayAbsTEC VTEC is closer to CODE VTEC than to JPL VTEC. The difference between TayAbsTEC VTEC and GIM VTEC is more noticeable for solar activity maximum (2014) than for solar activity minimum (2009) for both CODE and JPL. The distribution of VTEC differences is close to a Gaussian distribution, so we conclude that the results of TayAbsTEC are in agreement with GIM VTEC. We also compared DCBs evaluated by TayAbsTEC with DCBs from GIM computed by CODE. The TayAbsTEC DCBs are in good agreement with CODE DCBs for GPS satellites, but differ noticeably for GLONASS. We used the DCBs to correct slant TEC to find out which DCBs give better results. Slant TEC correction with CODE DCBs produces negative, nonphysical TEC values; slant TEC correction with TayAbsTEC DCBs does not produce such artifacts. The technique we developed is used for VTEC and DCB calculation given only data from local GPS/GLONASS networks. The evaluated VTEC data are in the GIM framework, which is convenient for various data analyses.
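
    The core of the TayAbsTEC idea, fitting VTEC as a low-order Taylor expansion in space and time around a station, can be sketched with an ordinary least-squares fit (synthetic coefficients and data, not the authors' algorithm):

```python
# Sketch: least-squares fit of vertical TEC as a low-order Taylor expansion
# in space and time around a station, the core idea behind TayAbsTEC.
# Coefficients and data here are synthetic; illustrative only.
import numpy as np

rng = np.random.default_rng(2)
n = 500
dlat, dlon, dt = rng.uniform(-1, 1, (3, n))        # pierce-point offsets
vtec_true = 20 + 3*dlat - 2*dlon + 1.5*dt + 0.8*dlat*dlon
obs = vtec_true + rng.normal(0, 0.3, n)            # noisy observations

# Design matrix for a first-order expansion plus one cross term.
A = np.column_stack([np.ones(n), dlat, dlon, dt, dlat*dlon])
coef, *_ = np.linalg.lstsq(A, obs, rcond=None)
print(coef.round(2))    # ~[20, 3, -2, 1.5, 0.8]; coef[0] is VTEC at the station
```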

  11. Computer aided radiation analysis for manned spacecraft

    NASA Technical Reports Server (NTRS)

    Appleby, Matthew H.; Griffin, Brand N.; Tanner, Ernest R., II; Pogue, William R.; Golightly, Michael J.

    1991-01-01

    In order to assist in the design of radiation shielding, an analytical tool is presented that can be employed in combination with CAD facilities and NASA transport codes. The nature of radiation in space is described, and the operational requirements for protection are listed as background information for the use of the technique. The method is based on the Boeing radiation exposure model (BREM) for combining NASA radiation transport codes and CAD facilities, and the output is given as contour maps of the radiation-shield distribution so that dangerous areas can be identified. Computational models are used to solve the 1D Boltzmann transport equation and determine the shielding needs for the worst-case scenario. BREM can be employed directly with the radiation computations to assess radiation protection during all phases of design, which saves time and ultimately spacecraft weight.

  12. Development of a calculation method for estimating specific energy distribution in complex radiation fields.

    PubMed

    Sato, Tatsuhiko; Watanabe, Ritsuko; Niita, Koji

    2006-01-01

    Estimation of the specific energy distribution in a human body exposed to complex radiation fields is of great importance in the planning of long-term space missions and heavy ion cancer therapies. With the aim of developing a tool for this estimation, the specific energy distributions in liquid water around the tracks of several HZE particles with energies up to 100 GeV/n were calculated by performing track structure simulation with the Monte Carlo technique. In the simulation, the targets were assumed to be spherical sites with diameters from 1 nm to 1 µm. An analytical function to reproduce the simulation results was developed in order to predict the distributions of all kinds of heavy ions over a wide energy range. The incorporation of this function into the Particle and Heavy Ion Transport code System (PHITS) enables us to calculate the specific energy distributions in complex radiation fields in a short computational time.

  13. Probabilistic structural analysis of a truss typical for space station

    NASA Technical Reports Server (NTRS)

    Pai, Shantaram S.

    1990-01-01

    A three-bay, space, cantilever truss is probabilistically evaluated using the computer code NESSUS (Numerical Evaluation of Stochastic Structures Under Stress) to identify and quantify the uncertainties, and respective sensitivities, associated with corresponding uncertainties in the primitive variables (structural, material, and loads parameters) that define the truss. The distribution of each of these primitive variables is described in terms of one of several available distributions, such as the Weibull, exponential, normal, and log-normal. The cumulative distribution functions (CDFs) for the response functions considered, and the sensitivities associated with the primitive variables for a given response, are investigated. These sensitivities help in determining the dominant primitive variables for that response.

  14. 2nd-Order CESE Results For C1.4: Vortex Transport by Uniform Flow

    NASA Technical Reports Server (NTRS)

    Friedlander, David J.

    2015-01-01

    The Conservation Element and Solution Element (CESE) method was used as implemented in the NASA research code ez4d. The CESE method is a time-accurate formulation with flux conservation in both space and time. The method treats the discretized derivatives of space and time identically; while high-order versions exist, the 2nd-order accurate version was used here. The ez4d code is an unstructured Navier-Stokes solver coded in C++, with serial and parallel versions available. As part of its architecture, ez4d can utilize multi-threading and the Message Passing Interface (MPI) for parallel runs.

  15. Multiplexed phase-space imaging for 3D fluorescence microscopy.

    PubMed

    Liu, Hsiou-Yuan; Zhong, Jingshan; Waller, Laura

    2017-06-26

    Optical phase-space functions describe spatial and angular information simultaneously; examples of optical phase-space functions include light fields in ray optics and Wigner functions in wave optics. Measurement of phase-space enables digital refocusing, aberration removal and 3D reconstruction. High-resolution capture of 4D phase-space datasets is, however, challenging. Previous scanning approaches are slow, light inefficient and do not achieve diffraction-limited resolution. Here, we propose a multiplexed method that solves these problems. We use a spatial light modulator (SLM) in the pupil plane of a microscope in order to sequentially pattern multiplexed coded apertures while capturing images in real space. Then, we reconstruct the 3D fluorescence distribution of our sample by solving an inverse problem via regularized least squares with a proximal accelerated gradient descent solver. We experimentally reconstruct a 101 Megavoxel 3D volume (1010×510×500µm with NA 0.4), demonstrating improved acquisition time, light throughput and resolution compared to scanning aperture methods. Our flexible patterning scheme further allows sparsity in the sample to be exploited for reduced data capture.
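
    The reconstruction step named above, regularized least squares solved by a proximal accelerated gradient method, is sketched below on a small synthetic problem (FISTA with an l1 regularizer; all parameters illustrative):

```python
# Sketch: regularized least squares min_x 0.5||Ax - b||^2 + lam*||x||_1
# solved with a proximal accelerated gradient method (FISTA), the kind of
# solver named in the record. Small random problem; illustrative only.
import numpy as np

rng = np.random.default_rng(3)
m, n = 80, 200
A = rng.normal(size=(m, n))
x_true = np.zeros(n); x_true[rng.choice(n, 10, replace=False)] = 1.0
b = A @ x_true + 0.01 * rng.normal(size=m)

lam = 0.1
L = np.linalg.norm(A, 2) ** 2                 # Lipschitz constant of the gradient
soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

x = z = np.zeros(n); tk = 1.0
for _ in range(300):
    x_new = soft(z - (A.T @ (A @ z - b)) / L, lam / L)   # proximal gradient step
    tk_new = 0.5 * (1 + np.sqrt(1 + 4 * tk**2))
    z = x_new + ((tk - 1) / tk_new) * (x_new - x)        # Nesterov momentum
    x, tk = x_new, tk_new

print(np.count_nonzero(np.abs(x) > 1e-3))     # sparse recovery of ~10 spikes
```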

  16. The Use of a Pseudo Noise Code for DIAL Lidar

    NASA Technical Reports Server (NTRS)

    Burris, John F.

    2010-01-01

    Retrievals of CO2 profiles within the planetary boundary layer (PBL) are required to understand CO2 transport over regional scales and for validating future space-borne CO2 remote sensing instruments, such as the CO2 Laser Sounder for the ASCENDS mission. We report the use of a return-to-zero (RZ) pseudo noise (PN) code modulation technique for making range-resolved measurements of CO2 within the PBL using commercial, off-the-shelf components. Conventional range-resolved measurements require laser pulse widths that are shorter than the desired spatial resolution and pulse spacing such that returns from only a single pulse are observed by the receiver at one time (for the PBL, pulse separations must be greater than approximately 2000 m). This imposes a serious limitation when using available fiber lasers because of the resulting low duty cycle (less than 0.001) and consequent low average laser output power. RZ PN code modulation enables a fiber laser to operate at much higher duty cycles (approaching 0.1), thereby more effectively utilizing the amplifier's output. This results in an increase in received counts by approximately two orders of magnitude. The approach employs two back-to-back CW fiber amplifiers seeded at the appropriate on- and off-line CO2 wavelengths (approximately 1572 nm) using distributed feedback diode lasers modulated by a PN code at rates significantly above 1 megahertz. An assessment of the technique, discussions of measurement precision and error sources, as well as preliminary data will be presented.
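
    A rough Python sketch of the PN-code ranging idea, under simplifying assumptions (a 127-chip maximal-length sequence, a noiseless channel, and circular convolution standing in for the atmospheric return); the taps, code length, and toy range profile are illustrative choices, not values from the paper.

        import numpy as np

        def mls(nbits=7, taps=(7, 6)):
            """Maximal-length sequence from a Fibonacci LFSR (taps for x^7 + x^6 + 1)."""
            state = [1] * nbits
            out = []
            for _ in range(2 ** nbits - 1):
                out.append(state[-1])
                fb = state[taps[0] - 1] ^ state[taps[1] - 1]
                state = [fb] + state[:-1]
            return np.array(out)

        code = 2 * mls() - 1                                      # bipolar 127-chip PN code
        profile = np.zeros(127); profile[[10, 40]] = [1.0, 0.4]   # toy range profile
        echo = np.real(np.fft.ifft(np.fft.fft(code) * np.fft.fft(profile)))
        # Circular cross-correlation with the code concentrates each range bin
        # (scaled by the code length at each occupied bin).
        corr = np.real(np.fft.ifft(np.fft.fft(echo) * np.conj(np.fft.fft(code))))
        print(np.argsort(corr)[-2:])                              # strongest bins: 40 and 10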

  17. A Code of Ethics and Standards for Outer-Space Commerce

    NASA Astrophysics Data System (ADS)

    Livingston, David M.

    2002-01-01

    Now is the time to put forth an effective code of ethics for businesses in outer space. A successful code would be voluntary and would actually promote the growth of individual companies, not hinder their efforts to provide products and services. A properly designed code of ethics would ensure the development of space commerce unfettered by government-created barriers. Indeed, if the commercial space industry does not develop its own professional code of ethics, government-imposed regulations would probably be instituted. Should this occur, there is a risk that the development of off-Earth commerce would become more restricted. The code presented in this paper seeks to avoid the imposition of new barriers to space commerce as well as make new commercial space ventures easier to develop. The proposed code consists of a preamble, which underscores basic values, followed by a number of specific principles. For the most part, these principles set forth broad commitments to fairness and integrity with respect to employees, consumers, business transactions, political contributions, natural resources, off-Earth development, designated environmental protection zones, as well as relevant national and international laws. As acceptance of this code of ethics grows within the industry, general modifications will be necessary to accommodate the different types of businesses entering space commerce. This uniform applicability will help to assure that the code will not be perceived as foreign in nature, potentially restrictive, or threatening. Companies adopting this code of ethics will find less resistance to their space development plans, not only in the United States but also from nonspacefaring nations. Commercial space companies accepting and refining this code would demonstrate industry leadership and an understanding that will serve future generations living, working, and playing in space. Implementation of the code would also provide an off-Earth precedent for a modified free-market economy. With the code as a backdrop, a colonial or Wild West mentality would become less likely. Off-Earth resources would not be as susceptible to plunder and certain areas could be designated as environmental reserves for the benefit of all. Companies would find it advantageous to balance the goal of wealth maximization with ethical principles if such a strategy enhances the long-term prospects for success.

  18. Hybrid-view programming of nuclear fusion simulation code in the PGAS parallel programming language XcalableMP

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tsugane, Keisuke; Boku, Taisuke; Murai, Hitoshi

    Recently, the Partitioned Global Address Space (PGAS) parallel programming model has emerged as a usable distributed memory programming model. XcalableMP (XMP) is a PGAS parallel programming language that extends base languages such as C and Fortran with directives in OpenMP-like style. XMP supports a global-view model that allows programmers to define global data and to map them to a set of processors, which execute the distributed global data as a single thread. In XMP, the concept of a coarray is also employed for local-view programming. In this study, we port Gyrokinetic Toroidal Code - Princeton (GTC-P), which is a three-dimensional gyrokinetic PIC code developed at Princeton University to study the microturbulence phenomenon in magnetically confined fusion plasmas, to XMP as an example of hybrid memory model coding with the global-view and local-view programming models. In local-view programming, the coarray notation is simple and intuitive compared with Message Passing Interface (MPI) programming while the performance is comparable to that of the MPI version. Thus, because the global-view programming model is suitable for expressing the data parallelism for a field of grid space data, we implement a hybrid-view version using a global-view programming model to compute the field and a local-view programming model to compute the movement of particles. Finally, the performance is degraded by 20% compared with the original MPI version, but the hybrid-view version facilitates more natural data expression for static grid space data (in the global-view model) and dynamic particle data (in the local-view model), and it also increases the readability of the code for higher productivity.

  20. System Learning via Exploratory Data Analysis: Seeing Both the Forest and the Trees

    NASA Astrophysics Data System (ADS)

    Habash Krause, L.

    2014-12-01

    As the amount of observational Earth and Space Science data grows, so does the need for learning and employing data analysis techniques that can extract meaningful information from those data. Space-based and ground-based data sources from all over the world are used to inform Earth and Space environment models. However, with such a large amount of data comes a need to organize those data in a way such that trends within them are easily discernible. This can be tricky due to the interaction between physical processes that leads to partial correlation of variables or multiple interacting sources of causality. With the suite of Exploratory Data Analysis (EDA) data mining codes available at MSFC, we have the capability to analyze large, complex data sets and quantitatively identify fundamentally independent effects from consequential or derived effects. We have used these techniques to examine the accuracy of ionospheric climate models with respect to trends in ionospheric parameters and space weather effects. In particular, these codes have been used to 1) provide summary "at-a-glance" surveys of large data sets through categorization and/or evolution over time to identify trends, distribution shapes, and outliers, 2) discern the underlying "latent" variables which share common sources of causality, and 3) establish a new set of basis vectors by computing Empirical Orthogonal Functions (EOFs), which represent the maximum amount of variance for each principal component. Some of these techniques are easily implemented in the classroom using standard MATLAB functions; some of the more advanced applications require the statistical toolbox, and applications to unique situations require more sophisticated levels of programming. This paper will present an overview of the range of tools available and how they might be used for a variety of time series Earth and Space Science data sets. Examples of feature recognition from both 1D and 2D (e.g., imagery) time series data sets will be presented.
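
    As a small illustration of the EOF step mentioned above, here is a generic Python/NumPy equivalent of what the MATLAB statistical toolbox provides; the data array, its shape, and the function name are assumptions for the sketch.

        import numpy as np

        def eofs(data):
            """data: (n_times, n_locations) array. Returns EOFs, PCs, and variance fractions."""
            anomaly = data - data.mean(axis=0)           # remove the time mean
            u, s, vt = np.linalg.svd(anomaly, full_matrices=False)
            var_frac = s**2 / np.sum(s**2)               # fraction of variance per mode
            return vt, u * s, var_frac                   # spatial EOFs, principal components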

  1. Optimum Boundaries of Signal-to-Noise Ratio for Adaptive Code Modulations

    DTIC Science & Technology

    2017-11-14

    1510–1521, Feb. 2015. [2]. Pursley, M. B. and Royster, T. C., “Adaptive-rate nonbinary LDPC coding for frequency - hop communications ,” IEEE...and this can cause a very narrowband noise near the center frequency during USRP signal acquisition and generation. This can cause a high BER...Final Report APPROVED FOR PUBLIC RELEASE; DISTRIBUTION IS UNLIMITED. AIR FORCE RESEARCH LABORATORY Space Vehicles Directorate 3550 Aberdeen Ave

  2. Distributed Adaptive Binary Quantization for Fast Nearest Neighbor Search.

    PubMed

    Xianglong Liu; Zhujin Li; Cheng Deng; Dacheng Tao

    2017-11-01

    Hashing has proven to be an attractive technique for fast nearest neighbor search over big data. Compared with projection-based hashing methods, prototype-based ones have stronger power to generate discriminative binary codes for data with complex intrinsic structure. However, existing prototype-based methods, such as spherical hashing and K-means hashing, still suffer from ineffective coding that utilizes the complete binary codes in a hypercube. To address this problem, we propose an adaptive binary quantization (ABQ) method that learns a discriminative hash function with prototypes associated with small unique binary codes. Our alternating optimization adaptively discovers the prototype set and the code set of a varying size in an efficient way, which together robustly approximate the data relations. Our method can be naturally generalized to the product space for long hash codes, and enjoys fast training that is linear in the number of training data. We further devise a distributed framework for large-scale learning, which can significantly speed up the training of ABQ in the distributed environments that are now widely deployed in many areas. Extensive experiments on four large-scale (up to 80 million) data sets demonstrate that our method significantly outperforms state-of-the-art hashing methods, with relative performance gains of up to 58.84%.
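
    The following toy Python sketch conveys only the prototype-to-binary-code lookup and Hamming-distance search described above; it omits the actual ABQ alternating optimization that learns the prototypes and codes, and all arrays are random stand-ins.

        import numpy as np

        def encode(points, prototypes, proto_codes):
            """points: (n,d); prototypes: (k,d); proto_codes: (k,b) 0/1 arrays."""
            d2 = ((points[:, None, :] - prototypes[None, :, :]) ** 2).sum(-1)
            return proto_codes[d2.argmin(axis=1)]        # code of the nearest prototype

        def hamming_search(query_code, codes):
            """Indices of database codes sorted by Hamming distance to the query."""
            return np.count_nonzero(codes != query_code, axis=1).argsort()

        rng = np.random.default_rng(5)
        pts = rng.standard_normal((100, 16))
        protos = rng.standard_normal((8, 16))
        codes8 = (rng.random((8, 4)) > 0.5).astype(np.uint8)   # 4-bit code per prototype
        db = encode(pts, protos, codes8)
        print(hamming_search(db[0], db)[:5])                   # nearest items to item 0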

  3. STEMsalabim: A high-performance computing cluster friendly code for scanning transmission electron microscopy image simulations of thin specimens.

    PubMed

    Oelerich, Jan Oliver; Duschek, Lennart; Belz, Jürgen; Beyer, Andreas; Baranovskii, Sergei D; Volz, Kerstin

    2017-06-01

    We present a new multislice code for the computer simulation of scanning transmission electron microscope (STEM) images based on the frozen lattice approximation. Unlike existing software packages, the code is optimized to perform well on highly parallelized computing clusters, combining distributed and shared memory architectures. This enables efficient calculation of large lateral scanning areas of the specimen within the frozen lattice approximation and fine-grained sweeps of parameter space. Copyright © 2017 Elsevier B.V. All rights reserved.

  4. High data rate coding for the space station telemetry links.

    NASA Technical Reports Server (NTRS)

    Lumb, D. R.; Viterbi, A. J.

    1971-01-01

    Coding systems for high data rates were examined from the standpoint of potential application in space-station telemetry links. Approaches considered included convolutional codes with sequential, Viterbi, and cascaded-Viterbi decoding. It was concluded that a high-speed (40 Mbps) sequential decoding system best satisfies the requirements for the assumed growth potential and specified constraints. Trade-off studies leading to this conclusion are reviewed, and some sequential (Fano) algorithm improvements are discussed, together with real-time simulation results.
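
    For context, a minimal Python sketch of a convolutional encoder of the general kind discussed; the rate-1/2, constraint-length-7 generators 171/133 octal used here are a common later choice, taken purely as an illustrative assumption rather than the configuration studied in the paper.

        def conv_encode(bits, g1=0o171, g2=0o133, k=7):
            """Rate-1/2 convolutional encoder: two parity bits per input bit."""
            state = 0
            out = []
            for b in bits:
                state = ((state << 1) | b) & ((1 << k) - 1)   # shift in the new bit
                out.append(bin(state & g1).count("1") % 2)    # parity from generator 1
                out.append(bin(state & g2).count("1") % 2)    # parity from generator 2
            return out

        print(conv_encode([1, 0, 1, 1]))   # 8 coded bits for 4 input bits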

  5. Computing travel time when the exact address is unknown: a comparison of point and polygon ZIP code approximation methods.

    PubMed

    Berke, Ethan M; Shi, Xun

    2009-04-29

    Travel time is an important metric of geographic access to health care. We compared strategies for estimating travel times when only subject ZIP code data were available. Using simulated data from New Hampshire and Arizona, we estimated travel times to the nearest cancer centers by using: 1) geometric centroids of ZIP code polygons as origins, 2) population centroids as origins, 3) service area rings around each cancer center, assigning subjects to rings by assuming they are evenly distributed within their ZIP code, and 4) service area rings around each center, assuming the subjects follow the population distribution within the ZIP code. We used travel times based on street addresses as true values to validate the estimates. Population-based methods have smaller errors than geometry-based methods. Within categories (geometry or population), centroid and service area methods have similar errors. Errors are smaller in urban areas than in rural areas. Population-based methods are superior to geometry-based methods, with the population centroid method appearing to be the best choice for estimating travel time. Estimates in rural areas are less reliable.
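
    A schematic Python comparison of the geometric-centroid and population-centroid strategies, with straight-line time standing in for street-network travel time; all coordinates, populations, and speeds below are invented for illustration.

        import numpy as np

        def centroid(points, weights=None):
            return np.average(points, axis=0, weights=weights)

        zip_homes = np.array([[0.1, 0.2], [0.3, 0.1], [0.9, 0.8]])  # household locations, km
        pop       = np.array([120, 400, 30])                        # block populations
        center    = np.array([2.0, 2.0])                            # cancer center, km

        def travel_time(a, b, speed_km_min=1.0):
            return np.linalg.norm(a - b) / speed_km_min             # straight-line proxy

        geo_est = travel_time(centroid(zip_homes), center)          # geometric centroid
        pop_est = travel_time(centroid(zip_homes, pop), center)     # population centroid
        true_mean = np.mean([travel_time(h, center) for h in zip_homes])
        print(geo_est, pop_est, true_mean)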

  6. KINETIC-J: A computational kernel for solving the linearized Vlasov equation applied to calculations of the kinetic, configuration space plasma current for time harmonic wave electric fields

    NASA Astrophysics Data System (ADS)

    Green, David L.; Berry, Lee A.; Simpson, Adam B.; Younkin, Timothy R.

    2018-04-01

    We present the KINETIC-J code, a computational kernel for evaluating the linearized Vlasov equation with application to calculating the kinetic plasma response (current) to an applied time harmonic wave electric field. This code addresses the need for a configuration space evaluation of the plasma current to enable kinetic full-wave solvers for waves in hot plasmas to move beyond the limitations of the traditional Fourier spectral methods. We benchmark the kernel via comparison with the standard k-space forms of the hot plasma conductivity tensor.

  7. Space Network Time Distribution and Synchronization Protocol Development for Mars Proximity Link

    NASA Technical Reports Server (NTRS)

    Woo, Simon S.; Gao, Jay L.; Mills, David

    2010-01-01

    Time distribution and synchronization in a deep space network are challenging due to long propagation delays, spacecraft movements, and relativistic effects. Further, the Network Time Protocol (NTP) designed for terrestrial networks may not work properly in space. In this work, we consider a time distribution protocol based on time message exchanges similar to NTP. We present the Proximity-1 Space Link Interleaved Time Synchronization (PITS) algorithm that can work with the CCSDS Proximity-1 Space Data Link Protocol. The PITS algorithm provides faster time synchronization via two-way time transfer over proximity links, improves scalability as the number of spacecraft increases, lowers the storage requirement for collecting time samples, and is robust against the packet loss and duplication that the underlying protocol mechanisms may introduce.
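
    The two-way time-transfer arithmetic underlying NTP-style exchanges such as the one described can be sketched in a few lines of Python; the timestamp values are invented, and this is not the PITS implementation.

        def offset_and_delay(t1, t2, t3, t4):
            """t1: request sent, t2: request received, t3: reply sent, t4: reply received."""
            offset = ((t2 - t1) + (t3 - t4)) / 2.0   # remote clock minus local clock
            delay  = (t4 - t1) - (t3 - t2)           # round-trip propagation delay
            return offset, delay

        # Toy exchange: the remote clock is ahead and the one-way delay is 0.4 s.
        print(offset_and_delay(t1=100.0, t2=350.2, t3=350.4, t4=101.0))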

  8. Plume particle collection and sizing from static firing of solid rocket motors

    NASA Technical Reports Server (NTRS)

    Sambamurthi, Jay K.

    1995-01-01

    A unique dart system has been designed and built at the NASA Marshall Space Flight Center to collect aluminum oxide plume particles from the plumes of large scale solid rocket motors, such as the space shuttle RSRM. The capability of this system to collect clean samples from both the vertically fired MNASA (18.3% scaled version of the RSRM) motors and the horizontally fired RSRM motor has been demonstrated. The particle mass averaged diameters, d43, measured from the samples for the different motors ranged from 8 to 11 μm and were independent of the dart collection surface and the motor burn time. The measured results agreed well with those calculated using the industry standard Hermsen's correlation, within the standard deviation of the correlation. For each of the samples analyzed from both MNASA and RSRM motors, the distribution of the cumulative mass fraction of the plume oxide particles as a function of the particle diameter was best described by a monomodal log-normal distribution with a standard deviation of 0.13-0.15. This distribution agreed well with the theoretical prediction by Salita using the OD3P code for the RSRM motor at the nozzle exit plane.
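
    A hedged Python sketch of evaluating such a monomodal log-normal cumulative mass fraction; the mass median diameter used here is an assumption chosen near the reported d43 range, not a value from the paper.

        import math

        def cum_mass_fraction(d, d50=9.0, sigma=0.14):
            """Log-normal CDF in log10 space; d and d50 in micrometers (illustrative)."""
            z = (math.log10(d) - math.log10(d50)) / sigma
            return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

        print(cum_mass_fraction(8.0), cum_mass_fraction(11.0))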

  9. Unified tensor model for space-frequency spreading-multiplexing (SFSM) MIMO communication systems

    NASA Astrophysics Data System (ADS)

    de Almeida, André LF; Favier, Gérard

    2013-12-01

    This paper presents a unified tensor model for space-frequency spreading-multiplexing (SFSM) multiple-input multiple-output (MIMO) wireless communication systems that combine space- and frequency-domain spreadings, followed by a space-frequency multiplexing. Spreading across space (transmit antennas) and frequency (subcarriers) adds resilience against deep channel fades and provides space and frequency diversities, while orthogonal space-frequency multiplexing enables multi-stream transmission. We adopt a tensor-based formulation for the proposed SFSM MIMO system that incorporates space, frequency, time, and code dimensions by means of the parallel factor model. The developed SFSM tensor model unifies the tensorial formulation of some existing multiple-access/multicarrier MIMO signaling schemes as special cases, while revealing interesting tradeoffs due to combined space, frequency, and time diversities which are of practical relevance for joint symbol-channel-code estimation. The performance of the proposed SFSM MIMO system using either a zero forcing receiver or a semi-blind tensor-based receiver is illustrated by means of computer simulation results under realistic channel and system parameters.
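
    The parallel factor (PARAFAC) structure that the formulation builds on can be sketched in a few lines of Python; the dimensions and factor matrices below are random stand-ins, not the paper's signal model.

        import numpy as np

        rng = np.random.default_rng(1)
        I, J, K, R = 4, 8, 16, 2                  # antennas, subcarriers, symbols, streams
        A = rng.standard_normal((I, R))           # spatial (antenna) signatures
        B = rng.standard_normal((J, R))           # frequency-spreading signatures
        C = rng.standard_normal((K, R))           # symbol/code signatures

        # Noiseless PARAFAC tensor: a sum of R rank-one space-frequency-time terms.
        X = np.einsum("ir,jr,kr->ijk", A, B, C)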

  10. Numerical relativity for D dimensional axially symmetric space-times: Formalism and code tests

    NASA Astrophysics Data System (ADS)

    Zilhão, Miguel; Witek, Helvi; Sperhake, Ulrich; Cardoso, Vitor; Gualtieri, Leonardo; Herdeiro, Carlos; Nerozzi, Andrea

    2010-04-01

    The numerical evolution of Einstein’s field equations in a generic background has the potential to answer a variety of important questions in physics: from applications to the gauge-gravity duality, to modeling black hole production in TeV gravity scenarios, to analysis of the stability of exact solutions, and to tests of cosmic censorship. In order to investigate these questions, we extend numerical relativity to more general space-times than those investigated hitherto, by developing a framework to study the numerical evolution of D dimensional vacuum space-times with an SO(D-2) isometry group for D≥5, or SO(D-3) for D≥6. Performing a dimensional reduction on a (D-4) sphere, the D dimensional vacuum Einstein equations are rewritten as a 3+1 dimensional system with source terms, and presented in the Baumgarte, Shapiro, Shibata, and Nakamura formulation. This allows the use of existing 3+1 dimensional numerical codes with small adaptations. Brill-Lindquist initial data are constructed in D dimensions and a procedure to match them to our 3+1 dimensional evolution equations is given. We have implemented our framework by adapting the Lean code and perform a variety of simulations of nonspinning black hole space-times. Specifically, we present a modified moving puncture gauge, which facilitates long-term stable simulations in D=5. We further demonstrate the internal consistency of the code by studying convergence and comparing numerical versus analytic results in the case of geodesic slicing for D=5, 6.

  11. Efficacy analysis of LDPC coded APSK modulated differential space-time-frequency coded for wireless body area network using MB-pulsed OFDM UWB technology.

    PubMed

    Manimegalai, C T; Gauni, Sabitha; Kalimuthu, K

    2017-12-04

    Wireless body area network (WBAN) is a breakthrough technology in healthcare areas such as hospital and telemedicine. The human body has a complex mixture of different tissues, and the nature of electromagnetic signal propagation is expected to be distinct in each of these tissues. This forms the basis for the WBAN, which differs from other environments. In this paper, knowledge of the Ultra Wide Band (UWB) channel is explored in the WBAN (IEEE 802.15.6) system. Measurements of parameters are taken in the frequency range from 3.1 to 10.6 GHz. The proposed system transmits data at up to 480 Mbps by using LDPC-coded APSK-modulated differential space-time-frequency coded MB-OFDM to increase throughput and power efficiency.

  12. Magnetospheric space plasma investigations

    NASA Technical Reports Server (NTRS)

    Comfort, Richard H.; Horwitz, James L.

    1994-01-01

    A time dependent semi-kinetic model that includes self collisions and ion-neutral collisions and chemistry was developed. Light ion outflow in the polar cap transition region was modeled and compared with data results. A model study of wave heating of O+ ions in the topside transition region was carried out using a code which does local calculations that include ion-neutral and Coulomb self collisions as well as production and loss of O+. Another project is a statistical study of hydrogen spin curve characteristics in the polar cap. A statistical study of the latitudinal distribution of core plasmas along the L=4.6 field line using DE-1/RIMS data was completed. A short paper on dual spacecraft estimates of ion temperature profiles and heat flows in the plasmasphere ionosphere system was prepared. An automated processing code was used to process RIMS data from 1981 to 1984.

  13. A new Bayesian Earthquake Analysis Tool (BEAT)

    NASA Astrophysics Data System (ADS)

    Vasyura-Bathke, Hannes; Dutta, Rishabh; Jónsson, Sigurjón; Mai, Martin

    2017-04-01

    Modern earthquake source estimation studies increasingly use non-linear optimization strategies to estimate kinematic rupture parameters, often considering geodetic and seismic data jointly. However, the optimization process is complex and consists of several steps that need to be followed in the earthquake parameter estimation procedure. These include pre-describing or modeling the fault geometry, calculating the Green's functions (often assuming a layered elastic half-space), and estimating the distributed final slip and possibly other kinematic source parameters. Recently, Bayesian inference has become popular for estimating posterior distributions of earthquake source model parameters given measured/estimated/assumed data and model uncertainties. For instance, some research groups consider uncertainties of the layered medium and propagate these to the source parameter uncertainties. Other groups make use of informative priors to reduce the model parameter space. In addition, innovative sampling algorithms have been developed that efficiently explore the often high-dimensional parameter spaces. Compared to earlier studies, these improvements have resulted in overall more robust source model parameter estimates that include uncertainties. However, the computational demands of these methods are high and estimation codes are rarely distributed along with the published results. Even if codes are made available, it is often difficult to assemble them into a single optimization framework, as they are typically coded in different programming languages. Therefore, further progress and future applications of these methods/codes are hampered, while reproducibility and validation of results have become essentially impossible. In the spirit of providing open-access and modular codes to facilitate progress and reproducible research in earthquake source estimations, we undertook the effort of producing BEAT, a python package that comprises all the above-mentioned features in one single programming environment. The package is built on top of the pyrocko seismological toolbox (www.pyrocko.org) and makes use of the pymc3 module for Bayesian statistical model fitting. BEAT is an open-source package (https://github.com/hvasbath/beat) and we encourage and solicit contributions to the project. In this contribution, we present our strategy for developing BEAT, show application examples, and discuss future developments.
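
    As a generic illustration of the Bayesian sampling step described above (and emphatically not BEAT's implementation, which relies on pymc3 and more capable samplers), here is a minimal Metropolis loop over a toy two-parameter posterior; the data, priors, and step size are all invented.

        import numpy as np

        rng = np.random.default_rng(3)
        data = rng.normal(2.0, 0.5, size=50)          # synthetic observations

        def log_post(theta):
            """Log posterior for (mu, sigma) with a flat prior on 0 < sigma < 10."""
            mu, sigma = theta
            if not (0.0 < sigma < 10.0):
                return -np.inf
            return -0.5 * np.sum(((data - mu) / sigma) ** 2) - len(data) * np.log(sigma)

        theta = np.array([0.0, 1.0])
        chain = []
        for _ in range(5000):
            prop = theta + rng.normal(0, 0.1, size=2)                 # random-walk proposal
            if np.log(rng.uniform()) < log_post(prop) - log_post(theta):
                theta = prop                                          # accept
            chain.append(theta)
        post = np.array(chain)[1000:]                                 # discard burn-in
        print(post.mean(axis=0))                                      # posterior means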

  14. Kranc: a Mathematica package to generate numerical codes for tensorial evolution equations

    NASA Astrophysics Data System (ADS)

    Husa, Sascha; Hinder, Ian; Lechner, Christiane

    2006-06-01

    We present a suite of Mathematica-based computer-algebra packages, termed "Kranc", which comprise a toolbox to convert certain (tensorial) systems of partial differential evolution equations to parallelized C or Fortran code for solving initial boundary value problems. Kranc can be used as a "rapid prototyping" system for physicists or mathematicians handling very complicated systems of partial differential equations, but through integration into the Cactus computational toolkit we can also produce efficient parallelized production codes. Our work is motivated by the field of numerical relativity, where Kranc is used as a research tool by the authors. In this paper we describe the design and implementation of both the Mathematica packages and the resulting code, we discuss some example applications, and provide results on the performance of an example numerical code for the Einstein equations.
    Program summary
    Title of program: Kranc
    Catalogue identifier: ADXS_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADXS_v1_0
    Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
    Distribution format: tar.gz
    Computers for which the program is designed and others on which it has been tested: general computers which run Mathematica (for code generation) and Cactus (for numerical simulations); tested under Linux
    Programming language used: Mathematica, C, Fortran 90
    Memory required to execute with typical data: depends on the number of variables and grid size; the included ADM example requires 4308 KB
    Has the code been vectorized or parallelized: the code is parallelized based on the Cactus framework
    Number of bytes in distributed program, including test data, etc.: 1 578 142
    Number of lines in distributed program, including test data, etc.: 11 711
    Nature of physical problem: solution of partial differential equations in three space dimensions, which are formulated as an initial value problem. In particular, the program is geared towards handling very complex tensorial equations as they appear, e.g., in numerical relativity. The worked out examples comprise the Klein-Gordon equations, the Maxwell equations, and the ADM formulation of the Einstein equations.
    Method of solution: finite differencing and method-of-lines time integration; the numerical code is generated through a high-level Mathematica interface.
    Restrictions on the complexity of the program: typical numerical relativity applications will contain up to several dozen evolution variables and thousands of source terms; Cactus applications have shown scaling up to several thousand processors and grid sizes exceeding 500³.
    Typical running time: depends on the number of variables and the grid size; the included ADM example takes approximately 100 seconds on a 1600 MHz Intel Pentium M processor.
    Unusual features of the program: based on Mathematica and Cactus.

  15. Development of Graphical User Interface for ARRBOD (Acute Radiation Risk and BRYNTRN Organ Dose Projection)

    NASA Technical Reports Server (NTRS)

    Kim, Myung-Hee; Hu, Shaowen; Nounu, Hatem N.; Cucinotta, Francis A.

    2010-01-01

    The space radiation environment, particularly solar particle events (SPEs), poses the risk of acute radiation sickness (ARS) to humans; and organ doses from SPE exposure may reach critical levels during extra vehicular activities (EVAs) or within lightly shielded spacecraft. NASA has developed an organ dose projection model using the BRYNTRN with SUMDOSE computer codes, and a probabilistic model of Acute Radiation Risk (ARR). The codes BRYNTRN and SUMDOSE, written in FORTRAN, are a baryon transport code and an output data processing code, respectively. The ARR code is written in C. The risk projection models of organ doses and ARR take the output from BRYNTRN as an input to their calculations. BRYNTRN code operation requires extensive input preparation. With a graphical user interface (GUI) to handle input and output for BRYNTRN, the response models can be connected easily and correctly to BRYNTRN in a user-friendly way. A GUI for the Acute Radiation Risk and BRYNTRN Organ Dose (ARRBOD) projection code provides seamless integration of the input and output manipulations required for operations of the ARRBOD modules: BRYNTRN, SUMDOSE, and the ARR probabilistic response model. The ARRBOD GUI is intended for mission planners, radiation shield designers, space operations in the mission operations directorate (MOD), and space biophysics researchers. The ARRBOD GUI will serve as a proof-of-concept example for future integration of other human space applications risk projection models. The current version of the ARRBOD GUI is a new self-contained product and will have follow-on versions as options are added: 1) human geometries of MAX/FAX in addition to CAM/CAF; 2) shielding distributions for spacecraft, Mars surface and atmosphere; 3) various space environmental and biophysical models; and 4) other response models to be connected to the BRYNTRN. The major components of the overall system, the subsystem interconnections, and external interfaces are described in this report; and the ARRBOD GUI product is explained step by step in order to serve as a tutorial.

  16. Aero-thermo-dynamic analysis of the Spaceliner-7.1 vehicle in high altitude flight

    NASA Astrophysics Data System (ADS)

    Zuppardi, Gennaro; Morsa, Luigi; Sippel, Martin; Schwanekamp, Tobias

    2014-12-01

    SpaceLiner, designed by DLR, is a visionary, extremely fast passenger transportation concept. It consists of two stages: a winged booster and a passenger vehicle. After separation of the two stages, the booster makes a controlled re-entry and returns to the launch site. According to the current project, version 7-1 of SpaceLiner (SpaceLiner-7.1), the vehicle should be brought to an altitude of 75 km and then released, undertaking the descent path. In the perspective that the SpaceLiner-7.1 vehicle could be brought to altitudes higher than 75 km (e.g., 100 km or above), and also for speculative purposes, in this paper the aerodynamic parameters of the SpaceLiner-7.1 vehicle are calculated in the whole transition regime, from continuum low-density to free molecular flows. Computer simulations have been carried out with three codes: two DSMC codes, DS3V in the altitude interval 100-250 km for the evaluation of the global aerodynamic coefficients and DS2V at the altitude of 60 km for the evaluation of the heat flux and pressure distributions along the vehicle nose, and the DLR HOTSOSE code for the evaluation of the global aerodynamic coefficients in continuum, hypersonic flow at the altitude of 44.6 km. The effectiveness of the flaps with a deflection angle of -35 deg. was evaluated in the above-mentioned altitude interval. The vehicle showed longitudinal stability in the whole altitude interval even with no flap. The global bridging formulae proved suitable for the evaluation of the aerodynamic coefficients in the altitude interval 80-100 km, where the computations cannot be fulfilled either by CFD, because of the failure of the classical equations computing the transport coefficients, or by DSMC, because of the requirement of very high computer resources, both in terms of core storage (a high number of simulated molecules is needed) and of very long processing time.

  17. Using Kokkos for Performant Cross-Platform Acceleration of Liquid Rocket Simulations

    DTIC Science & Technology

    2017-05-08

    Briefing charts, 05 April 2017 - 08 May 2017. Using Kokkos for Performant Cross-Platform Acceleration of Liquid Rocket Simulations. ERC Incorporated, RQRC, AFRL-West. Includes a SPACE simulation of a rotating detonation engine (courtesy of Dr. Christopher Lietz). DISTRIBUTION A: Approved for public release.

  18. Impact of velocity space distribution on hybrid kinetic-magnetohydrodynamic simulation of the (1,1) mode

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Charlson C.

    2008-07-15

    Numeric studies of the impact of the velocity space distribution on the stabilization of the (1,1) internal kink mode and excitation of the fishbone mode are performed with a hybrid kinetic-magnetohydrodynamic model. These simulations demonstrate an extension of the physics capabilities of NIMROD [C. R. Sovinec et al., J. Comput. Phys. 195, 355 (2004)], a three-dimensional extended magnetohydrodynamic (MHD) code, to include the kinetic effects of an energetic minority ion species. Kinetic effects are captured by a modification of the usual MHD momentum equation to include a pressure tensor calculated from the δf particle-in-cell method [S. E. Parker and W. W. Lee, Phys. Fluids B 5, 77 (1993)]. The particles are advanced in the self-consistent NIMROD fields. We outline the implementation and present simulation results of energetic minority ion stabilization of the (1,1) internal kink mode and excitation of the fishbone mode. A benchmark of the linear growth rate and real frequency is shown to agree well with another code. The impact of the details of the velocity space distribution is examined, particularly extending the velocity space cutoff of the simulation particles. Modestly increasing the cutoff strongly impacts the (1,1) mode. Numeric experiments are performed to study the impact of passing versus trapped particles. Observations of these numeric experiments suggest that assumptions of energetic particle effects should be re-examined.

  19. An NTP Stratum-One Server Farm Fed By IEEE-1588

    DTIC Science & Technology

    2010-01-01

    Serial Time Code Formats,” U.S. Army White Sands Missile Range, N.M. [11] J. Eidson , 2005, “IEEE-1588 Standard for a Precision Clock Synchronization ... synchronized to its Master Clocks via IRIG-B time code on a low- frequency RF distribution system. The availability of Precise Time Protocol (PTP, IEEE...forwarding back to the requestor. The farm NTP servers are synchronized to the USNO Master Clocks using IRIG-B time code. The current standard NTP

  20. HAlign-II: efficient ultra-large multiple sequence alignment and phylogenetic tree reconstruction with distributed and parallel computing.

    PubMed

    Wan, Shixiang; Zou, Quan

    2017-01-01

    Multiple sequence alignment (MSA) plays a key role in biological sequence analyses, especially in phylogenetic tree construction. The extreme increase in next-generation sequencing has resulted in a shortage of efficient ultra-large biological sequence alignment approaches for coping with different sequence types. Distributed and parallel computing represents a crucial technique for accelerating ultra-large (e.g., files of more than 1 GB) sequence analyses. Based on HAlign and the Spark distributed computing system, we implement a highly cost-efficient and time-efficient HAlign-II tool to address ultra-large multiple biological sequence alignment and phylogenetic tree construction. Experiments on large-scale DNA and protein data sets, with files of more than 1 GB, showed that HAlign-II saves both time and space and outperforms current software tools. HAlign-II can efficiently carry out MSA and construct phylogenetic trees with ultra-large numbers of biological sequences, shows extremely high memory efficiency, and scales well with increases in computing resources. HAlign-II provides a user-friendly web server based on our distributed computing infrastructure, and its open-source code and datasets are available at http://lab.malab.cn/soft/halign.

  1. Low altitude, one centimeter, space debris search at Lincoln Laboratory's (M.I.T.) experimental test system

    NASA Technical Reports Server (NTRS)

    Taff, L. G.; Beatty, D. E.; Yakutis, A. J.; Randall, P. M. S.

    1985-01-01

    The majority of work performed by the Lincoln Laboratory's Space Surveillance Group, at the request of NASA, to define the near-earth population of man-made debris is summarized. Electrooptical devices, each with a 1.2 deg FOV, were employed at the GEODSS facility in New Mexico. Details of the equipment calibration and alignment procedures are discussed, together with implementation of a synchronized time code for computer controlled videotaping of the imagery. Parallax and angular speed data served as bases for distinguishing between man-made debris and meteoroids. The best visibility was obtained in dawn and dusk twilight conditions at elevation ranges of 300-2000 km. Tables are provided of altitudinal density distribution of debris. It is noted that the program also yielded an extensive data base on meteoroid rates.

  2. Spot: A Programming Language for Verified Flight Software

    NASA Technical Reports Server (NTRS)

    Bocchino, Robert L., Jr.; Gamble, Edward; Gostelow, Kim P.; Some, Raphael R.

    2014-01-01

    The C programming language is widely used for programming space flight software and other safety-critical real time systems. C, however, is far from ideal for this purpose: as is well known, it is both low-level and unsafe. This paper describes Spot, a language derived from C for programming space flight systems. Spot aims to maintain compatibility with existing C code while improving the language and supporting verification with the SPIN model checker. The major features of Spot include actor-based concurrency, distributed state with message passing and transactional updates, and annotations for testing and verification. Spot also supports domain-specific annotations for managing spacecraft state, e.g., communicating telemetry information to the ground. We describe the motivation and design rationale for Spot, give an overview of the design, provide examples of Spot's capabilities, and discuss the current status of the implementation.

  3. Preliminary 2-D shell analysis of the space shuttle solid rocket boosters

    NASA Technical Reports Server (NTRS)

    Knight, Norman F., Jr.; Gillian, Ronnie E.; Nemeth, Michael P.

    1987-01-01

    A two-dimensional shell model of an entire solid rocket booster (SRB) has been developed using the STAGSC-1 computer code and executed on the Ames CRAY computer. The purpose of these analyses is to calculate the overall deflection and stress distributions for the SRB when subjected to mechanical loads corresponding to critical times during the launch sequence. The mechanical loading conditions for the full SRB arise from the external tank (ET) attachment points, the solid rocket motor (SRM) pressure load, and the SRB hold down posts. The ET strut loads vary with time after the Space Shuttle main engine (SSME) ignition. The SRM internal pressure varies axially by approximately 100 psi. Static analyses of the full SRB are performed using a snapshot picture of the loads. The field and factory joints are modeled by using equivalent stiffness joints instead of detailed models of the joint. As such, local joint behavior cannot be obtained from this global model.

  4. A Simulation Testbed for Adaptive Modulation and Coding in Airborne Telemetry (Brief)

    DTIC Science & Technology

    2014-10-01

    Briefing charts on a MATLAB GUI-based simulation testbed for adaptive modulation and coding in airborne telemetry. Recoverable details include SOQPSK modulation, OFDM parameters in the style of IEEE 802.11a (modulations BPSK, QPSK, 16 QAM, and 64 QAM; configurable cyclic prefix lengths and number of subcarriers), and LDPC coding at rates 1/2, 2/3, 3/4, and 4/5. DISTRIBUTION STATEMENT A: Approved for public release; distribution unlimited.

  5. WARP

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bergmann, Ryan M.; Rowland, Kelly L.

    2017-04-12

    WARP, which can stand for "Weaving All the Random Particles," is a three-dimensional (3D) continuous energy Monte Carlo neutron transport code developed at UC Berkeley to efficiently execute on NVIDIA graphics processing unit (GPU) platforms. WARP accelerates Monte Carlo simulations while preserving the benefits of using the Monte Carlo method, namely, that very few physical and geometrical simplifications are applied. WARP is able to calculate multiplication factors, neutron flux distributions (in both space and energy), and fission source distributions for time-independent neutron transport problems. It can run in both criticality or fixed source modes, but fixed source mode is currently not robust, optimized, or maintained in the newest version. WARP can transport neutrons in unrestricted arrangements of parallelepipeds, hexagonal prisms, cylinders, and spheres. The goal of developing WARP is to investigate algorithms that can grow into a full-featured, continuous energy, Monte Carlo neutron transport code that is accelerated by running on GPUs. The crux of the effort is to make Monte Carlo calculations faster while producing accurate results. Modern supercomputers are commonly being built with GPU coprocessor cards in their nodes to increase their computational efficiency and performance. GPUs execute efficiently on data-parallel problems, but most CPU codes, including those for Monte Carlo neutral particle transport, are predominantly task-parallel. WARP uses a data-parallel neutron transport algorithm to take advantage of the computing power GPUs offer.

  6. Context-aware and locality-constrained coding for image categorization.

    PubMed

    Xiao, Wenhua; Wang, Bin; Liu, Yu; Bao, Weidong; Zhang, Maojun

    2014-01-01

    Improving the coding strategy for BOF (Bag-of-Features) based feature design has drawn increasing attention in recent image categorization works. However, ambiguity in the coding procedure still impedes further development. In this paper, we introduce a context-aware and locality-constrained coding (CALC) approach that uses context information for describing objects in a discriminative way. This is achieved by learning a word-to-word co-occurrence prior and imposing context information over locality-constrained coding. Firstly, the local context of each category is evaluated by learning a word-to-word co-occurrence matrix representing the spatial distribution of local features in a neighbor region. Then, the learned co-occurrence matrix is used for measuring the context distance between local features and code words. Finally, a coding strategy that simultaneously considers locality in feature space and context space, while weighting each feature, is proposed. This novel coding strategy not only semantically preserves information in coding, but also has the ability to alleviate the noise distortion of each class. Extensive experiments on several available datasets (Scene-15, Caltech101, and Caltech256) are conducted to validate the superiority of our algorithm by comparing it with baselines and recently published methods. Experimental results show that our method significantly improves the performance of baselines and achieves comparable and even better performance than the state of the art.
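
    A loose Python sketch of the assignment step described above, combining a feature-space distance with a co-occurrence-derived context distance; all matrices and the weighting are random stand-ins rather than the CALC training procedure.

        import numpy as np

        rng = np.random.default_rng(4)
        features = rng.standard_normal((10, 64))      # local descriptors
        codebook = rng.standard_normal((32, 64))      # visual words
        cooc = rng.random((32, 32))                   # learned co-occurrence stand-in
        cooc /= cooc.sum(1, keepdims=True)            # row-normalize to probabilities

        def assign(f, context_word, alpha=0.5):
            d_feat = ((codebook - f) ** 2).sum(1)             # locality in feature space
            d_ctx = -np.log(cooc[context_word] + 1e-9)        # context distance
            return np.argmin(d_feat + alpha * d_ctx)          # combined assignment

        print(assign(features[0], context_word=3))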

  7. Numerical Propulsion System Simulation: A Common Tool for Aerospace Propulsion Being Developed

    NASA Technical Reports Server (NTRS)

    Follen, Gregory J.; Naiman, Cynthia G.

    2001-01-01

    The NASA Glenn Research Center is developing an advanced multidisciplinary analysis environment for aerospace propulsion systems called the Numerical Propulsion System Simulation (NPSS). This simulation is initially being used to support aeropropulsion in the analysis and design of aircraft engines. NPSS provides increased flexibility for the user, which reduces the total development time and cost. It is currently being extended to support the Aviation Safety Program and Advanced Space Transportation. NPSS focuses on the integration of multiple disciplines such as aerodynamics, structure, and heat transfer with numerical zooming on component codes. Zooming is the coupling of analyses at various levels of detail. NPSS development includes using the Common Object Request Broker Architecture (CORBA) in the NPSS Developer's Kit to facilitate collaborative engineering. The NPSS Developer's Kit will provide the tools to develop custom components and to use the CORBA capability for zooming to higher fidelity codes, coupling to multidiscipline codes, transmitting secure data, and distributing simulations across different platforms. These powerful capabilities will extend NPSS from a zero-dimensional simulation tool to a multifidelity, multidiscipline system-level simulation tool for the full life cycle of an engine.

  8. Information trade-offs for optical quantum communication.

    PubMed

    Wilde, Mark M; Hayden, Patrick; Guha, Saikat

    2012-04-06

    Recent work has precisely characterized the achievable trade-offs between three key information processing tasks: classical communication (generation or consumption), quantum communication (generation or consumption), and shared entanglement (distribution or consumption), measured in bits, qubits, and ebits per channel use, respectively. Slices and corner points of this three-dimensional region reduce to well-known protocols for quantum channels. A trade-off coding technique can attain any point in the region and can outperform time sharing between the best-known protocols for accomplishing each information processing task by itself. Previously, the benefits of trade-off coding that had been found were too small to be of practical value (viz., for the dephasing and the universal cloning machine channels). In this Letter, we demonstrate that the associated performance gains are in fact remarkably high for several physically relevant bosonic channels that model free-space or fiber-optic links, thermal-noise channels, and amplifiers. We show that significant performance gains from trade-off coding also apply when trading photon-number resources between transmitting public and private classical information simultaneously over secret-key-assisted bosonic channels. © 2012 American Physical Society

  9. Efficient File Sharing by Multicast - P2P Protocol Using Network Coding and Rank Based Peer Selection

    NASA Technical Reports Server (NTRS)

    Stoenescu, Tudor M.; Woo, Simon S.

    2009-01-01

    In this work, we consider information dissemination and sharing in a distributed peer-to-peer (P2P) highly dynamic communication network. In particular, we explore a network coding technique for transmission and a rank-based peer selection method for network formation. The combined approach has been shown to improve information sharing and delivery to all users when considering the challenges imposed by space network environments.
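
    A toy Python sketch of the network coding ingredient over GF(2): peers forward random XOR combinations of source blocks, and a receiver decodes by Gaussian elimination once it holds enough independent combinations. The block sizes, field choice, and retry loop are illustrative assumptions, not the paper's protocol.

        import numpy as np

        def gf2_solve(coeffs, coded):
            """Gaussian elimination over GF(2) on [coeffs | coded]; None if singular."""
            aug = np.concatenate([coeffs, coded], axis=1) % 2
            n = coeffs.shape[0]
            for col in range(n):
                rows = [r for r in range(col, n) if aug[r, col]]
                if not rows:
                    return None                      # combinations not independent
                aug[[col, rows[0]]] = aug[[rows[0], col]]
                for r in range(n):
                    if r != col and aug[r, col]:
                        aug[r] ^= aug[col]           # row reduction by XOR
            return aug[:, n:]

        rng = np.random.default_rng(0)
        blocks = rng.integers(0, 2, size=(4, 8), dtype=np.uint8)      # 4 source blocks

        decoded = None
        while decoded is None:                                        # retry if singular
            coeffs = rng.integers(0, 2, size=(4, 4), dtype=np.uint8)  # random coding vectors
            decoded = gf2_solve(coeffs, coeffs @ blocks % 2)
        assert (decoded == blocks).all()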

  10. The CCSDS Next Generation Space Data Link Protocol (NGSLP)

    NASA Technical Reports Server (NTRS)

    Kazz, Greg J.; Greenberg, Edward

    2014-01-01

    The CCSDS space link protocols, i.e., Telemetry (TM), Telecommand (TC), and Advanced Orbiting Systems (AOS), were developed in the early growth period of the space program. They were designed to meet the needs of the early missions, be compatible with the available technology, and focus on the specific link environments. Digital technology was in its infancy and spacecraft power and mass issues enforced severe constraints on flight implementations. Therefore the Telecommand protocol was designed around a simple Bose, Hocquenghem, Chaudhuri (BCH) code that provided little coding gain and limited error detection but was relatively simple to decode on board. The infusion of the concatenated Convolutional and Reed-Solomon codes for telemetry was a major milestone and transformed telemetry applications by providing the ability to more efficiently utilize the telemetry link and deliver user data. The ability to significantly lower the error rates on telemetry links enabled the use of packet telemetry and data compression. The infusion of high performance codes for telemetry was enabled by the advent of digital processing, but it was limited to earth based systems supporting telemetry. The latest CCSDS space link protocol, Proximity-1, was developed in early 2000 to meet the needs of short-range, bi-directional, fixed or mobile radio links characterized by short time delays, moderate but not weak signals, and short independent sessions. Proximity-1 has been successfully deployed on both NASA and ESA missions at Mars and is planned to be utilized by all Mars missions in development. A new age has arisen, one that now provides the means to perform advanced digital processing in spacecraft systems, enabling the use of improved transponders, digital correlators, and high performance forward error correcting codes for all communications links. Flight transponders utilizing digital technology have emerged and can efficiently provide the means to make the next leap in performance for space link communications. Field Programmable Gate Arrays (FPGAs) provide the capability to incorporate high performance forward error correcting codes implemented within software transponders, providing improved performance in data transfer, ranging, link security, and time correlation. Given these synergistic technological breakthroughs, the time has come to take advantage of them by applying them to both ongoing (e.g., command, telemetry) and emerging (e.g., space link security, optical communication) space link applications. However, one of the constraining factors within the Data Link Layer in realizing these performance gains is the lack of a generic transfer frame format and common supporting services amongst the existing CCSDS link layer protocols. Currently each of the four CCSDS link layer protocols (TM, TC, AOS, and Proximity-1) has unique formats and services, which prohibits their reuse across the totality of all space link applications of CCSDS member space agencies. For example, Mars missions implement their proximity data link layer using the Proximity-1 frame format and the services it supports, but are still required to support the direct-from-Earth (TC) and direct-to-Earth (AOS/TM) protocols.
The prime purpose of this paper is to describe a new general purpose CCSDS Data Link layer protocol, the NGSLP, that will provide the required services along with a common transfer frame format for all the CCSDS space links (ground to/from space and space-to-space links) targeted for emerging missions after a CCSDS agency-wide coordinated date. This paper will also describe related options that can be included in the Coding and Synchronization sub-layer of the Data Link layer to extend the capacities of the link and additionally provide independence of the transfer frame sub-layer from the coding sub-layer. This feature will give missions the option of running either the currently performed synchronous coding and transfer frame data link or an asynchronous coding/frame data link, in which the transfer frame length is independent of the block size of the code. Eliminating this constraint (the frame being synchronized to the code block) will simplify the interface between the transponder and the data handling equipment and reduce implementation costs and complexities. The benefits include: inclusion of encoders/decoders in transmitters and receivers without regard to data link protocols, and the ability to insert latency-sensitive messages into the link to support launch, landing/docking, telerobotics, and Variable Coded Modulation (VCM). In addition, the ability to transfer different sized frames can provide a backup for delivering stored anomaly engineering data simultaneously with real-time data, or relaying of frames from various sources onto a trunk line for delivery to Earth.

  11. The mathematical theory of signal processing and compression-designs

    NASA Astrophysics Data System (ADS)

    Feria, Erlan H.

    2006-05-01

    The mathematical theory of signal processing, named processor coding, will be shown to arise inherently as the computational time dual of Shannon's mathematical theory of communication, which is also known as source coding. Source coding is concerned with signal source memory space compression, while processor coding deals with signal processor computational time compression. Their combination is named compression-designs, referred to as Conde for short. A compelling and pedagogically appealing diagram will be discussed highlighting Conde's remarkably successful application to real-world knowledge-aided (KA) airborne moving target indicator (AMTI) radar.

  12. Information-Theoretic Uncertainty of SCFG-Modeled Folding Space of The Non-coding RNA

    PubMed Central

    Manzourolajdad, Amirhossein; Wang, Yingfeng; Shaw, Timothy I.; Malmberg, Russell L.

    2012-01-01

    RNA secondary structure ensembles define probability distributions for alternative equilibrium secondary structures of an RNA sequence. Shannon’s Entropy is a measure for the amount of diversity present in any ensemble. In this work, Shannon’s entropy of the SCFG ensemble on an RNA sequence is derived and implemented in polynomial time for both structurally ambiguous and unambiguous grammars. Micro RNA sequences generally have low folding entropy, as previously discovered. Surprisingly, signs of significantly high folding entropy were observed in certain ncRNA families. More effective models coupled with targeted randomization tests can lead to a better insight into folding features of these families. PMID:23160142
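
    For a distribution that is enumerated explicitly, the quantity in question reduces to a few lines of Python; a real SCFG ensemble instead requires the polynomial-time dynamic programming derived in the paper, and the toy probabilities below are invented.

        import math

        def shannon_entropy(probs):
            """Shannon's entropy in bits of a discrete probability distribution."""
            return -sum(p * math.log2(p) for p in probs if p > 0)

        # Toy ensemble: four alternative structures with these probabilities.
        print(shannon_entropy([0.7, 0.2, 0.05, 0.05]))    # low diversity, low entropy
        print(shannon_entropy([0.25, 0.25, 0.25, 0.25]))  # maximal for four outcomes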

  13. OpenRBC: Redefining the Frontier of Red Blood Cell Simulations at Protein Resolution

    NASA Astrophysics Data System (ADS)

    Tang, Yu-Hang; Lu, Lu; Li, He; Grinberg, Leopold; Sachdeva, Vipin; Evangelinos, Constantinos; Karniadakis, George

    We present a from-scratch development of OpenRBC, a coarse-grained molecular dynamics code, which is capable of performing an unprecedented in silico experiment, simulating an entire mammalian red blood cell lipid bilayer and cytoskeleton modeled by 4 million mesoscopic particles, on a single shared memory node. To achieve this, we invented an adaptive spatial searching algorithm to accelerate the computation of short-range pairwise interactions in an extremely sparse 3D space. The algorithm is based on a Voronoi partitioning of the point cloud of coarse-grained particles, and is continuously updated over the course of the simulation. The algorithm enables the construction of a lattice-free cell list, i.e. the key spatial searching data structure in our code, in O(N) time and space, with cells whose position and shape adapt automatically to the local density and curvature. The code implements NUMA/NUCA-aware OpenMP parallelization and achieves perfect scaling with up to hundreds of hardware threads. The code outperforms a legacy solver by more than 8 times in time-to-solution and more than 20 times in problem size, thus providing a new venue for probing the cytomechanics of red blood cells. This work was supported by the Department of Energy (DOE) Collaboratory on Mathematics for Mesoscopic Modeling of Materials (CM4). YHT acknowledges partial financial support from an IBM Ph.D. Scholarship Award.

  14. A Conference on Spacecraft Charging Technology - 1978, held at U.S. Air Force Academy, Colorado Springs, Colorado, October 31 - November 2, 1978.

    DTIC Science & Technology

    1978-01-01

…complex applications of the code. NASCAP CODE DESCRIPTION: The NASCAP code is a finite-element spacecraft-charging simulation that is written in FORTRAN… The transport code POEM (ref. 1) is applicable to arbitrary dielectrics, source spectra, and current time histories. The code calculations are illustrated by…

  15. Reference results for time-like evolution up to O(α_s^3)

    NASA Astrophysics Data System (ADS)

    Bertone, Valerio; Carrazza, Stefano; Nocera, Emanuele R.

    2015-03-01

We present high-precision numerical results for time-like Dokshitzer-Gribov-Lipatov-Altarelli-Parisi evolution in the MSbar factorisation scheme, for the first time up to next-to-next-to-leading order accuracy in quantum chromodynamics. First, we scrutinise the analytical expressions of the splitting functions available in the literature, in both x and N space, and check their mutual consistency. Second, we implement time-like evolution in two publicly available, entirely independent and conceptually different numerical codes, in x and N space respectively: the already existing APFEL code, which has been updated with time-like evolution, and the new MELA code, which has been specifically developed to perform the study in this work. Third, by means of a model for fragmentation functions, we provide results for the evolution in different factorisation schemes, for different ratios between renormalisation and factorisation scales and at different final scales. Our results are collected in the format of benchmark tables, which could be used as a reference for global determinations of fragmentation functions in the future.

  16. Hybrid Raman/Brillouin-optical-time-domain-analysis-distributed optical fiber sensors based on cyclic pulse coding.

    PubMed

    Taki, M; Signorini, A; Oton, C J; Nannipieri, T; Di Pasquale, F

    2013-10-15

We experimentally demonstrate the use of cyclic pulse coding for distributed strain and temperature measurements in hybrid Raman/Brillouin optical time-domain analysis (BOTDA) optical fiber sensors. The proposed, highly integrated solution effectively addresses the strain/temperature cross-sensitivity issue affecting standard BOTDA sensors, allowing for simultaneous meter-scale strain and temperature measurements over 10 km of standard single-mode fiber using only a single narrowband laser source.

  17. A finite-volume ELLAM for three-dimensional solute-transport modeling

    USGS Publications Warehouse

    Russell, T.F.; Heberton, C.I.; Konikow, Leonard F.; Hornberger, G.Z.

    2003-01-01

    A three-dimensional finite-volume ELLAM method has been developed, tested, and successfully implemented as part of the U.S. Geological Survey (USGS) MODFLOW-2000 ground water modeling package. It is included as a solver option for the Ground Water Transport process. The FVELLAM uses space-time finite volumes oriented along the streamlines of the flow field to solve an integral form of the solute-transport equation, thus combining local and global mass conservation with the advantages of Eulerian-Lagrangian characteristic methods. The USGS FVELLAM code simulates solute transport in flowing ground water for a single dissolved solute constituent and represents the processes of advective transport, hydrodynamic dispersion, mixing from fluid sources, retardation, and decay. Implicit time discretization of the dispersive and source/sink terms is combined with a Lagrangian treatment of advection, in which forward tracking moves mass to the new time level, distributing mass among destination cells using approximate indicator functions. This allows the use of large transport time increments (large Courant numbers) with accurate results, even for advection-dominated systems (large Peclet numbers). Four test cases, including comparisons with analytical solutions and benchmarking against other numerical codes, are presented that indicate that the FVELLAM can usually yield excellent results, even if relatively few transport time steps are used, although the quality of the results is problem-dependent.

  18. Parallel Monte Carlo transport modeling in the context of a time-dependent, three-dimensional multi-physics code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Procassini, R.J.

    1997-12-31

The fine-scale, multi-space resolution that is envisioned for accurate simulations of complex weapons systems in three spatial dimensions implies flop-rate and memory-storage requirements that will only be obtained in the near future through the use of parallel computational techniques. Since the Monte Carlo transport models in these simulations usually stress both of these computational resources, they are prime candidates for parallelization. The MONACO Monte Carlo transport package, which is currently under development at LLNL, will utilize two types of parallelism within the context of a multi-physics design code: decomposition of the spatial domain across processors (spatial parallelism) and distribution of particles in a given spatial subdomain across additional processors (particle parallelism). This implementation of the package will utilize explicit data communication between domains (message passing). Such a parallel implementation of a Monte Carlo transport model will result in non-deterministic communication patterns. The communication of particles between subdomains during a Monte Carlo time step may require a significant level of effort to achieve a high parallel efficiency.

  19. One ring to rule them all: storm time ring current and its influence on radiation belts, plasmasphere and global magnetosphere electrodynamics

    NASA Astrophysics Data System (ADS)

    Buzulukova, Natalia; Fok, Mei-Ching; Glocer, Alex; Moore, Thomas E.

    2013-04-01

We report studies of the storm time ring current and its influence on the radiation belts, plasmasphere and global magnetospheric dynamics. The near-Earth space environment is described by multiscale physics that reflects a variety of processes and conditions that occur in magnetospheric plasma. For a successful description of such a plasma, a complex solution is needed which allows multiple physics domains to be described using multiple physical models. A key population of the inner magnetosphere is the ring current plasma. Ring current dynamics affects magnetic and electric fields in the entire magnetosphere, the distribution of cold ionospheric plasma (the plasmasphere), and radiation belt particles. To study the electrodynamics of the inner magnetosphere, we present an MHD model (the BATSRUS code) coupled with an ionospheric electric field solver and with a ring current-radiation belt model (the CIMI code). The model will be used as a tool to reveal details of the coupling between different regions of the Earth's magnetosphere. A model validation will also be presented based on comparison with data from the THEMIS, POLAR, GOES, and TWINS missions.

  20. PCN-index derivation at World Data Center for Geomagnetism, Copenhagen, DTU Space

    NASA Astrophysics Data System (ADS)

    Stolle, C.; Matzka, J.

    2012-04-01

The Polar Cap North (PCN) index is based on a correlation between geomagnetic disturbances at the Qaanaaq geomagnetic observatory (IAGA code THL) and the merging electric field derived from solar wind parameters. The index is therefore meant to provide a fast, ground-based, single-station indicator for variations in the merging electric field without being dependent on satellite observations. The PC index will be subject to an IAGA endorsement process during the IAGA Scientific Assembly 2013. Currently, the WDC provides near-real-time PC indices and post-processed final PC indices based on previously developed algorithms. However, the coefficients used for calculating the PCN distributed by the WDC Copenhagen are presently not reproducible. In the frame of the IAGA endorsement, DTU Space is testing new coefficients mainly based on published algorithms. This presentation will report on activities at the WDC Copenhagen and on the current status at DTU Space with respect to the preparation for the IAGA endorsement process of the PCN index.

  1. Cryptographic robustness of a quantum cryptography system using phase-time coding

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Molotkov, S. N.

    2008-01-15

A cryptographic analysis is presented of a new quantum key distribution protocol using phase-time coding. An upper bound is obtained for the error rate that guarantees secure key distribution. It is shown that the maximum tolerable error rate for this protocol depends on the counting rate in the control time slot. When no counts are detected in the control time slot, the protocol guarantees secure key distribution if the bit error rate in the sifted key does not exceed 50%. This protocol partially discriminates between errors due to system defects (e.g., imbalance of a fiber-optic interferometer) and eavesdropping. In the absence of eavesdropping, the counts detected in the control time slot are not caused by interferometer imbalance, which reduces the requirements for interferometer stability.

  2. Engineering a Live UHD Program from the International Space Station

    NASA Technical Reports Server (NTRS)

    Grubbs, Rodney; George, Sandy

    2017-01-01

The first-ever live downlink of Ultra-High Definition (UHD) video from the International Space Station (ISS) was the highlight of a “Super Session” at the National Association of Broadcasters (NAB) Show in April 2017. Ultra-High Definition is four times the resolution of “full HD” or “1080P” video. Also referred to as “4K”, the Ultra-High Definition video downlink from the ISS all the way to the Las Vegas Convention Center required considerable planning, pushed the limits of conventional video distribution from a spacecraft, and was the first use of High Efficiency Video Coding (HEVC) from a spacecraft. The live event at NAB will serve as a pathfinder for more routine downlinks of UHD as well as use of HEVC for conventional HD downlinks to save bandwidth. A similar demonstration was conducted in 2006 with the Discovery Channel to demonstrate the ability to stream HDTV from the ISS. This paper will describe the overall work flow and routing of the UHD video, how audio was synchronized even though the video and audio were received many seconds apart from each other, and how the demonstration paves the way for not only more efficient video distribution from the ISS, but also serves as a pathfinder for more complex video distribution from deep space. The paper will also describe how a “live” event was staged when the UHD video coming from the ISS had a latency of 10+ seconds. In addition, the paper will touch on the unique collaboration between the inherently governmental aspects of the ISS, commercial partners Amazon and Elemental, and the National Association of Broadcasters.

  3. Proceedings of the Spacecraft Charging Technology Conference Held in Monterey, California on 31 October - 3 November 1989. Volume 2

    DTIC Science & Technology

    1989-11-01

Approved for public release; distribution unlimited. The Spacecraft Charging Technology conference was held at the Naval… …distribution, the spacecraft will charge negatively during this time according to dV/dt = -(4πa² J_th / C) e^(V/θ), whose solution is V/θ = -ln(1 + t/τ) with τ = Cθ / (4πa² J_th).

  4. Fully non-linear multi-species Fokker-Planck-Landau collisions for gyrokinetic particle-in-cell simulations of fusion plasma

    NASA Astrophysics Data System (ADS)

    Hager, Robert; Yoon, E. S.; Ku, S.; D'Azevedo, E. F.; Worley, P. H.; Chang, C. S.

    2015-11-01

We describe the implementation and application of a time-dependent, fully nonlinear multi-species Fokker-Planck-Landau collision operator based on the single-species work of Yoon and Chang [Phys. Plasmas 21, 032503 (2014)] in the full-function gyrokinetic particle-in-cell codes XGC1 [Ku et al., Nucl. Fusion 49, 115021 (2009)] and XGCa. XGC simulations include the pedestal and scrape-off layer, where significant deviations of the particle distribution function from a Maxwellian can occur. Thus, in order to describe collisional effects on neoclassical and turbulence physics accurately, the use of a non-linear collision operator is a necessity. Our collision operator is based on a finite volume method using the velocity-space distribution functions sampled from the marker particles. Since the same fine configuration space mesh is used for collisions and the Poisson solver, the workload due to collisions can be comparable to or larger than the workload due to particle motion. We demonstrate that computing time spent on collisions can be kept affordable by applying advanced parallelization strategies while conserving mass, momentum, and energy to reasonable accuracy. We also show results of production scale XGCa simulations in the H-mode pedestal and compare to conventional theory. Work supported by US DOE OFES and OASCR.

  5. Evolving non-thermal electrons in simulations of black hole accretion

    NASA Astrophysics Data System (ADS)

Chael, Andrew A.; Narayan, Ramesh; Sądowski, Aleksander

    2017-09-01

    Current simulations of hot accretion flows around black holes assume either a single-temperature gas or, at best, a two-temperature gas with thermal ions and electrons. However, processes like magnetic reconnection and shocks can accelerate electrons into a non-thermal distribution, which will not quickly thermalize at the very low densities found in many systems. Such non-thermal electrons have been invoked to explain the infrared and X-ray spectra and strong variability of Sagittarius A* (Sgr A*), the black hole at the Galactic Center. We present a method for self-consistent evolution of a non-thermal electron population in the general relativistic magnetohydrodynamic code koral. The electron distribution is tracked across Lorentz factor space and is evolved in space and time, in parallel with thermal electrons, thermal ions and radiation. In this study, for simplicity, energy injection into the non-thermal distribution is taken as a fixed fraction of the local electron viscous heating rate. Numerical results are presented for a model with a low mass accretion rate similar to that of Sgr A*. We find that the presence of a non-thermal population of electrons has negligible effect on the overall dynamics of the system. Due to our simple uniform particle injection prescription, the radiative power in the non-thermal simulation is enhanced at large radii. The energy distribution of the non-thermal electrons shows a synchrotron cooling break, with the break Lorentz factor varying with location and time, reflecting the complex interplay between the local viscous heating rate, magnetic field strength and fluid velocity.

  6. Population Synthesis of Radio and Gamma-ray Pulsars using the Maximum Likelihood Approach

    NASA Astrophysics Data System (ADS)

    Billman, Caleb; Gonthier, P. L.; Harding, A. K.

    2012-01-01

We present the results of a pulsar population synthesis of normal pulsars from the Galactic disk using a maximum likelihood method. We seek to maximize the likelihood of a set of parameters in a Monte Carlo population statistics code to better understand their uncertainties and the confidence region of the model's parameter space. The maximum likelihood method allows for the use of more applicable Poisson statistics in the comparison of distributions of small numbers of detected gamma-ray and radio pulsars. Our code simulates pulsars at birth using Monte Carlo techniques and evolves them to the present assuming initial spatial, kick velocity, magnetic field, and period distributions. Pulsars are spun down to the present and given radio and gamma-ray emission characteristics. We select measured distributions of radio pulsars from the Parkes Multibeam survey and of Fermi gamma-ray pulsars to perform a likelihood analysis of the assumed model parameters, such as the initial period, magnetic field, and radio luminosity distributions. We present the results of a grid search of the parameter space as well as a search for the maximum likelihood using a Markov Chain Monte Carlo method. We express our gratitude for the generous support of the Michigan Space Grant Consortium, of the National Science Foundation (REU and RUI), the NASA Astrophysics Theory and Fundamental Program and the NASA Fermi Guest Investigator Program.
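
    The Poisson comparison the abstract refers to amounts to a binned log-likelihood of detected pulsar counts against model-predicted counts. A minimal sketch of that statistic is shown below; the binning and the numbers in the example are invented, not taken from the study.

    ```python
    import math

    def poisson_log_likelihood(observed, expected):
        """Binned Poisson log-likelihood
        ln L = sum_i [ d_i ln(m_i) - m_i - ln(d_i!) ]
        for observed counts d_i and model-predicted counts m_i per bin."""
        logL = 0.0
        for d, m in zip(observed, expected):
            m = max(m, 1e-30)  # guard against empty model bins
            logL += d * math.log(m) - m - math.lgamma(d + 1)
        return logL

    # Example: detected pulsar counts per period bin vs. a simulated model.
    print(poisson_log_likelihood([3, 7, 2, 0], [2.5, 6.0, 3.1, 0.4]))
    ```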

  7. A resistive magnetohydrodynamics solver using modern C++ and the Boost library

    NASA Astrophysics Data System (ADS)

    Einkemmer, Lukas

    2016-09-01

In this paper we describe the implementation of our C++ resistive magnetohydrodynamics solver. The framework developed facilitates the separation of the code implementing the specific numerical method and the physical model from the handling of boundary conditions and the management of the computational domain. In particular, this will allow us to use finite difference stencils which are only defined in the interior of the domain (the boundary conditions are handled automatically). We will discuss this and other design considerations and their impact on performance in some detail. In addition, we provide documentation of the code developed and demonstrate that a performance comparable to Fortran can be achieved, while still maintaining a maximum of code readability and extensibility. Catalogue identifier: AFAH_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AFAH_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 592774 No. of bytes in distributed program, including test data, etc.: 43771395 Distribution format: tar.gz Programming language: C++03. Computer: PC, HPC systems. Operating system: POSIX compatible (extensively tested on various Linux systems). In fact only the timing class requires POSIX routines; all other parts of the program can be run on any system where a C++ compiler, Boost, CVODE, and an implementation of BLAS are available. RAM: Hundreds of kilobytes to gigabytes (depending on the problem size) Classification: 19.10, 4.3. External routines: Boost, CVODE, either a BLAS library or Intel MKL Nature of problem: An approximate solution to the equations of resistive magnetohydrodynamics for a given initial value and given boundary conditions is computed. Solution method: The discretization is performed using a finite difference approximation in space and the CVODE library in time (which employs a scheme based on the backward differentiation formulas). Restrictions: We consider the 2.5 dimensional case; that is, the magnetic field and the velocity field are three dimensional but all quantities depend only on x and y (but not z). Unusual features: We provide an implementation in C++ using the Boost library that combines high level techniques (which greatly increases code maintainability and extensibility) with performance that is comparable to Fortran implementations. Running time: From seconds to weeks (depending on the problem size).

  8. Infrastructure Upgrades to Support Model Longevity and New Applications: The Variable Infiltration Capacity Model Version 5.0 (VIC 5.0)

    NASA Astrophysics Data System (ADS)

    Nijssen, B.; Hamman, J.; Bohn, T. J.

    2015-12-01

The Variable Infiltration Capacity (VIC) model is a macro-scale, semi-distributed hydrologic model. VIC development began in the early 1990s and it has been used extensively, applied from basin to global scales. VIC has been applied in many use cases, including the construction of hydrologic data sets, trend analysis, data evaluation and assimilation, forecasting, coupled climate modeling, and climate change impact analysis. Ongoing applications of the VIC model include the University of Washington's drought monitor and forecast systems, and NASA's land data assimilation systems. The development of VIC version 5.0 focused on reconfiguring the legacy VIC source code to support a wider range of modern modeling applications. The VIC source code has been moved to a public Github repository to encourage participation by the model development community-at-large. The reconfiguration has separated the physical core of the model from the driver, which is responsible for memory allocation, pre- and post-processing, and I/O. VIC 5.0 includes four drivers that use the same physical model core: classic, image, CESM, and Python. The classic driver supports legacy VIC configurations and runs in the traditional time-before-space configuration. The image driver includes a space-before-time configuration, netCDF I/O, and uses MPI for parallel processing. This configuration facilitates the direct coupling of streamflow routing, reservoir, and irrigation processes within VIC. The image driver is the foundation of the CESM driver, which couples VIC to CESM's CPL7 and a prognostic atmosphere. Finally, we have added a Python driver that provides access to the functions and datatypes of VIC's physical core from a Python interface. This presentation demonstrates how reconfiguring legacy source code extends the life and applicability of a research model.
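
    At its core, the classic/image driver distinction is a loop-ordering choice. The schematic below illustrates it with a hypothetical `run_cell` stand-in for VIC's physical core; it is not actual VIC code.

    ```python
    def run_cell(cell, t):
        """Stand-in for one call into VIC's physical core (hypothetical)."""
        pass

    def classic_driver(cells, timesteps):
        # Time-before-space: run each grid cell through the whole
        # simulation period before touching the next cell (legacy VIC).
        for cell in cells:
            for t in timesteps:
                run_cell(cell, t)

    def image_driver(cells, timesteps):
        # Space-before-time: advance all cells one step at a time, which
        # is what lets routing, reservoir, and irrigation processes couple
        # cells within a step and maps naturally onto MPI decomposition.
        for t in timesteps:
            for cell in cells:
                run_cell(cell, t)

    # e.g. image_driver(cells=["cellA", "cellB"], timesteps=range(3))
    ```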

  9. Automated Detection and Analysis of Interplanetary Shocks with Real-Time Application

    NASA Astrophysics Data System (ADS)

    Vorotnikov, V.; Smith, C. W.; Hu, Q.; Szabo, A.; Skoug, R. M.; Cohen, C. M.

    2006-12-01

The ACE real-time data stream provides web-based nowcasting capabilities for solar wind conditions upstream of Earth. Our goal is to provide an automated code that finds and analyzes interplanetary shocks as they occur, for possible real-time application to space weather nowcasting. Shock analysis algorithms based on the Rankine-Hugoniot jump conditions exist and are in widespread use today for the interactive analysis of interplanetary shocks, yielding parameters such as shock speed, propagation direction, and shock strength in the form of compression ratios. Although these codes can be automated in a reasonable manner to yield solutions not far from those obtained by user-directed interactive analysis, event detection presents an added obstacle and is the first step in a fully automated analysis. We present a fully automated Rankine-Hugoniot analysis code that can scan the ACE science data, find shock candidates, analyze the events, obtain solutions in good agreement with those derived from interactive applications, and dismiss false-positive shock candidates on the basis of the conservation equations. The intent is to make this code available to NOAA for use in real-time space weather applications. The code has the added advantage of being able to scan spacecraft data sets to provide shock solutions for use outside real-time applications and can easily be applied to science-quality data sets from other missions. Use of the code for this purpose will also be explored.
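
    For a flavor of the jump-condition tests involved, the sketch below screens a candidate event using only the mass-conservation Rankine-Hugoniot relation and a crude compression-ratio check; the full analysis solves the complete set of jump conditions, and the numbers here are invented.

    ```python
    def shock_speed_mass_flux(rho_up, u_up, rho_dn, u_dn):
        """Shock speed from mass conservation across the jump:
        rho_up (u_up - Vs) = rho_dn (u_dn - Vs)  =>
        Vs = (rho_dn*u_dn - rho_up*u_up) / (rho_dn - rho_up).
        u_up/u_dn are the (radial) flow speeds on either side."""
        return (rho_dn * u_dn - rho_up * u_up) / (rho_dn - rho_up)

    def looks_like_fast_forward_shock(rho_up, u_up, rho_dn, u_dn):
        """Crude screen: density compression 1 < X <= 4 (the strong-shock
        limit for gamma = 5/3) and increased flow speed downstream."""
        X = rho_dn / rho_up
        return 1.0 < X <= 4.0 and u_dn > u_up

    # Example upstream/downstream averages (densities in cm^-3, speeds km/s):
    print(shock_speed_mass_flux(5.0, 400.0, 12.0, 480.0))        # ~537 km/s
    print(looks_like_fast_forward_shock(5.0, 400.0, 12.0, 480.0))  # True
    ```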

  10. Duct flow nonuniformities for Space Shuttle Main Engine (SSME)

    NASA Technical Reports Server (NTRS)

    1987-01-01

A three-duct Space Shuttle Main Engine (SSME) Hot Gas Manifold geometry code was developed. The methodology of the program is described, recommendations on its implementation are made, and an input guide, an input deck listing, and a source code listing are provided. The code listing is liberally commented to help the user follow its development and logic. A working source deck will be provided. A thorough analysis was made of the proper boundary conditions and chemistry kinetics necessary for an accurate computational analysis of the flow environment in the SSME fuel-side preburner chamber during the initial startup transient. Pertinent results were presented to facilitate incorporation of these findings into an appropriate CFD code. The computation must be a turbulent computation, since turbulent mixing in the flow field will have a profound effect on the chemistry. Because of the additional equations demanded by the chemistry model, it is recommended that, for expediency, a simple algebraic mixing-length model be adopted. Performing this computation for all or selected intervals of the startup time will require an abundance of computer CPU time regardless of the specific CFD code selected.

  11. SHIELD and HZETRN comparisons of pion production cross sections

    NASA Astrophysics Data System (ADS)

    Norbury, John W.; Sobolevsky, Nikolai; Werneth, Charles M.

    2018-03-01

A program of comparing American (NASA) and Russian (ROSCOSMOS) space radiation transport codes has recently begun, and the first paper directly comparing the NASA and ROSCOSMOS space radiation transport codes, HZETRN and SHIELD respectively, has recently appeared. The present work represents the second time that NASA and ROSCOSMOS calculations have been directly compared; the focus here is on the models of pion production cross sections used in the two transport codes mentioned above. It was found that these models are in overall moderate agreement with each other and with experimental data. The disagreements that were found are discussed.

  12. Behind the Mosaic: Insurgent Centers of Gravity and Counterinsurgency

    DTIC Science & Technology

    2011-12-01

…centers of gravity vary by time, space, and purpose. While Clausewitz's key statement on a center of gravity defines a single center of gravity, he… …explicitly or implicitly, that multiple centers of gravity can vary with time, space, and purpose. Shimon Naveh, retired Israeli Reserve Brigadier… …century military forces, which in turn expanded operations in time and space. The integration of operations distributed in time and space…

  13. The HydroServer Platform for Sharing Hydrologic Data

    NASA Astrophysics Data System (ADS)

    Tarboton, D. G.; Horsburgh, J. S.; Schreuders, K.; Maidment, D. R.; Zaslavsky, I.; Valentine, D. W.

    2010-12-01

The CUAHSI Hydrologic Information System (HIS) is an internet-based system that supports sharing of hydrologic data. HIS consists of databases connected using the Internet through Web services, as well as software for data discovery, access, and publication. The HIS system architecture is comprised of servers for publishing and sharing data, a centralized catalog to support cross-server data discovery, and a desktop client to access and analyze data. This paper focuses on HydroServer, the component developed for sharing and publishing space-time hydrologic datasets. A HydroServer is a computer server that contains a collection of databases, web services, tools, and software applications that allow data producers to store, publish, and manage the data from an experimental watershed or project site. HydroServer is designed to permit publication of data as part of a distributed national/international system, while still locally managing access to the data. We describe the HydroServer architecture and software stack, including tools for managing and publishing time series data for fixed-point monitoring sites as well as spatially distributed GIS datasets that describe a particular study area, watershed, or region. HydroServer adopts a standards-based approach to data publication, relying on accepted and emerging standards for data storage and transfer. The CUAHSI-developed HydroServer code is free, with community code development managed through the CodePlex open-source code repository and development system. There is some reliance on widely used commercial software for general-purpose and standard data publication capability. The sharing of data in a common format is one way to stimulate interdisciplinary research and collaboration. It is anticipated that the growing, distributed network of HydroServers will facilitate cross-site comparisons and large-scale studies that synthesize information from diverse settings, making the network as a whole greater than the sum of its parts in advancing hydrologic research. Details of the CUAHSI HIS can be found at http://his.cuahsi.org, and the HydroServer CodePlex site at http://hydroserver.codeplex.com.

  14. Renovating and Reconstructing in Phases--Specifying Phased Construction.

    ERIC Educational Resources Information Center

    Bunzick, John

    2002-01-01

    Discusses planning for phased school construction projects, including effects on occupancy (for example, construction adjacent to occupied space, construction procedure safety zones near occupied areas, and code-complying means of egress), effects on building systems (such as heating and cooling equipment and power distribution), and contract…

  15. Multi-optimization Criteria-based Robot Behavioral Adaptability and Motion Planning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pin, Francois G.

    2002-06-01

Robotic tasks are typically defined in Task Space (e.g., the 3-D world), whereas robots are controlled in Joint Space (motors). The transformation from Task Space to Joint Space must consider the task objectives (e.g., high precision, strength optimization, torque optimization), the task constraints (e.g., obstacles, joint limits, non-holonomic constraints, contact or tool task constraints), and the robot kinematics configuration (e.g., tools, type of joints, mobile platform, manipulator, modular additions, locked joints). Commercially available robots are optimized for a specific set of tasks, objectives and constraints and, therefore, their control codes are extremely specific to a particular set of conditions. Thus, there exists a multiplicity of codes, each handling a particular set of conditions, but none suitable for use on robots with widely varying tasks, objectives, constraints, or environments. On the other hand, most DOE missions and tasks are typically "batches of one". Attempting to use commercial codes for such work requires significant personnel and schedule costs for re-programming or adding code to the robots whenever a change in task objective, robot configuration, number and type of constraints, etc., occurs. The objective of our project is to develop a "generic code" to implement this Task-Space to Joint-Space transformation that would allow robot behavior adaptation, in real time (at loop rate), to changes in task objectives, number and type of constraints, modes of control, and kinematics configuration (e.g., new tools, added modules). Our specific goal is to develop a single code for the general solution of under-specified systems of algebraic equations that is suitable for solving the inverse kinematics of robots, is usable for all types of robots (mobile robots, manipulators, mobile manipulators, etc.) with no limitation on the number of joints and the number of controlled Task-Space variables, can adapt to real-time changes in the number and type of constraints and in task objectives, and can adapt to changes in kinematics configuration (change of module, change of tool, joint failure adaptation, etc.).
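
    A common general solution for such under-specified (redundant) systems is a damped Jacobian pseudoinverse with a null-space term for secondary objectives. The sketch below illustrates that standard technique only; it is not the ORNL code, and all names and numbers are illustrative.

    ```python
    import numpy as np

    def ik_step(J, dx, q_dot_0=None, damping=1e-2):
        """One resolved-rate step for an under-specified system J q_dot = dx.

        J       : m x n task Jacobian with n > m (redundant robot)
        dx      : desired task-space velocity, shape (m,)
        q_dot_0 : optional joint velocity projected into the null space,
                  used for secondary objectives (joint limits, posture, ...)
        Uses damped least squares: q_dot = J^T (J J^T + lambda^2 I)^-1 dx.
        """
        m, n = J.shape
        JJt = J @ J.T + damping**2 * np.eye(m)
        J_pinv = J.T @ np.linalg.solve(JJt, np.eye(m))
        q_dot = J_pinv @ dx
        if q_dot_0 is not None:
            # Null-space projector N = I - J^+ J leaves the task untouched.
            q_dot += (np.eye(n) - J_pinv @ J) @ q_dot_0
        return q_dot

    # 2-D task, 3-joint planar arm (redundant): track dx while nudging joints.
    J = np.array([[1.0, 0.5, 0.2],
                  [0.0, 1.0, 0.7]])
    print(ik_step(J, np.array([0.1, -0.05]), q_dot_0=np.array([0.0, 0.0, 0.1])))
    ```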

  16. Spiral computed tomography phase-space source model in the BEAMnrc/EGSnrc Monte Carlo system: implementation and validation.

    PubMed

    Kim, Sangroh; Yoshizumi, Terry T; Yin, Fang-Fang; Chetty, Indrin J

    2013-04-21

    Currently, the BEAMnrc/EGSnrc Monte Carlo (MC) system does not provide a spiral CT source model for the simulation of spiral CT scanning. We developed and validated a spiral CT phase-space source model in the BEAMnrc/EGSnrc system. The spiral phase-space source model was implemented in the DOSXYZnrc user code of the BEAMnrc/EGSnrc system by analyzing the geometry of spiral CT scan-scan range, initial angle, rotational direction, pitch, slice thickness, etc. Table movement was simulated by changing the coordinates of the isocenter as a function of beam angles. Some parameters such as pitch, slice thickness and translation per rotation were also incorporated into the model to make the new phase-space source model, designed specifically for spiral CT scan simulations. The source model was hard-coded by modifying the 'ISource = 8: Phase-Space Source Incident from Multiple Directions' in the srcxyznrc.mortran and dosxyznrc.mortran files in the DOSXYZnrc user code. In order to verify the implementation, spiral CT scans were simulated in a CT dose index phantom using the validated x-ray tube model of a commercial CT simulator for both the original multi-direction source (ISOURCE = 8) and the new phase-space source model in the DOSXYZnrc system. Then the acquired 2D and 3D dose distributions were analyzed with respect to the input parameters for various pitch values. In addition, surface-dose profiles were also measured for a patient CT scan protocol using radiochromic film and were compared with the MC simulations. The new phase-space source model was found to simulate the spiral CT scanning in a single simulation run accurately. It also produced the equivalent dose distribution of the ISOURCE = 8 model for the same CT scan parameters. The MC-simulated surface profiles were well matched to the film measurement overall within 10%. The new spiral CT phase-space source model was implemented in the BEAMnrc/EGSnrc system. This work will be beneficial in estimating the spiral CT scan dose in the BEAMnrc/EGSnrc system.
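
    The table-movement bookkeeping described (isocenter coordinates as a function of beam angle) reduces to a linear map of the accumulated gantry angle. The sketch below illustrates that reading with hypothetical variable names; it is not the actual DOSXYZnrc Mortran source, and the feed-per-rotation definition is one common CT convention.

    ```python
    def spiral_isocenter_z(z_start, feed_per_rotation, angle_deg,
                           start_angle_deg=0.0, direction=+1):
        """Isocenter z-coordinate at a given gantry angle in a spiral scan.

        The couch advances `feed_per_rotation` (commonly pitch times the
        total beam collimation) for every 360 degrees of gantry rotation,
        so the isocenter position is linear in the swept beam angle.
        """
        swept = direction * (angle_deg - start_angle_deg)
        return z_start + feed_per_rotation * swept / 360.0

    # Pitch 0.75 with 40 mm collimation -> 30 mm table feed per rotation:
    for a in (0.0, 90.0, 360.0, 720.0):
        print(a, spiral_isocenter_z(0.0, 30.0, a))  # 0, 7.5, 30, 60 mm
    ```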

  17. Spiral computed tomography phase-space source model in the BEAMnrc/EGSnrc Monte Carlo system: implementation and validation

    NASA Astrophysics Data System (ADS)

    Kim, Sangroh; Yoshizumi, Terry T.; Yin, Fang-Fang; Chetty, Indrin J.

    2013-04-01

    Currently, the BEAMnrc/EGSnrc Monte Carlo (MC) system does not provide a spiral CT source model for the simulation of spiral CT scanning. We developed and validated a spiral CT phase-space source model in the BEAMnrc/EGSnrc system. The spiral phase-space source model was implemented in the DOSXYZnrc user code of the BEAMnrc/EGSnrc system by analyzing the geometry of spiral CT scan—scan range, initial angle, rotational direction, pitch, slice thickness, etc. Table movement was simulated by changing the coordinates of the isocenter as a function of beam angles. Some parameters such as pitch, slice thickness and translation per rotation were also incorporated into the model to make the new phase-space source model, designed specifically for spiral CT scan simulations. The source model was hard-coded by modifying the ‘ISource = 8: Phase-Space Source Incident from Multiple Directions’ in the srcxyznrc.mortran and dosxyznrc.mortran files in the DOSXYZnrc user code. In order to verify the implementation, spiral CT scans were simulated in a CT dose index phantom using the validated x-ray tube model of a commercial CT simulator for both the original multi-direction source (ISOURCE = 8) and the new phase-space source model in the DOSXYZnrc system. Then the acquired 2D and 3D dose distributions were analyzed with respect to the input parameters for various pitch values. In addition, surface-dose profiles were also measured for a patient CT scan protocol using radiochromic film and were compared with the MC simulations. The new phase-space source model was found to simulate the spiral CT scanning in a single simulation run accurately. It also produced the equivalent dose distribution of the ISOURCE = 8 model for the same CT scan parameters. The MC-simulated surface profiles were well matched to the film measurement overall within 10%. The new spiral CT phase-space source model was implemented in the BEAMnrc/EGSnrc system. This work will be beneficial in estimating the spiral CT scan dose in the BEAMnrc/EGSnrc system.

  18. High performance and highly reliable Raman-based distributed temperature sensors based on correlation-coded OTDR and multimode graded-index fibers

    NASA Astrophysics Data System (ADS)

    Soto, M. A.; Sahu, P. K.; Faralli, S.; Sacchi, G.; Bolognini, G.; Di Pasquale, F.; Nebendahl, B.; Rueck, C.

    2007-07-01

The performance of distributed temperature sensor (DTS) systems based on spontaneous Raman scattering and coded OTDR is investigated. The evaluated DTS system, which is based on correlation coding, uses graded-index multimode fibers and operates over short-to-medium distances (up to 8 km) with high spatial and temperature resolutions (better than 1 m and 0.3 K at 4 km distance with 10 min measuring time) and high repeatability throughout a wide temperature range.

  19. Incorporating Manual and Autonomous Code Generation

    NASA Technical Reports Server (NTRS)

    McComas, David

    1998-01-01

Code can be generated manually or using code-generation software tools, but how do you integrate the two? This article looks at a design methodology that combines object-oriented design with autonomous code generation for attitude control flight software. Recent improvements in space flight computers are allowing software engineers to spend more time engineering the applications software. The application developed was the attitude control flight software for an astronomical satellite called the Microwave Anisotropy Probe (MAP). The MAP flight system is being designed, developed, and integrated at NASA's Goddard Space Flight Center. The MAP controls engineers are using Integrated Systems Inc.'s MATRIXx for their controls analysis. In addition to providing a graphical analysis environment, MATRIXx includes an automatic code generation facility called AutoCode. This article examines the forces that shaped the final design and describes three highlights of the design process: (1) defining the interface between the manual and automatically generated code; (2) applying object-oriented design to the manual flight code; (3) implementing the object-oriented design in C.

  20. Conditional optimal spacing in exponential distribution.

    PubMed

    Park, Sangun

    2006-12-01

In this paper, we propose the conditional optimal spacing, defined as the optimal spacing after specifying a predetermined order statistic. If we specify a censoring time, then the optimal inspection times for grouped inspection can be determined from this conditional optimal spacing. We take the exponential distribution as an example, and provide a simple method of finding the conditional optimal spacing.

  1. COnstrained Data Extrapolation (CODE): A new approach for high definition vascular imaging from low resolution data.

    PubMed

    Song, Yang; Hamtaei, Ehsan; Sethi, Sean K; Yang, Guang; Xie, Haibin; Mark Haacke, E

    2017-12-01

To introduce a new approach to reconstruct high definition vascular images using COnstrained Data Extrapolation (CODE) and evaluate its capability in estimating vessel area and stenosis. CODE is based on the constraint that the full width at half maximum of a vessel can be accurately estimated and, since it represents the best estimate of the width of the object, higher k-space data can be generated from this information. To demonstrate the potential of extracting high definition vessel edges using low resolution data, both simulated and human data were analyzed to better visualize the vessels and to quantify both area and stenosis measurements. The results from CODE using one-fourth of the fully sampled k-space data were compared with a compressed sensing (CS) reconstruction approach using the same total amount of data but spread out between the center of k-space and the outer portions of the original k-space to accelerate data acquisition by a factor of four. For a sufficiently high signal-to-noise ratio (SNR) such as 16 (8), we found that objects as small as 3 voxels in the 25% under-sampled data (6 voxels when zero-filled) could be used for CODE and CS and provide an estimate of area with an error <5% (10%). For estimating up to a 70% stenosis with an SNR of 4, CODE was found to be more robust to noise than CS, having a smaller variance albeit a larger bias. Reconstruction times were >200 (30) times faster for CODE compared to CS in the simulated (human) data. CODE was capable of producing sharp sub-voxel edges and accurately estimating stenosis to within 5% for clinically relevant studies of vessels with a width of at least 3 pixels in the low resolution images. Copyright © 2017 Elsevier Inc. All rights reserved.
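
    CODE's central constraint is a reliable full-width-at-half-maximum estimate from a low-resolution profile. The sketch below shows a generic FWHM measurement with linear interpolation of the half-maximum crossings; it illustrates the quantity only, not the authors' reconstruction pipeline.

    ```python
    import numpy as np

    def fwhm(profile, dx=1.0):
        """Full width at half maximum of a single-peaked 1-D profile,
        with linear interpolation of the two half-maximum crossings."""
        y = np.asarray(profile, dtype=float)
        half = y.max() / 2.0
        above = np.where(y >= half)[0]
        i0, i1 = above[0], above[-1]
        # Interpolate the left and right crossings between samples.
        left = i0 - (y[i0] - half) / (y[i0] - y[i0 - 1]) if i0 > 0 else i0
        right = i1 + (y[i1] - half) / (y[i1] - y[i1 + 1]) if i1 < len(y) - 1 else i1
        return (right - left) * dx

    # Gaussian test: FWHM should be ~2.355 * sigma (sigma = 1 here).
    x = np.linspace(-10, 10, 201)
    print(fwhm(np.exp(-x**2 / 2), dx=0.1))  # ~2.35
    ```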

  2. Analysis of automatic repeat request methods for deep-space downlinks

    NASA Technical Reports Server (NTRS)

    Pollara, F.; Ekroot, L.

    1995-01-01

    Automatic repeat request (ARQ) methods cannot increase the capacity of a memoryless channel. However, they can be used to decrease the complexity of the channel-coding system to achieve essentially error-free transmission and to reduce link margins when the channel characteristics are poorly predictable. This article considers ARQ methods on a power-limited channel (e.g., the deep-space channel), where it is important to minimize the total power needed to transmit the data, as opposed to a bandwidth-limited channel (e.g., terrestrial data links), where the spectral efficiency or the total required transmission time is the most relevant performance measure. In the analysis, we compare the performance of three reference concatenated coded systems used in actual deep-space missions to that obtainable by ARQ methods using the same codes, in terms of required power, time to transmit with a given number of retransmissions, and achievable probability of word error. The ultimate limits of ARQ with an arbitrary number of retransmissions are also derived.
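
    For intuition about the power accounting, note that with error-free feedback the number of transmissions of a word is geometrically distributed. A minimal sketch of the resulting averages follows; it is idealized and ignores the protocol overheads the article accounts for.

    ```python
    def expected_transmissions(p_word_error):
        """Mean transmissions per codeword for idealized ARQ with perfect
        feedback: attempts are geometric, so E[T] = 1 / (1 - p)."""
        assert 0.0 <= p_word_error < 1.0
        return 1.0 / (1.0 - p_word_error)

    def arq_energy_factor(p_word_error):
        """Average energy cost relative to a single transmission; on a
        power-limited deep-space link this multiplies per-word energy."""
        return expected_transmissions(p_word_error)

    # Operating at a 1% word error rate costs only ~1% extra energy on
    # average, while the residual error rate after k allowed attempts
    # falls geometrically as p**k:
    p = 0.01
    print(expected_transmissions(p))  # ~1.0101
    print(p ** 3)                     # residual after 3 attempts: 1e-06
    ```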

  3. Prescription opioid poisoning across urban and rural areas: identifying vulnerable groups and geographic areas.

    PubMed

    Cerdá, Magdalena; Gaidus, Andrew; Keyes, Katherine M; Ponicki, William; Martins, Silvia; Galea, Sandro; Gruenewald, Paul

    2017-01-01

    To determine (1) whether prescription opioid poisoning (PO) hospital discharges spread across space over time, (2) the locations of 'hot-spots' of PO-related hospital discharges, (3) how features of the local environment contribute to the growth in PO-related hospital discharges and (4) where each environmental feature makes the strongest contribution. Hierarchical Bayesian Poisson space-time analysis to relate annual discharges from community hospitals to postal code characteristics over 10 years. California, USA. Residents of 18 517 postal codes in California, 2001-11. Annual postal code-level counts of hospital discharges due to PO poisoning were related to postal code pharmacy density, measures of medical need for POs (i.e. rates of cancer and arthritis-related hospital discharges), economic stressors (i.e. median household income, percentage of families in poverty and the unemployment rate) and concentration of manual labor industries. PO-related hospital discharges spread from rural and suburban/exurban 'hot-spots' to urban areas. They increased more in postal codes with greater pharmacy density [rate ratio (RR) = 1.03; 95% credible interval (CI) = 1.01, 1.05], more arthritis-related hospital discharges (RR = 1.08; 95% CI = 1.06, 1.11), lower income (RR = 0.85; 95% CI = 0.83, 0.87) and more manual labor industries (RR = 1.15; 95% CI = 1.10, 1.19 for construction; RR = 1.12; 95% CI = 1.04, 1.20 for manufacturing industries). Changes in pharmacy density primarily affected PO-related discharges in urban areas, while changes in income and manual labor industries especially affected PO-related discharges in suburban/exurban and rural areas. Hospital discharge rates for prescription opioid (PO) poisoning spread from rural and suburban/exurban hot-spots to urban areas, suggesting spatial contagion. The distribution of age-related and work-place-related sources of medical need for POs in rural areas and, to a lesser extent, the availability of POs through pharmacies in urban areas, partly explain the growth of PO poisoning across California, USA. © 2016 Society for the Study of Addiction.

  4. General Purpose Graphics Processing Unit Based High-Rate Rice Decompression and Reed-Solomon Decoding

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Loughry, Thomas A.

As the volume of data acquired by space-based sensors increases, mission data compression/decompression and forward error correction code processing performance must likewise scale. This competency development effort explored using the General Purpose Graphics Processing Unit (GPGPU) to accomplish high-rate Rice decompression and high-rate Reed-Solomon (RS) decoding at the satellite mission ground station. Each algorithm was implemented and benchmarked on a single GPGPU. Distributed processing across one to four GPGPUs was also investigated. The results show that the GPGPU has considerable potential for performing satellite communication data signal processing, with three times or better performance improvements and up to ten times reduction in cost over custom hardware, at least in the case of Rice decompression and Reed-Solomon decoding.
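
    The decompression half of the workload is per-sample Golomb-Rice decoding: a unary quotient followed by a k-bit remainder. The sketch below shows that basic split under one common bit convention; real CCSDS 121.0 Rice coding adds prediction, sample mapping, and per-block option selection, and the GPGPU version parallelizes across blocks.

    ```python
    def rice_decode(bits, k, n_samples):
        """Decode `n_samples` Golomb-Rice codewords from an iterable of bits.

        Each codeword is a unary quotient q (here: q ones terminated by a
        zero) followed by a k-bit remainder r, MSB first; the decoded value
        is q * 2**k + r.
        """
        it = iter(bits)
        out = []
        for _ in range(n_samples):
            q = 0
            while next(it) == 1:    # unary part
                q += 1
            r = 0
            for _ in range(k):      # k-bit binary remainder
                r = (r << 1) | next(it)
            out.append((q << k) | r)
        return out

    # k=2 codewords: value 5 -> q=1, r=1 -> bits 1 0 0 1; value 2 -> 0 1 0.
    print(rice_decode([1, 0, 0, 1, 0, 1, 0], k=2, n_samples=2))  # [5, 2]
    ```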

  5. Cold flow testing of the Space Shuttle Main Engine high pressure fuel turbine model

    NASA Technical Reports Server (NTRS)

    Hudson, Susan T.; Gaddis, Stephen W.; Johnson, P. D.; Boynton, James L.

    1991-01-01

    In order to experimentally determine the performance of the Space Shuttle Main Engine (SSME) High Pressure Fuel Turbopump (HPFTP) turbine, a 'cold' air flow turbine test program was established at NASA's Marshall Space Flight Center. As part of this test program, a baseline test of Rocketdyne's HPFTP turbine has been completed. The turbine performance and turbine diagnostics such as airfoil surface static pressure distributions, static pressure drops through the turbine, and exit swirl angles were investigated at the turbine design point, over its operating range, and at extreme off-design points. The data was compared to pretest predictions with good results. The test data has been used to improve meanline prediction codes and is now being used to validate various three-dimensional codes. The data will also be scaled to engine conditions and used to improve the SSME steady-state performance model.

  6. Methods of alleviation of ionospheric scintillation effects on digital communications

    NASA Technical Reports Server (NTRS)

    Massey, J. L.

    1974-01-01

    The degradation of the performance of digital communication systems because of ionospheric scintillation effects can be reduced either by diversity techniques or by coding. The effectiveness of traditional space-diversity, frequency-diversity and time-diversity techniques is reviewed and design considerations isolated. Time-diversity signaling is then treated as an extremely simple form of coding. More advanced coding methods, such as diffuse threshold decoding and burst-trapping decoding, which appear attractive in combatting scintillation effects are discussed and design considerations noted. Finally, adaptive coding techniques appropriate when the general state of the channel is known are discussed.
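
    Viewed as a rate-1/L repetition code, time-diversity signaling decodes by majority vote across slots spaced beyond the scintillation decorrelation time. A minimal hard-decision sketch follows; the bit streams are invented for illustration.

    ```python
    from collections import Counter

    def time_diversity_decode(received_copies):
        """Majority-vote decoding of L time-diversity repetitions per bit.

        `received_copies` is a list of L hard-decision bit lists, one per
        time slot; slots are assumed separated by more than the channel
        decorrelation time so their fades are independent.
        """
        decoded = []
        for slot_bits in zip(*received_copies):
            decoded.append(Counter(slot_bits).most_common(1)[0][0])
        return decoded

    # Three copies of the stream 1 0 1 1, each with one independent error:
    copies = [[1, 0, 1, 0],
              [1, 1, 1, 1],
              [0, 0, 1, 1]]
    print(time_diversity_decode(copies))  # [1, 0, 1, 1]
    ```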

  7. SiGN-SSM: open source parallel software for estimating gene networks with state space models.

    PubMed

    Tamada, Yoshinori; Yamaguchi, Rui; Imoto, Seiya; Hirose, Osamu; Yoshida, Ryo; Nagasaki, Masao; Miyano, Satoru

    2011-04-15

SiGN-SSM is open-source gene network estimation software able to run in parallel on PCs and massively parallel supercomputers. The software estimates a state space model (SSM), a statistical dynamic model suitable for analyzing short time and/or replicated time series gene expression profiles. SiGN-SSM implements a novel parameter constraint effective in stabilizing the estimated models. Also, by using a supercomputer, it is able to determine the gene network structure by a statistical permutation test in practical time. SiGN-SSM is applicable not only to analyzing temporal regulatory dependencies between genes, but also to extracting the differentially regulated genes from time series expression profiles. SiGN-SSM is distributed under the GNU Affero General Public Licence (GNU AGPL) version 3 and can be downloaded at http://sign.hgc.jp/signssm/. Pre-compiled binaries for some architectures are available in addition to the source code. The pre-installed binaries are also available on the Human Genome Center supercomputer system. The online manual and the supplementary information for SiGN-SSM are available on our web site. tamada@ims.u-tokyo.ac.jp.

  8. An Infrastructure for UML-Based Code Generation Tools

    NASA Astrophysics Data System (ADS)

    Wehrmeister, Marco A.; Freitas, Edison P.; Pereira, Carlos E.

The use of Model-Driven Engineering (MDE) techniques in the domain of distributed embedded real-time systems is gaining importance as a way to cope with the increasing design complexity of such systems. This paper discusses an infrastructure created to build GenERTiCA, a flexible tool that supports an MDE approach, which uses aspect-oriented concepts to handle non-functional requirements from the embedded and real-time systems domain. GenERTiCA generates source code from UML models, and also performs weaving of aspects, which have been specified within the UML model. Additionally, this paper discusses the Distributed Embedded Real-Time Compact Specification (DERCS), a PIM created to support UML-based code generation tools. Some heuristics to transform UML models into DERCS, which have been implemented in GenERTiCA, are also discussed.

  9. Generalized fluid theory including non-Maxwellian kinetic effects

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Izacard, Olivier

The results obtained by the plasma physics community for the validation and the prediction of turbulence and transport in magnetized plasmas come mainly from the use of very central processing unit (CPU)-consuming particle-in-cell or (gyro)kinetic codes which naturally include non-Maxwellian kinetic effects. To date, fluid codes are not considered to be relevant for the description of these kinetic effects. Here, after revisiting the limitations of the current fluid theory developed in the 19th century, we generalize the fluid theory including kinetic effects such as non-Maxwellian super-thermal tails with as few fluid equations as possible. The collisionless and collisional fluid closures from the nonlinear Landau Fokker–Planck collision operator are shown for an arbitrary collisionality. Indeed, the first fluid models associated with two examples of collisionless fluid closures are obtained by assuming an analytic non-Maxwellian distribution function. One of the main differences with the literature is our analytic representation of the distribution function in the velocity phase space with as few hidden variables as possible thanks to the use of non-orthogonal basis sets. These new non-Maxwellian fluid equations could initiate the next generation of fluid codes including kinetic effects and can be expanded to other scientific disciplines such as astrophysics, condensed matter or hydrodynamics. As a validation test, we perform a numerical simulation based on a minimal reduced INMDF fluid model. The result of this test is the discovery of the origin of particle and heat diffusion. The diffusion is due to the competition between a growing INMDF on short time scales due to spatial gradients and the thermalization on longer time scales. The results shown here could provide the insights to break some of the unsolved puzzles of turbulence.

  10. Generalized fluid theory including non-Maxwellian kinetic effects

    DOE PAGES

    Izacard, Olivier

    2017-03-29

The results obtained by the plasma physics community for the validation and the prediction of turbulence and transport in magnetized plasmas come mainly from the use of very central processing unit (CPU)-consuming particle-in-cell or (gyro)kinetic codes which naturally include non-Maxwellian kinetic effects. To date, fluid codes are not considered to be relevant for the description of these kinetic effects. Here, after revisiting the limitations of the current fluid theory developed in the 19th century, we generalize the fluid theory including kinetic effects such as non-Maxwellian super-thermal tails with as few fluid equations as possible. The collisionless and collisional fluid closures from the nonlinear Landau Fokker–Planck collision operator are shown for an arbitrary collisionality. Indeed, the first fluid models associated with two examples of collisionless fluid closures are obtained by assuming an analytic non-Maxwellian distribution function. One of the main differences with the literature is our analytic representation of the distribution function in the velocity phase space with as few hidden variables as possible thanks to the use of non-orthogonal basis sets. These new non-Maxwellian fluid equations could initiate the next generation of fluid codes including kinetic effects and can be expanded to other scientific disciplines such as astrophysics, condensed matter or hydrodynamics. As a validation test, we perform a numerical simulation based on a minimal reduced INMDF fluid model. The result of this test is the discovery of the origin of particle and heat diffusion. The diffusion is due to the competition between a growing INMDF on short time scales due to spatial gradients and the thermalization on longer time scales. The results shown here could provide the insights to break some of the unsolved puzzles of turbulence.

  11. Simulation of ultra-high energy photon propagation in the geomagnetic field

    NASA Astrophysics Data System (ADS)

Homola, P.; Góra, D.; Heck, D.; Klages, H.; Pękala, J.; Risse, M.; Wilczyńska, B.; Wilczyński, H.

    2005-12-01

The identification of primary photons or specifying stringent limits on the photon flux is of major importance for understanding the origin of ultra-high energy (UHE) cosmic rays. UHE photons can initiate particle cascades in the geomagnetic field, which leads to significant changes in the subsequent atmospheric shower development. We present a Monte Carlo program allowing detailed studies of conversion and cascading of UHE photons in the geomagnetic field. The program, named PRESHOWER, can be used both as an independent tool or together with a shower simulation code. With the stand-alone version of the code it is possible to investigate various properties of the particle cascade induced by UHE photons interacting in the Earth's magnetic field before entering the Earth's atmosphere. Combining this program with an extensive air shower simulation code such as CORSIKA offers the possibility of investigating signatures of photon-initiated showers. In particular, features can be studied that help to discern such showers from the ones induced by hadrons. As an illustration, calculations for the conditions of the southern part of the Pierre Auger Observatory are presented. Catalogue identifier: ADWG Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADWG Program obtainable: CPC Program Library, Queen's University of Belfast, N. Ireland Computer on which the program has been thoroughly tested: Intel-Pentium based PC Operating system: Linux, DEC-Unix Programming language used: C, FORTRAN 77 Memory required to execute with typical data: <100 kB No. of bits in a word: 32 Has the code been vectorized?: no Number of lines in distributed program, including test data, etc.: 2567 Number of bytes in distributed program, including test data, etc.: 25 690 Distribution format: tar.gz Other procedures used in PRESHOWER: IGRF [N.A. Tsyganenko, National Space Science Data Center, NASA GSFC, Greenbelt, MD 20771, USA, http://nssdc.gsfc.nasa.gov/space/model/magnetos/data-based/geopack.html], bessik, ran2 [Numerical Recipes, http://www.nr.com]. Nature of the physical problem: Simulation of a cascade of particles initiated by a UHE photon passing through the geomagnetic field above the Earth's atmosphere. Method of solution: The primary photon is tracked until its conversion into an e+e- pair or until it reaches the upper atmosphere. If conversion occurs, each individual particle in the resultant preshower is checked for either bremsstrahlung radiation (electrons) or secondary gamma conversion (photons). The procedure ends at the top of the atmosphere and the shower particle data are saved. Restrictions on the complexity of the problem: Gamma conversion into particles other than an electron pair has not been taken into account. Typical running time: 100 preshower events with primary energy 10^19 eV require about 50 min of CPU time on an 800 MHz machine; at 10^20 eV the simulation time for 100 events grows to about 500 min.

  12. Development of a new EMP code at LANL

    NASA Astrophysics Data System (ADS)

    Colman, J. J.; Roussel-Dupré, R. A.; Symbalisty, E. M.; Triplett, L. A.; Travis, B. J.

    2006-05-01

    A new code for modeling the generation of an electromagnetic pulse (EMP) by a nuclear explosion in the atmosphere is being developed. The source of the EMP is the Compton current produced by the prompt radiation (γ-rays, X-rays, and neutrons) of the detonation. As a first step in building a multi-dimensional EMP code we have written three kinetic codes: Plume, Swarm, and Rad. Plume models the transport of energetic electrons in air. The Plume code solves the relativistic Fokker-Planck equation over a specified energy range that can include ~3 keV to 50 MeV and computes the resulting electron distribution function at each cell of a two-dimensional spatial grid. The energetic electrons are allowed to transport, scatter, and experience Coulombic drag. Swarm models the transport of lower energy electrons in air, spanning 0.005 eV to 30 keV. The Swarm code performs a full 2-D solution of the Boltzmann equation for electrons in the presence of an applied electric field. Over this energy range the relevant processes to be tracked are elastic scattering, three-body attachment, two-body attachment, rotational excitation, vibrational excitation, electronic excitation, and ionization, all of which occur due to collisions between the electrons and neutral bodies in air. The Rad code solves the full radiation transfer equation in the energy range of 1 keV to 100 MeV. It includes the effects of photo-absorption, Compton scattering, and pair production. All of these codes employ a spherical coordinate system in momentum space and a cylindrical coordinate system in configuration space. The "z" axes of the momentum and configuration spaces are assumed to be parallel, and we currently also assume complete spatial symmetry around the "z" axis. Benchmarking for each of these codes will be discussed, as well as the way forward towards an integrated modern EMP code.

  13. GOTHIC: Gravitational oct-tree code accelerated by hierarchical time step controlling

    NASA Astrophysics Data System (ADS)

    Miki, Yohei; Umemura, Masayuki

    2017-04-01

    The tree method is a widely implemented algorithm for collisionless N-body simulations in astrophysics and is well suited for GPUs. Adopting hierarchical time stepping can accelerate N-body simulations; however, it is infrequently implemented and its potential remains untested in GPU implementations. We have developed a Gravitational Oct-Tree code accelerated by HIerarchical time step Controlling, named GOTHIC, which adopts both the tree method and hierarchical time steps. The code adopts some adaptive optimizations by monitoring the execution time of each function on the fly, and minimizes the time-to-solution by balancing the measured times of multiple functions. Performance measurements with realistic particle distributions on NVIDIA Tesla M2090, K20X, and GeForce GTX TITAN X, representative GPUs of the Fermi, Kepler, and Maxwell generations, respectively, show that the hierarchical time step achieves a speedup of around 3-5 compared to the shared time step. The measured elapsed time per step of GOTHIC is 0.30 s or 0.44 s on GTX TITAN X when the particle distribution represents the Andromeda galaxy or the NFW sphere, respectively, with 2^24 = 16,777,216 particles. The averaged performance of the code corresponds to 10-30% of the theoretical single-precision peak performance of the GPU.
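
    The hierarchical (block) time step idea is compact enough to sketch. The toy integrator below is not GOTHIC's GPU implementation: it assigns each particle a power-of-two fraction of the maximum step from an assumed accuracy criterion and updates only the particles whose individual step divides the current substep.

      import numpy as np

      # Toy block time steps: particle i advances with dt_max / 2**level[i];
      # on substep n, only particles whose period divides n are updated.

      def assign_levels(acc, eta=0.05, dt_max=1.0, max_level=4):
          dt_want = eta / np.sqrt(np.abs(acc) + 1e-12)   # assumed criterion
          lev = np.ceil(np.log2(dt_max / dt_want))
          return np.clip(lev, 0, max_level).astype(int)

      def advance(pos, vel, acc_fn, levels, dt_max, n_outer):
          top = levels.max()
          period = 2 ** (top - levels)          # substeps between updates
          dt_min = dt_max / 2 ** top
          for n in range(1, n_outer * 2 ** top + 1):
              active = (n % period) == 0
              acc = acc_fn(pos)
              vel[active] += acc[active] * dt_min * period[active]
              pos[active] += vel[active] * dt_min * period[active]
          return pos, vel

      pos, vel = np.zeros(4), np.ones(4)
      levels = assign_levels(np.array([0.1, 1.0, 10.0, 100.0]))
      pos, vel = advance(pos, vel, lambda p: -p, levels, dt_max=1.0, n_outer=10)
      print(levels, pos)

    In a real tree code the expensive force evaluation is also restricted to the active particles, which is where the factor-of-several speedup over a shared time step comes from.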

  14. Modeling Cometary Coma with a Three Dimensional, Anisotropic Multiple Scattering Distributed Processing Code

    NASA Technical Reports Server (NTRS)

    Luchini, Chris B.

    1997-01-01

    Camera and instrument simulations for space exploration require scientifically accurate models of the objects to be studied. Several planned cometary missions have prompted the development of a three-dimensional, multi-spectral, anisotropic multiple scattering model of cometary coma.

  15. Adaptive Correlation Space Adjusted Open-Loop Tracking Approach for Vehicle Positioning with Global Navigation Satellite System in Urban Areas

    PubMed Central

    Ruan, Hang; Li, Jian; Zhang, Lei; Long, Teng

    2015-01-01

    For vehicle positioning with the Global Navigation Satellite System (GNSS) in urban areas, open-loop tracking shows better performance because of its high sensitivity and superior robustness against multipath. However, no previous study has focused on the effects of the code search grid size on the code phase measurement accuracy of open-loop tracking. Traditional open-loop tracking methods are performed by batch correlators with a fixed correlation space. The code search grid size, which is the correlation space, is a constant empirical value, and the code phase measurement accuracy is largely degraded by an improper grid size, especially when the signal carrier-to-noise density ratio (C/N0) varies. In this study, the Adaptive Correlation Space Adjusted Open-Loop Tracking Approach (ACSA-OLTA) is proposed to improve the pseudo range accuracy, which depends on the code phase measurement. In ACSA-OLTA, the correlation space is adjusted according to the signal C/N0. The novel Equivalent Weighted Pseudo Range Error (EWPRE) is introduced to obtain the optimal code search grid sizes for different C/N0. The code phase measurement errors of different measurement calculation methods are analyzed for the first time. The measurement calculation strategy of ACSA-OLTA is derived from this analysis to further improve accuracy while reducing correlator consumption. Performance simulations and real tests confirm that the pseudo range and positioning accuracy of ACSA-OLTA are better than those of traditional open-loop tracking methods in typical urban scenarios. PMID:26343683
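
    The adaptation loop is easy to sketch. The error model below is a made-up stand-in for the paper's EWPRE, with one noise-like term that favors coarse grids at low C/N0 and one grid-quantization term that favors fine grids at high C/N0; only the structure, picking the grid size that minimizes a C/N0-dependent error measure, reflects the abstract.

      import math

      # Hypothetical code-phase error model: the noise term shrinks with
      # C/N0, the quantization term grows with the search grid size (chips).

      def code_error(spacing_chips, cn0_dbhz):
          cn0 = 10 ** (cn0_dbhz / 10.0)
          noise = 0.01 / (spacing_chips * math.sqrt(cn0 / 1000.0))
          grid = spacing_chips / math.sqrt(12.0)
          return math.hypot(noise, grid)

      def best_spacing(cn0_dbhz, candidates=(0.05, 0.1, 0.2, 0.5)):
          return min(candidates, key=lambda s: code_error(s, cn0_dbhz))

      print(best_spacing(45.0), best_spacing(25.0))  # finer grid at high C/N0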

  16. The adaption and use of research codes for performance assessment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liebetrau, A.M.

    1987-05-01

    Models of real-world phenomena are developed for many reasons. The models are usually, if not always, implemented in the form of a computer code. The characteristics of a code are determined largely by its intended use. Realizations or implementations of detailed mathematical models of complex physical and/or chemical processes are often referred to as research or scientific (RS) codes. Research codes typically require large amounts of computing time. One example of an RS code is a finite-element code for solving complex systems of differential equations that describe mass transfer through some geologic medium. Considerable computing time is required because computations are done at many points in time and/or space. Codes used to evaluate the overall performance of real-world physical systems are called performance assessment (PA) codes. Performance assessment codes are used to conduct simulated experiments involving systems that cannot be directly observed. Thus, PA codes usually involve repeated simulations of system performance in situations that preclude the use of conventional experimental and statistical methods. 3 figs.

  17. Anisotropic Resistivity Forward Modelling Using Automatic Generated Higher-order Finite Element Codes

    NASA Astrophysics Data System (ADS)

    Wang, W.; Liu, J.

    2016-12-01

    Forward modelling is the general way to obtain the responses of geoelectrical structures. Field investigators might find it useful for planning surveys and choosing optimal electrode configurations with respect to their targets. During the past few decades much effort has been put into the development of numerical forward codes, such as the integral equation method, the finite difference method, and the finite element method. Nowadays, most researchers prefer the finite element method (FEM) for its flexible meshing scheme, which can handle models with complex geometry. Resistivity modelling with commercial software such as ANSYS and COMSOL is convenient, but is like working with a black box, and modifying existing codes or developing new ones can take a long time. We present a new way to obtain resistivity forward modelling codes quickly, based on the commercial software FEPG (Finite element Program Generator). With just a few input scripts, FEPG can generate a FORTRAN program framework that is easily altered to suit our targets. By supposing the electric potential to be quadratic in each element of a two-layer model, we obtain quite accurate results, with errors of less than 1%, while linear FE codes can produce errors of more than 5%. The anisotropic half-space model is intended to represent vertically distributed fractures. The apparent resistivities measured along the fractures are larger than those from the orthogonal direction, which is the opposite of the true resistivities; interpretations can be misleading if this anisotropic paradox is ignored. The technique we used can produce scientific codes in a short time. The generated FORTRAN codes reach accurate results through the higher-order assumption and can handle anisotropy to support better interpretations. The method can easily be extended to other domains where FE codes are needed.
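
    FEPG's script-to-FORTRAN workflow cannot be reproduced here, but the payoff of a quadratic-in-each-element assumption is easy to illustrate: interpolating a smooth potential profile with quadratic instead of linear shape functions on the same mesh cuts the error dramatically. The profile and mesh below are arbitrary choices for the demonstration, not the paper's two-layer model.

      import numpy as np

      u = lambda x: np.exp(-x) * np.sin(3 * x)   # stand-in potential profile
      nodes = np.linspace(0, 2, 9)               # coarse element mesh
      xf = np.linspace(0, 2, 2001)               # fine evaluation grid

      lin = np.interp(xf, nodes, u(nodes))       # piecewise-linear elements

      quad = np.empty_like(xf)                   # piecewise-quadratic elements
      for a, b in zip(nodes[:-1], nodes[1:]):
          m = 0.5 * (a + b)
          sel = (xf >= a) & (xf <= b)
          xs = xf[sel]
          # Lagrange quadratic through (a, m, b)
          quad[sel] = (u(a) * (xs - m) * (xs - b) / ((a - m) * (a - b))
                       + u(m) * (xs - a) * (xs - b) / ((m - a) * (m - b))
                       + u(b) * (xs - a) * (xs - m) / ((b - a) * (b - m)))

      err = lambda v: np.sqrt(np.trapz((v - u(xf)) ** 2, xf))
      print(err(lin), err(quad))   # quadratic error is markedly smaller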

  18. Evaluating methods for estimating space-time paths of individuals in calculating long-term personal exposure to air pollution

    NASA Astrophysics Data System (ADS)

    Schmitz, Oliver; Soenario, Ivan; Vaartjes, Ilonca; Strak, Maciek; Hoek, Gerard; Brunekreef, Bert; Dijst, Martin; Karssenberg, Derek

    2016-04-01

    Air pollution is one of the major concerns for human health. Associations between air pollution and health are often calculated using long-term (i.e. years to decades) information on personal exposure for each individual in a cohort. Personal exposure is the air pollution aggregated along the space-time path visited by an individual. As air pollution may vary considerably in space and time, for instance due to motorised traffic, estimating the spatio-temporal location of a person's space-time path is important for identifying personal exposure. However, long-term exposure is mostly calculated using the air pollution concentration at the x, y location of someone's home, which does not consider that individuals are mobile (commuting, recreation, relocation). This assumption is often made because it is a major challenge to estimate space-time paths for all individuals in large cohorts, mostly because limited information on the mobility of individuals is available. We address this issue by evaluating multiple approaches to the calculation of space-time paths, estimating the personal exposure along these space-time paths with hyper-resolution air pollution maps at the national scale. This allows us to evaluate the effect of the space-time path on the resulting personal exposure. Air pollution (e.g. NO2, PM10) was mapped for the entire Netherlands at a resolution of 5×5 m² using the land use regression models developed in the European Study of Cohorts for Air Pollution Effects (ESCAPE, http://escapeproject.eu/) and the open source software PCRaster (http://www.pcraster.eu). The models use predictor variables like population density, land use, and traffic-related data sets, and are able to model spatial variation and within-city variability of annual average concentration values. We approximated space-time paths for all individuals in a cohort using various aggregations, including those representing space-time paths as the outline of a person's home or associated parcel of land, the 4-digit postal code area or neighbourhood of a person's home, circular areas around the home, and spatial probability distributions of space-time paths during commuting. Personal exposure was estimated by averaging concentrations over these space-time paths, for each individual in a cohort. Preliminary results show considerable differences in a person's exposure across these approaches to space-time path aggregation, presumably because air pollution shows large variation over short distances.
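
    As a sketch of the simplest of these aggregations, the following compares a home-point exposure with circular-buffer averages of increasing radius on a synthetic concentration grid at the 5 m resolution mentioned above. The map, home location, and radii are invented for illustration.

      import numpy as np

      def buffer_mean(conc, home_rc, radius_m, cell_m=5.0):
          """Mean concentration within a circular buffer around home_rc."""
          rows, cols = np.indices(conc.shape)
          dist = np.hypot(rows - home_rc[0], cols - home_rc[1]) * cell_m
          return conc[dist <= radius_m].mean()

      rng = np.random.default_rng(0)
      conc = rng.gamma(shape=2.0, scale=10.0, size=(400, 400))  # fake NO2 map
      home = (200, 200)
      print("point exposure:", conc[home])
      for r in (50, 300, 1000):                 # buffer radii in metres
          print(r, "m buffer:", buffer_mean(conc, home, r))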

  19. Global, real-time ionosphere specification for end-user communication and navigation products

    NASA Astrophysics Data System (ADS)

    Tobiska, W.; Carlson, H. C.; Schunk, R. W.; Thompson, D. C.; Sojka, J. J.; Scherliess, L.; Zhu, L.; Gardner, L. C.

    2010-12-01

    Space weather’s effects upon the near-Earth environment are due to dynamic changes in the energy transfer processes from the Sun’s photons, particles, and fields. Of the space environment domains that are affected by space weather, the ionosphere is the key region affecting communication and navigation systems. The Utah State University (USU) Space Weather Center (SWC) is a developer and producer of commercial space weather applications. A key system-level component for providing timely information about the effects of space weather is the Global Assimilation of Ionospheric Measurements (GAIM) system. GAIM, operated by SWC, improves real-time communication and navigation systems by continuously ingesting up to 10,000 slant TEC measurements every 15 minutes from approximately 500 stations. Using a Kalman filter, the background output from the physics-based Ionosphere Forecast Model (IFM) is adjusted to more accurately represent the actual ionosphere. An improved ionosphere leads to more useful derivative products. For example, SWC runs operational code, using GAIM, to calculate and report the global radio high frequency (HF) signal strengths for 24 world cities. This product is updated every 15 minutes at http://spaceweather.usu.edu and used by amateur radio operators. SWC also developed and provides through Apple iTunes the widely used real-time space weather iPhone app called SpaceWx for public space weather education. SpaceWx displays the real-time solar, heliosphere, magnetosphere, thermosphere, and ionosphere drivers of changes in the total electron content, for example. This smart phone app is the tip of the “iceberg” of automated systems that provide space weather data; it permits instant understanding of the environment surrounding Earth as it dynamically changes. SpaceWx depends upon a distributed network that connects satellite and ground-based data streams with algorithms to quickly process the measurements into geophysical data, incorporate those data into operational space physics models, and finally generate visualization products such as the images, plots, and alerts that can be viewed in SpaceWx. In a real sense, the space weather community is now able to transition research models into operations through “proofing” products such as the real-time dissemination of information through smart phones. We describe upcoming improvements for moving space weather information through automated systems into final derivative products.
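
    The assimilation step can be caricatured with a scalar Kalman update: a model background TEC value is nudged toward each incoming slant-TEC measurement by a gain set by the background and observation variances. GAIM's actual filter operates on a full gridded state with covariances; all numbers below are invented.

      # Minimal scalar Kalman update (toy stand-in for the GAIM analysis).

      def kalman_update(x_bg, p_bg, z, r_obs):
          k = p_bg / (p_bg + r_obs)        # Kalman gain
          x_an = x_bg + k * (z - x_bg)     # analysis state
          p_an = (1.0 - k) * p_bg          # analysis variance
          return x_an, p_an

      x, p = 25.0, 16.0                    # background TEC (TECU), variance
      for z in (31.0, 29.5, 30.2):         # incoming slant-TEC-like obs
          x, p = kalman_update(x, p, z, r_obs=4.0)
      print(x, p)                          # analysis drifts toward the data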

  20. Trading Speed and Accuracy by Coding Time: A Coupled-circuit Cortical Model

    PubMed Central

    Standage, Dominic; You, Hongzhi; Wang, Da-Hui; Dorris, Michael C.

    2013-01-01

    Our actions take place in space and time, but despite the role of time in decision theory and the growing acknowledgement that the encoding of time is crucial to behaviour, few studies have considered the interactions between neural codes for objects in space and for elapsed time during perceptual decisions. The speed-accuracy trade-off (SAT) provides a window into spatiotemporal interactions. Our hypothesis is that temporal coding determines the rate at which spatial evidence is integrated, controlling the SAT by gain modulation. Here, we propose that local cortical circuits are inherently suited to the relevant spatial and temporal coding. In simulations of an interval estimation task, we use a generic local-circuit model to encode time by ‘climbing’ activity, seen in cortex during tasks with a timing requirement. The model is a network of simulated pyramidal cells and inhibitory interneurons, connected by conductance synapses. A simple learning rule enables the network to quickly produce new interval estimates, which show signature characteristics of estimates by experimental subjects. Analysis of network dynamics formally characterizes this generic, local-circuit timing mechanism. In simulations of a perceptual decision task, we couple two such networks. Network function is determined only by spatial selectivity and NMDA receptor conductance strength; all other parameters are identical. To trade speed and accuracy, the timing network simply learns longer or shorter intervals, driving the rate of downstream decision processing by spatially non-selective input, an established form of gain modulation. Like the timing network's interval estimates, decision times show signature characteristics of those by experimental subjects. Overall, we propose, demonstrate and analyse a generic mechanism for timing, a generic mechanism for modulation of decision processing by temporal codes, and we make predictions for experimental verification. PMID:23592967

  1. Theoretical and experimental studies of turbo product code with time diversity in free space optical communication.

    PubMed

    Han, Yaoqiang; Dang, Anhong; Ren, Yongxiong; Tang, Junxiong; Guo, Hong

    2010-12-20

    In free space optical communication (FSOC) systems, channel fading caused by atmospheric turbulence seriously degrades system performance. However, channel coding combined with diversity techniques can be exploited to mitigate channel fading. In this paper, based on an experimental study of channel fading effects, we propose to use turbo product code (TPC) as the channel coding scheme, which features good resistance to burst errors and no error floor. Since channel coding alone cannot cope with burst errors caused by channel fading, interleaving is also used. We investigate the efficiency of interleaving for different interleaving depths and determine the optimum interleaving depth for TPC. Finally, an experimental study of TPC with interleaving is demonstrated, and we show that TPC with interleaving can significantly mitigate channel fading in FSOC systems.
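
    A plain block interleaver shows the mechanism: code words are written row-wise and transmitted column-wise, so a fade spanning several consecutive channel symbols is spread across `depth` different code words, each of which then sees only a few errors. This is a generic sketch, not the experimental system's interleaver.

      # Block interleaver: write row-wise, read column-wise, and invert.

      def interleave(symbols, depth, width):
          assert len(symbols) == depth * width
          rows = [symbols[i * width:(i + 1) * width] for i in range(depth)]
          return [rows[r][c] for c in range(width) for r in range(depth)]

      def deinterleave(symbols, depth, width):
          cols = [symbols[i * depth:(i + 1) * depth] for i in range(width)]
          return [cols[c][r] for r in range(depth) for c in range(width)]

      data = list(range(12))
      assert deinterleave(interleave(data, 3, 4), 3, 4) == data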

  2. Numerical simulation of ion charge breeding in electron beam ion source

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao, L., E-mail: zhao@far-tech.com; Kim, Jin-Soo

    2014-02-15

    The Electron Beam Ion Source particle-in-cell code (EBIS-PIC) tracks ions in an EBIS electron beam while self-consistently updating the electric potential and modeling atomic processes by the Monte Carlo method. Recent improvements to the code are reported in this paper. The ionization module has been improved by using experimental ionization energies and shell effects. The acceptance of injected ions and the emittance of the extracted ion beam are calculated by extending EBIS-PIC to the beam line transport region. An EBIS-PIC simulation is performed for a Cs charge-breeding experiment at BNL. The charge state distribution agrees well with experiments, and additional simulation results of radial profiles and velocity space distributions of the trapped ions are presented.

  3. Energy efficient rateless codes for high speed data transfer over free space optical channels

    NASA Astrophysics Data System (ADS)

    Prakash, Geetha; Kulkarni, Muralidhar; Acharya, U. S.

    2015-03-01

    Terrestrial Free Space Optical (FSO) links transmit information by using the atmosphere (free space) as a medium. In this paper, we investigate the use of Luby Transform (LT) codes as a means to mitigate the effects of data corruption induced by an imperfect channel, which usually takes the form of lost or corrupted packets. LT codes, which are a class of Fountain codes, can be used independently of the channel rate, and as many code words as required can be generated to recover all the message bits irrespective of the channel performance. Achieving error-free high data rates with limited energy resources is possible with FSO systems if error correction codes with minimal power overhead are used. We also employ Binary Phase Shift Keying (BPSK) with provision for threshold modification, combined with optimized LT codes decoded by belief propagation. These techniques provide additional protection even under strong turbulence regimes. Automatic Repeat Request (ARQ) is another method of improving link reliability, but its performance is limited by the number of retransmissions and the corresponding time delay. We show through theoretical computations and simulations that LT codes consume less energy per bit. We validate the feasibility of using energy-efficient LT codes instead of ARQ for FSO links in optical wireless sensor networks within eye safety limits.
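
    A toy LT encoder/decoder illustrates the rateless property: output symbols are XORs of random subsets of message blocks and can be generated indefinitely, and a peeling decoder recovers the message once enough symbols arrive. The degree distribution below is a crude stand-in for the robust soliton distribution used in practice.

      import random

      def encode_symbol(blocks, rng):
          # Assumed toy degree distribution (not the robust soliton).
          d = rng.choices([1, 2, 3, 4], weights=[10, 50, 20, 20])[0]
          idx = rng.sample(range(len(blocks)), d)
          val = 0
          for i in idx:
              val ^= blocks[i]
          return set(idx), val

      def peel(symbols, n):
          decoded = [None] * n
          progress = True
          while progress:
              progress = False
              for idx, val in symbols:
                  unknown = [i for i in idx if decoded[i] is None]
                  if len(unknown) == 1:
                      v = val
                      for i in idx:
                          if decoded[i] is not None:
                              v ^= decoded[i]
                      decoded[unknown[0]] = v
                      progress = True
          return decoded

      rng = random.Random(7)
      blocks = [rng.randrange(256) for _ in range(8)]
      symbols = [encode_symbol(blocks, rng) for _ in range(24)]
      print(peel(symbols, 8) == blocks)   # usually True with this overhead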

  5. Independent coding of absolute duration and distance magnitudes in the prefrontal cortex

    PubMed Central

    Marcos, Encarni; Tsujimoto, Satoshi

    2016-01-01

    The estimation of space and time can interfere with each other, and neuroimaging studies have shown overlapping activation in the parietal and prefrontal cortical areas. We used duration and distance discrimination tasks to determine whether space and time share resources in prefrontal cortex (PF) neurons. Monkeys were required to report which of two stimuli, a red circle or a blue square, presented sequentially, was longer in the duration task and farther in the distance task. In a previous study, we showed that relative duration and distance are coded by different populations of neurons and that the only common representation is related to goal coding. Here, we examined the coding of absolute duration and distance. Our results support a model of independent coding of absolute duration and distance metrics by demonstrating that not only relative but also absolute magnitudes are independently coded in the PF. NEW & NOTEWORTHY Human behavioral studies have shown that spatial and duration judgments can interfere with each other. We investigated the neural representation of such magnitudes in the prefrontal cortex. We found that the two magnitudes are independently coded by prefrontal neurons. We suggest that the interference among magnitude judgments might depend on the goal rather than on perceptual resource sharing. PMID:27760814

  6. FFT-split-operator code for solving the Dirac equation in 2+1 dimensions

    NASA Astrophysics Data System (ADS)

    Mocken, Guido R.; Keitel, Christoph H.

    2008-06-01

    The main part of the code presented in this work represents an implementation of the split-operator method [J.A. Fleck, J.R. Morris, M.D. Feit, Appl. Phys. 10 (1976) 129-160; R. Heather, Comput. Phys. Comm. 63 (1991) 446] for calculating the time evolution of Dirac wave functions. It allows one to study the dynamics of electronic Dirac wave packets under the influence of any number of laser pulses and their interaction with any number of charged-ion potentials. The initial wave function can be either a free Gaussian wave packet or an arbitrary discretized spinor function that is loaded from a file provided by the user. The latter option includes Dirac bound-state wave functions. The code itself contains the necessary tools for constructing such wave functions for a single-electron ion. With the help of self-adaptive numerical grids, we are able to study the electron dynamics for various problems in 2+1 dimensions at high spatial and temporal resolutions that are otherwise unachievable. Along with the position and momentum space probability density distributions, various physical observables, such as the expectation values of position and momentum, can be recorded in a time-dependent way. The electromagnetic spectrum that is emitted by the evolving particle can also be calculated with this code. Finally, for planning and comparison purposes, both the time evolution and the emission spectrum can also be treated in an entirely classical relativistic way. Besides the implementation of the above-mentioned algorithms, the program also contains a large C++ class library to model the geometric algebra representation of spinors that we use for representing the Dirac wave function. This is why the code is called "Dirac++". Program summary Program title: Dirac++ or (abbreviated) d++ Catalogue identifier: AEAS_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEAS_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 474 937 No. of bytes in distributed program, including test data, etc.: 4 128 347 Distribution format: tar.gz Programming language: C++ Computer: Any, but SMP systems are preferred Operating system: Linux and MacOS X are actively supported by the current version. Earlier versions were also tested successfully on IRIX and AIX Number of processors used: Generally unlimited, but best scaling with 2-4 processors for typical problems RAM: 160 Megabytes minimum for the examples given here Classification: 2.7 External routines: FFTW Library [3,4], Gnu Scientific Library [5], bzip2, bunzip2 Nature of problem: The relativistic time evolution of wave functions according to the Dirac equation is a challenging numerical task. Especially for an electron in the presence of high-intensity laser beams and/or highly charged ions, this type of problem is of considerable interest to atomic physicists. Solution method: The code employs the split-operator method [1,2], combined with fast Fourier transforms (FFT) for calculating any occurring spatial derivatives, to solve the given problem. An autocorrelation spectral method [6] is provided to generate a bound state for use as the initial wave function of further dynamical studies. Restrictions: The code in its current form is restricted to problems in two spatial dimensions; otherwise it is only limited by the CPU time and memory that one can afford to spend on a particular problem. Unusual features: The code features dynamically adapting position and momentum space grids to keep execution time and memory requirements as small as possible. It employs an object-oriented approach, and it relies on a Clifford algebra class library to represent the mathematical objects of the Dirac formalism. Besides that, it includes a feature (typically called "checkpointing") which allows the resumption of an interrupted calculation. Additional comments: Along with the program's source code, we provide several sample configuration files, a pre-calculated bound-state wave function, and template files for the analysis of the results with both MatLab and Igor Pro. Running time: Running time ranges from a few minutes for simple tests up to several days, even weeks, for real-world physical problems that require very large grids or very small time steps. References: J.A. Fleck, J.R. Morris, M.D. Feit, Time-dependent propagation of high energy laser beams through the atmosphere, Appl. Phys. 10 (1976) 129-160. R. Heather, An asymptotic wavefunction splitting procedure for propagating spatially extended wavefunctions: Application to intense field photodissociation of H2+, Comput. Phys. Comm. 63 (1991) 446. M. Frigo, S.G. Johnson, FFTW: An adaptive software architecture for the FFT, in: Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, vol. 3, IEEE, 1998, pp. 1381-1384. M. Frigo, S.G. Johnson, The design and implementation of FFTW3, in: Proceedings of the IEEE, vol. 93, IEEE, 2005, pp. 216-231. URL: http://www.fftw.org/. M. Galassi, J. Davies, J. Theiler, B. Gough, G. Jungman, M. Booth, F. Rossi, GNU Scientific Library Reference Manual, second ed., Network Theory Limited, 2006. URL: http://www.gnu.org/software/gsl/. M.D. Feit, J.A. Fleck, A. Steiger, Solution of the Schrödinger equation by a spectral method, J. Comput. Phys. 47 (1982) 412-433.
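
    The kick-drift-kick structure of the split-operator method is easier to see in a scalar setting. The sketch below applies it to the 1-D Schrödinger equation rather than the Dirac equation (no spinors, no Clifford algebra), purely to show the FFT-based alternation between position- and momentum-space factors; all parameters are arbitrary.

      import numpy as np

      # Strang splitting: half potential step, full kinetic step via FFT,
      # second half potential step (hbar = m = 1).

      n, L, dt = 512, 40.0, 0.01
      x = np.linspace(-L / 2, L / 2, n, endpoint=False)
      k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)
      v = 0.5 * x ** 2                              # harmonic potential
      psi = np.exp(-(x - 2.0) ** 2)                 # displaced Gaussian
      psi = psi / np.sqrt(np.trapz(np.abs(psi) ** 2, x))

      half_v = np.exp(-0.5j * v * dt)               # position-space factor
      kin = np.exp(-0.5j * k ** 2 * dt)             # momentum-space factor
      for _ in range(1000):
          psi = half_v * np.fft.ifft(kin * np.fft.fft(half_v * psi))
      print(np.trapz(np.abs(psi) ** 2, x))          # norm stays ~1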

  7. New shape models of asteroids reconstructed from sparse-in-time photometry

    NASA Astrophysics Data System (ADS)

    Durech, Josef; Hanus, Josef; Vanco, Radim; Oszkiewicz, Dagmara Anna

    2015-08-01

    Asteroid physical parameters - the shape, the sidereal rotation period, and the spin axis orientation - can be reconstructed by the lightcurve inversion method from disk-integrated photometry that is either dense (classical lightcurves) or sparse in time. We will review our recent progress in asteroid shape reconstruction from sparse photometry. Finding a unique solution of the inverse problem is time consuming because the sidereal rotation period has to be found by scanning a wide interval of possible periods. This can be solved efficiently by splitting the period parameter space into small parts that are sent to volunteers' computers and processed in parallel. We will show how this approach of distributed computing works with currently available sparse photometry processed in the framework of the project Asteroids@home. In particular, we will show results based on the Lowell Photometric Database. The method produces reliable asteroid models with a very low rate of false solutions, and the pipelines and codes can be applied directly to other sources of sparse photometry - Gaia data, for example. We will present the distribution of spin axes of hundreds of asteroids, discuss the dependence of spin obliquity on the size of an asteroid, and show examples of spin-axis distributions in asteroid families that confirm the Yarkovsky/YORP evolution scenario.
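
    The period search parallelizes naturally because each trial period is independent, which is what Asteroids@home exploits by shipping disjoint period intervals to volunteers. The sketch below replaces the full lightcurve inversion with a simple phase-dispersion cost and splits the scan across local processes instead of a distributed grid; data and grid are synthetic.

      import numpy as np
      from multiprocessing import Pool

      def cost_for_period(args):
          # Roughness of the phase-folded curve: small at the true period.
          period, t, mag = args
          phase = (t % period) / period
          order = np.argsort(phase)
          return float(np.sum(np.diff(mag[order]) ** 2)), period

      if __name__ == "__main__":
          rng = np.random.default_rng(2)
          t = np.sort(rng.uniform(0, 300, 500))     # sparse epochs (days)
          true_p = 0.29
          mag = np.sin(2 * np.pi * t / true_p) + rng.normal(0, 0.05, t.size)
          trials = [(p, t, mag) for p in np.linspace(0.1, 1.0, 20000)]
          with Pool() as pool:                       # one chunk per "work unit"
              best = min(pool.map(cost_for_period, trials, chunksize=500))
          print(best)   # lowest cost lands near the true period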

  8. Kinetic Simulation and Energetic Neutral Atom Imaging of the Magnetosphere

    NASA Technical Reports Server (NTRS)

    Fok, Mei-Ching H.

    2011-01-01

    Advanced simulation tools and measurement techniques have been developed to study the dynamic magnetosphere and its response to drivers in the solar wind. The Comprehensive Ring Current Model (CRCM) is a kinetic code that solves the 3D distribution in space, energy, and pitch angle of energetic ions and electrons. Energetic Neutral Atom (ENA) imagers have been carried on past and current satellite missions. The global morphology of energetic ions was revealed by the observed ENA images. We have combined simulation and ENA analysis techniques to study the development of ring current ions during magnetic storms and substorms. We identify the timing and location of particle injection and loss. We examine the evolution of ion energy and pitch-angle distributions during different phases of a storm. In this talk we will discuss the findings from our ring current studies and how our simulation and ENA analysis tools can be applied to the upcoming TRIO-CINEMA mission.

  9. [Neurodynamic Bases of Imitation Learning and Episodic Memory].

    PubMed

    Tsukerman, V D

    2016-01-01

    In this review, three essentially important processes in the development of cognitive behavior are considered: learning about the spatial environment by means of physical activity, the coding and recall of the spatio-temporal context of episodic memory, and imitation learning based on the mirror neuron mechanism. The data show that the parieto-frontal imitation learning system allows the developing organism to acquire motor control skills and movement synergies in perisomatic space, and to understand the intentions and purposes of other individuals' observed actions. At the same time, a widely distributed parieto-frontal and entorhinal-hippocampal system mediates spatial knowledge and the solution of navigation tasks important for constructing the spatio-temporal context of episodic memory.

  10. Pore-scale Simulation and Imaging of Multi-phase Flow and Transport in Porous Media (Invited)

    NASA Astrophysics Data System (ADS)

    Crawshaw, J.; Welch, N.; Daher, I.; Yang, J.; Shah, S.; Grey, F.; Boek, E.

    2013-12-01

    We combine multi-scale imaging and computer simulation of multi-phase flow and reactive transport in rock samples to enhance our fundamental understanding of long-term CO2 storage in rock formations. The imaging techniques include Confocal Laser Scanning Microscopy (CLSM), micro-CT and medical CT scanning, with spatial resolutions ranging from sub-micron to mm, respectively. First, we report a new sample preparation technique to study micro-porosity in carbonates using CLSM in 3 dimensions. Second, we use micro-CT scanning to generate high resolution 3D pore space images of carbonate and cap rock samples. In addition, we employ micro-CT to image the processes of evaporation in fractures and cap rock degradation due to exposure to CO2 flow. Third, we use medical CT scanning to image spontaneous imbibition in carbonate rock samples. Our imaging studies are complemented by computer simulations of multi-phase flow and transport, using the 3D pore space images obtained from the scanning experiments. We have developed a massively parallel lattice-Boltzmann (LB) code to calculate the single phase flow field in these pore space images. The resulting flow fields are then used to calculate hydrodynamic dispersion using a novel scheme to predict probability distributions for molecular displacements using the LB method and a streamline algorithm, modified for optimal solid boundary conditions. We calculate solute transport on pore-space images of rock cores with an increasing degree of heterogeneity: a bead pack, Bentheimer sandstone and Portland carbonate. We observe that for homogeneous rock samples, such as bead packs, the displacement distribution remains Gaussian as time increases. In the more heterogeneous rocks, on the other hand, the displacement distribution develops a stagnant part. We observe that the fraction of trapped solute increases from the bead pack (0%) to Bentheimer sandstone (1.5%) to Portland carbonate (8.1%), in excellent agreement with PFG-NMR experiments. We then use our preferred multi-phase model to directly calculate flow in pore space images of two different sandstones and observe excellent agreement with experimental relative permeabilities. We also calculate cluster size distributions in good agreement with experimental studies. Our analysis shows that the simulations are able to predict both multi-phase flow and transport properties directly on large 3D pore space images of real rocks. [Figure: pore space images (left) and velocity distributions (right); Yang and Boek, 2013]

  11. Adaptation to stimulus statistics in the perception and neural representation of auditory space.

    PubMed

    Dahmen, Johannes C; Keating, Peter; Nodal, Fernando R; Schulz, Andreas L; King, Andrew J

    2010-06-24

    Sensory systems are known to adapt their coding strategies to the statistics of their environment, but little is still known about the perceptual implications of such adjustments. We investigated how auditory spatial processing adapts to stimulus statistics by presenting human listeners and anesthetized ferrets with noise sequences in which interaural level differences (ILD) rapidly fluctuated according to a Gaussian distribution. The mean of the distribution biased the perceived laterality of a subsequent stimulus, whereas the distribution's variance changed the listeners' spatial sensitivity. The responses of neurons in the inferior colliculus changed in line with these perceptual phenomena. Their ILD preference adjusted to match the stimulus distribution mean, resulting in large shifts in rate-ILD functions, while their gain adapted to the stimulus variance, producing pronounced changes in neural sensitivity. Our findings suggest that processing of auditory space is geared toward emphasizing relative spatial differences rather than the accurate representation of absolute position.

  12. Simultaneous Heat and Mass Transfer Model for Convective Drying of Building Material

    NASA Astrophysics Data System (ADS)

    Upadhyay, Ashwani; Chandramohan, V. P.

    2018-04-01

    A mathematical model of simultaneous heat and moisture transfer is developed for the convective drying of building material. A rectangular brick is considered as the sample object. A finite-difference method with a semi-implicit scheme is used for solving the transient governing heat and mass transfer equations. A convective boundary condition is used, as the product is exposed to hot air. The heat and mass transfer equations are coupled through the diffusion coefficient, which is assumed to be a function of the product temperature. A set of algebraic equations is generated through space and time discretization. The discretized algebraic equations are solved iteratively by the Gauss-Seidel method. Grid- and time-independence studies are performed to find the optimum numbers of nodal points and time steps, respectively. A MATLAB computer code is developed to solve the heat and mass transfer equations simultaneously. Transient heat and mass transfer simulations are performed to find the temperature and moisture distribution inside the brick.
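
    A one-dimensional caricature of the scheme: backward-time, central-space discretization of the heat equation, solved at each time step by Gauss-Seidel sweeps, with a convective condition at the exposed face. Material properties and step sizes are invented, and the moisture equation (coupled through the diffusion coefficient) is omitted for brevity.

      import numpy as np

      nx, dx, dt = 21, 0.005, 1.0        # nodes, spacing (m), step (s)
      a, h, t_air, kc = 1e-7, 25.0, 60.0, 0.7   # toy diffusivity, BC values
      t = np.full(nx, 30.0)              # initial product temperature (C)
      r = a * dt / dx ** 2

      for step in range(3600):
          told = t.copy()
          for sweep in range(50):        # Gauss-Seidel sweeps per time step
              for i in range(1, nx - 1):
                  # implicit BTCS stencil, using latest neighbour values
                  t[i] = (told[i] + r * (t[i - 1] + t[i + 1])) / (1 + 2 * r)
              t[0] = t[1]                # insulated/symmetry face
              # convective face: k (Ts - Tin)/dx = h (Tair - Ts)
              t[-1] = (kc * t[-2] / dx + h * t_air) / (kc / dx + h)
      print(t)                           # warms toward the drying air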

  13. Temporal Stability of GPS Transmitter Group Delay Variations.

    PubMed

    Beer, Susanne; Wanninger, Lambert

    2018-05-29

    The code observable of global navigation satellite systems (GNSS) is influenced by group delay variations (GDV) of transmitter and receiver antennas. For the Global Positioning System (GPS), the variations can sum up to 1 m in the ionosphere-free linear combination and thus can significantly affect precise code applications. The contribution of the GPS transmitters can amount to 0.8 m peak-to-peak over the entire nadir angle range. To verify the assumption of their time-invariance, we determined daily individual satellite GDV for GPS transmitter antennas over a period of more than two years. Dual-frequency observations of globally distributed reference stations and their multipath combination form the basis for our analysis. The resulting GPS GDV are stable on the level of a few centimeters for C1, P2, and for the ionosphere-free linear combination. Our study reveals that the inconsistencies of the GDV of space vehicle number (SVN) 55 with respect to earlier studies are not caused by temporal instabilities, but are rather related to receiver properties.

  14. Accumulating pyramid spatial-spectral collaborative coding divergence for hyperspectral anomaly detection

    NASA Astrophysics Data System (ADS)

    Sun, Hao; Zou, Huanxin; Zhou, Shilin

    2016-03-01

    Detection of anomalous targets of various sizes in hyperspectral data has received much attention in reconnaissance and surveillance applications, and many anomaly detectors have been proposed in the literature. However, current methods are susceptible to anomalies within the processing window range and often make critical assumptions about the distribution of the background data. Motivated by the fact that anomaly pixels are often distinctive from their local background, in this letter we propose a novel hyperspectral anomaly detection framework for real-time remote sensing applications. The proposed framework consists of four major components: sparse feature learning, pyramid grid window selection, joint spatial-spectral collaborative coding, and multi-level divergence fusion. It exploits the collaborative representation difference in the feature space to locate potential anomalies and is totally unsupervised, without any prior assumptions. Experimental results on airborne hyperspectral data demonstrate that the proposed method adapts to anomalies over a large range of sizes and is well suited for parallel processing.

  15. IUTAM Symposium on Statistical Energy Analysis, 8-11 July 1997, Programme

    DTIC Science & Technology

    1997-01-01

    Distribution is unlimited. This was the first international scientific gathering devoted to statistical energy analysis. Keywords: energy flow, continuum dynamics, vibrational energy, statistical energy analysis (SEA).

  16. OBSERVATIONAL SIGNATURES OF CORONAL LOOP HEATING AND COOLING DRIVEN BY FOOTPOINT SHUFFLING

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dahlburg, R. B.; Taylor, B. D.; Einaudi, G.

    The evolution of a coronal loop is studied by means of numerical simulations of the fully compressible three-dimensional magnetohydrodynamic equations using the HYPERION code. The footpoints of the loop magnetic field are advected by random motions. As a consequence, the magnetic field in the loop is energized and develops turbulent nonlinear dynamics characterized by the continuous formation and dissipation of field-aligned current sheets: energy is deposited at small scales, where heating occurs. Dissipation is nonuniformly distributed, so that only a fraction of the coronal mass and volume gets heated at any time. Temperature and density are highly structured at scales that, in the solar corona, remain observationally unresolved: the plasma of our simulated loop is multithermal, with highly dynamical hotter and cooler plasma strands scattered throughout the loop at sub-observational scales. Numerical simulations of coronal loops of 50,000 km length and axial magnetic field intensities ranging from 0.01 to 0.04 T are presented. To connect these simulations to observations, we use the computed number densities and temperatures to synthesize the intensities expected in emission lines typically observed with the Extreme Ultraviolet Imaging Spectrometer on Hinode. These intensities are used to compute differential emission measure distributions using the Monte Carlo Markov Chain code, which are very similar to those derived from observations of solar active regions. We conclude that coronal heating is strongly intermittent in space and time, with only small portions of the coronal loop being heated: in fact, at any given time, most of the corona is cooling down.

  17. Real-time validation of receiver state information in optical space-time block code systems.

    PubMed

    Alamia, John; Kurzweg, Timothy

    2014-06-15

    Free space optical interconnect (FSOI) systems are a promising solution to interconnect bottlenecks in high-speed systems. To overcome some sources of diminished FSOI performance caused by the close proximity of multiple optical channels, multiple-input multiple-output (MIMO) systems implementing encoding schemes such as space-time block coding (STBC) have been developed. These schemes utilize information about the optical channel to reconstruct transmitted data. The STBC system depends on accurate channel state information (CSI) for optimal performance. Because optical channels change dynamically, a system in operation needs up-to-date CSI, and validating the CSI during operation is therefore necessary to ensure that FSOI systems operate efficiently. In this Letter, we demonstrate a method of validating CSI, in real time, through the use of moving averages of the maximum-likelihood decoder data, and show its capacity to predict the bit error rate (BER) of the system.
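
    A sketch of the validation idea: maintain a moving average of a per-symbol decoder statistic and flag the stored CSI as stale when the average drifts past a threshold, triggering re-estimation. The metric, window length, and threshold below are assumptions for illustration, not the Letter's calibrated values.

      from collections import deque

      class CsiMonitor:
          def __init__(self, window=256, threshold=0.15):
              self.buf = deque(maxlen=window)   # sliding window of metrics
              self.threshold = threshold

          def update(self, ml_metric):
              """ml_metric: per-symbol decoder distance (assumed source)."""
              self.buf.append(ml_metric)
              avg = sum(self.buf) / len(self.buf)
              return avg > self.threshold       # True -> re-estimate CSI

      mon = CsiMonitor()
      for m in (0.02, 0.03, 0.5, 0.6, 0.7, 0.8):  # channel starts drifting
          print(mon.update(m))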

  18. Wave-Ice interaction in the Marginal Ice Zone: Toward a Wave-Ocean-Ice Coupled Modeling System

    DTIC Science & Technology

    2015-09-30

    W. E. Rogers, Naval Research Laboratory, Code 7322, Stennis Space Center, MS 39529, phone: (228) 688-4727. Distribution Statement A: approved for public release; distribution is unlimited. [Figure caption fragment: MIZ using WW3 (3 frequency bins, ice retreat in August and ice advance in October); blue (solid): based on observations near Antarctica by Meylan.]

  19. Mathematical Inversion of Lightning Data: Techniques and Applications

    NASA Technical Reports Server (NTRS)

    Koshak, William

    2003-01-01

    A survey of some interesting mathematical inversion studies dealing with radio, optical, and electrostatic measurements of lightning is presented. We discuss why NASA is interested in lightning, what specific physical properties of lightning are retrieved, and what mathematical techniques are used to perform the retrievals. In particular, a relatively new multi-station VHF time-of-arrival (TOA) antenna network is now on-line in Northern Alabama and is discussed. The network, called the Lightning Mapping Array (LMA), employs GPS timing and detects VHF radiation from discrete segments (effectively point emitters) that comprise the channels of lightning strokes within cloud and ground flashes. The LMA supports on-going ground-validation activities of the low-Earth-orbiting Lightning Imaging Sensor (LIS) satellite developed at NASA Marshall Space Flight Center (MSFC) in Huntsville, Alabama. The LMA also provides detailed studies of the distribution and evolution of thunderstorms and lightning in the Tennessee Valley, and offers interesting comparisons with other meteorological/geophysical datasets. In order to take full advantage of these benefits, it is essential that the LMA channel mapping accuracy (in both space and time) be fully characterized and optimized. A new channel mapping retrieval algorithm is introduced for this purpose. To characterize the spatial distribution of retrieval errors, the algorithm has been applied to tens of millions of computer-simulated lightning VHF point sources placed at various ranges, azimuths, and altitudes relative to the LMA network. Statistical results are conveniently summarized in high-resolution, color-coded error maps.
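
    The TOA retrieval at the heart of such a network reduces to nonlinear least squares: solve for the source position and emission time that best match the arrival times at the stations. The sketch below uses an invented station layout and noise level, not the LMA's geometry.

      import numpy as np
      from scipy.optimize import least_squares

      C = 299792.458  # speed of light, km/s

      def residuals(p, stations, t_arr):
          x, y, z, t0 = p
          d = np.linalg.norm(stations - np.array([x, y, z]), axis=1)
          return t_arr - (t0 + d / C)

      rng = np.random.default_rng(3)
      stations = rng.uniform(-15.0, 15.0, (10, 3))
      stations[:, 2] = 0.0                          # stations on the ground
      src = np.array([3.0, -4.0, 7.0])              # true source (km)
      t_arr = np.linalg.norm(stations - src, axis=1) / C
      t_arr += rng.normal(0.0, 5e-8, t_arr.size)    # ~50 ns timing noise
      fit = least_squares(residuals, x0=[0, 0, 5, 0],
                          args=(stations, t_arr))
      print(fit.x)   # recovered (x, y, z, t0); t0 ~ 0 here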

  20. Simulation of multipactor on the rectangular grooved dielectric surface

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cai, Libing; Wang, Jianguo, E-mail: wanguiuc@mail.xjtu.edu.cn; Northwest Institute of Nuclear Technology, Xi'an, Shaanxi 710024

    2015-11-15

    Multipactor discharge on a rectangular grooved dielectric surface is simulated self-consistently using a two-and-a-half-dimensional (2.5D) electrostatic particle-in-cell (PIC) code. Compared with an electromagnetic PIC code, the electrostatic code gives a much more accurate solution for the space charge field caused by the multipactor electrons and the deposited surface charge. According to the rectangular groove width and height, the multipactor can be divided into four models; the spatial distributions of the multipactor electrons and the space charge fields are presented for these models. The results show that rectangular grooves in the different models give very different suppression effects on the multipactor: effective and efficient suppression of the multipactor can only be reached with a proper groove size.

  1. Towards Flange-to-Flange Turbopump Simulations for Liquid Rocket Engines

    NASA Technical Reports Server (NTRS)

    Kiris, Cetin; Williams, Robert

    2000-01-01

    The primary objective of this research is to support the design of liquid rocket systems for the Advanced Space Transportation System. Since space launch systems in the near future are likely to rely on liquid rocket engines, increasing the efficiency and reliability of the engine components is an important task. One of the major problems in the liquid rocket engine is to understand the fluid dynamics of fuel and oxidizer flows from the fuel tank to the plume. Understanding the flow through the entire turbopump geometry through numerical simulation will be of significant value for design, helping to improve the safety of future space missions. One of the milestones of this effort is to develop, apply, and demonstrate the capability and accuracy of 3D CFD methods as efficient design analysis tools on high performance computer platforms. The development of the MPI and MLP versions of the INS3D code is currently underway. The serial version of the INS3D code is a multidimensional incompressible Navier-Stokes solver based on overset grid technology. INS3D-MPI is based on explicit message passing across processors and is primarily suited for distributed memory systems. INS3D-MLP is based on the multi-level parallel method and is suitable for distributed-shared memory systems. For full turbopump simulations, a moving boundary capability and efficient time-accurate integration methods are built into the flow solver. To handle the geometric complexity and moving boundary problems, an overset grid scheme is incorporated into the solver so that new connectivity data are obtained at each time step. The Chimera overlapped grid scheme allows subdomains to move relative to each other and provides great flexibility when the boundary movement creates large displacements. The performance of two time integration schemes for time-accurate computations is investigated. For an unsteady flow that requires small physical time steps, the pressure projection method was found to be computationally efficient since it does not require any subiteration procedure. It was observed that the artificial compressibility method requires a fast convergence scheme at each physical time step in order to satisfy the incompressibility condition; this was obtained by using a GMRES-ILU(0) solver in our computations. When a line-relaxation scheme was used, the time accuracy was degraded and time-accurate computations became very expensive. The current geometry for the LOX boost turbopump has various rotating and stationary components, such as the inducer, stators, kicker, and hydraulic turbine, where the flow is extremely unsteady. Figure 1 shows the geometry and computed surface pressure of the inducer. The inducer and the hydraulic turbine rotate at different rotational speeds.

  2. Calculation of Multistage Turbomachinery Using Steady Characteristic Boundary Conditions

    NASA Technical Reports Server (NTRS)

    Chima, Rodrick V.

    1998-01-01

    A multiblock Navier-Stokes analysis code for turbomachinery has been modified to allow analysis of multistage turbomachines. A steady averaging-plane approach was used to pass information between blade rows. Characteristic boundary conditions written in terms of perturbations about the mean flow from the neighboring blade row were used to allow close spacing between the blade rows without forcing the flow to be axisymmetric. In this report the multiblock code is described briefly and the characteristic boundary conditions and the averaging-plane implementation are described in detail. Two approaches for averaging the flow properties are also described. A two-dimensional turbine stator case was used to compare the characteristic boundary conditions with standard axisymmetric boundary conditions. Differences were apparent but small in this low-speed case. The two-stage fuel turbine used on the space shuttle main engines was then analyzed using a three-dimensional averaging-plane approach. Computed surface pressure distributions on the stator blades and endwalls and computed distributions of blade surface heat transfer coefficient on three blades showed very good agreement with experimental data from two tests.

  3. mocca code for star cluster simulations - VI. Bimodal spatial distribution of blue stragglers

    NASA Astrophysics Data System (ADS)

    Hypki, Arkadiusz; Giersz, Mirek

    2017-11-01

    The paper presents an analysis of the formation mechanism and properties of the spatial distributions of blue stragglers in evolving globular clusters, based on numerical simulations done with the mocca code. First, we present N-body and mocca simulations that attempt to reproduce the simulations presented by Ferraro et al. (2012), and we show the agreement between N-body and the mocca code. We then discuss the formation process of the bimodal distribution. We report that we could not reproduce the simulations from Ferraro et al. (2012). Moreover, we show that the so-called bimodal spatial distribution of blue stragglers is a very transient feature: it is formed for one snapshot in time and can easily vanish in the next one. We also show that the radius of avoidance proposed by Ferraro et al. (2012) goes out of sync with the apparent minimum of the bimodal distribution after about two half-mass relaxation times (we were unable to determine the reason for this). This finding creates a real challenge for the dynamical clock, which uses this radius to determine the dynamical age of globular clusters. Additionally, the paper discusses a few important problems concerning the apparent visibility of the bimodal distributions, which have to be taken into account while studying the spatial distributions of blue stragglers.

  4. Fast computation of quadrupole and hexadecapole approximations in microlensing with a single point-source evaluation

    NASA Astrophysics Data System (ADS)

    Cassan, Arnaud

    2017-07-01

    The exoplanet detection rate from gravitational microlensing has grown significantly in recent years thanks to a great enhancement of resources and improved observational strategy. Current observatories include ground-based wide-field and/or robotic world-wide networks of telescopes, as well as space-based observatories such as the satellites Spitzer and Kepler/K2. This results in a large quantity of data to be processed and analysed, which is a challenge for modelling codes because of the complexity of the parameter space to be explored and the intensive computations required to evaluate the models. In this work, I present a method to compute the quadrupole and hexadecapole approximations of the finite-source magnification more efficiently than previously available codes, with routines about six times and four times faster, respectively. The quadrupole takes just about twice the time of a point-source evaluation, which advocates for generalizing its use to large portions of the light curves. The corresponding routines are available as open-source python codes.

  5. The differential algebra based multiple level fast multipole algorithm for 3D space charge field calculation and photoemission simulation

    DOE PAGES

    None, None

    2015-09-28

    Coulomb interaction between charged particles inside a bunch is one of the most important collective effects in beam dynamics, becoming even more significant as the energy of the particle beam is lowered to accommodate analytical and low-Z material imaging purposes, such as in the time-resolved Ultrafast Electron Microscope (UEM) development currently underway at Michigan State University. In addition, space charge effects are the key limiting factor in the development of ultrafast atomic resolution electron imaging and diffraction technologies and are also correlated with an irreversible growth in rms beam emittance due to fluctuating components of the nonlinear electron dynamics. In the short pulse regime used in the UEM, space charge effects also lead to virtual cathode formation, in which the negative charge of the electrons emitted at earlier times, combined with the attractive surface field, hinders further emission of particles and causes a degradation of the pulse properties. Space charge and virtual cathode effects and their remediation are core issues for the development of the next generation of high-brightness UEMs. Since analytical models are only applicable in special cases, numerical simulations, in addition to experiments, are usually necessary to accurately understand the space charge effect. In this paper we introduce a grid-free differential algebra based multiple level fast multipole algorithm, which calculates the 3D space charge field for n charged particles in an arbitrary distribution with an efficiency of O(n), and describe the implementation of the algorithm in a simulation code for space-charge-dominated photoemission processes.

  6. Laser based bi-directional Gbit ground links with the Tesat transportable adaptive optical ground station

    NASA Astrophysics Data System (ADS)

    Heine, Frank; Saucke, Karen; Troendle, Daniel; Motzigemba, Matthias; Bischl, Hermann; Elser, Dominique; Marquardt, Christoph; Henninger, Hennes; Meyer, Rolf; Richter, Ines; Sodnik, Zoran

    2017-02-01

    Optical ground stations can be an alternative to radio frequency based transmit (forward) and receive (return) systems for data relay services and other applications including direct to earth optical communications from low earth orbit spacecrafts, deep space receivers, space based quantum key distribution systems and Tbps capacity feeder links to geostationary spacecrafts. The Tesat Transportable Adaptive Optical Ground Station is operational since September 2015 at the European Space Agency site in Tenerife, Spain.. This paper reports about the results of the 2016 experimental campaigns including the characterization of the optical channel from Tenerife for an optimized coding scheme, the performance of the T-AOGS under different atmospheric conditions and the first successful measurements of the suitability of the Alphasat LCT optical downlink performance for future continuous variable quantum key distribution systems.

  7. XMDS2: Fast, scalable simulation of coupled stochastic partial differential equations

    NASA Astrophysics Data System (ADS)

    Dennis, Graham R.; Hope, Joseph J.; Johnsson, Mattias T.

    2013-01-01

    XMDS2 is a cross-platform, GPL-licensed, open source package for numerically integrating initial value problems that range from a single ordinary differential equation up to systems of coupled stochastic partial differential equations. The equations are described in a high-level XML-based script, and the package generates low-level, optionally parallelised C++ code for the efficient solution of those equations. It combines the advantages of high-level simulations, namely fast and low-error development, with the speed, portability and scalability of hand-written code. XMDS2 is a complete redesign of the XMDS package, and features support for a much wider problem space while also producing faster code.
    Program summary: Program title: XMDS2. Catalogue identifier: AENK_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AENK_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: GNU General Public License, version 2. No. of lines in distributed program, including test data, etc.: 872490. No. of bytes in distributed program, including test data, etc.: 45522370. Distribution format: tar.gz. Programming language: Python and C++. Computer: Any computer with a Unix-like system, a C++ compiler and Python. Operating system: Any Unix-like system; developed under Mac OS X and GNU/Linux. RAM: Problem dependent (roughly 50 bytes per grid point). Classification: 4.3, 6.5. External routines: The external libraries required are problem-dependent. Uses FFTW3 Fourier transforms (used only for FFT-based spectral methods), dSFMT random number generation (used only for stochastic problems), MPI message-passing interface (used only for distributed problems), HDF5, GNU Scientific Library (used only for Bessel-based spectral methods) and a BLAS implementation (used only for non-FFT-based spectral methods). Nature of problem: General coupled initial-value stochastic partial differential equations. Solution method: Spectral method with method-of-lines integration. Running time: Determined by the size of the problem.
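
    The "spectral method with method-of-lines integration" named in the program summary can be illustrated by hand for a single deterministic PDE: the sketch below integrates the periodic heat equation u_t = D u_xx with FFT-based spatial derivatives and classic RK4 time stepping. This is the discretization strategy XMDS2 automates, written out manually; it is not code generated by XMDS2.

        import numpy as np

        L, N, D = 2 * np.pi, 128, 0.1
        x = np.linspace(0.0, L, N, endpoint=False)
        k = np.fft.fftfreq(N, d=L / N) * 2 * np.pi   # spectral wavenumbers
        u = np.exp(-10 * (x - L / 2)**2)             # initial condition

        def rhs(u):
            # Spatial derivative evaluated spectrally: u_xx -> -k^2 * u_hat
            return np.real(np.fft.ifft(-D * k**2 * np.fft.fft(u)))

        dt, steps = 1e-3, 5000
        for _ in range(steps):                       # classic RK4 stepping
            k1 = rhs(u); k2 = rhs(u + 0.5 * dt * k1)
            k3 = rhs(u + 0.5 * dt * k2); k4 = rhs(u + dt * k3)
            u += dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6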

  8. TSPP - A Collection of FORTRAN Programs for Processing and Manipulating Time Series

    USGS Publications Warehouse

    Boore, David M.

    2008-01-01

    This report lists a number of FORTRAN programs that I have developed over the years for processing and manipulating strong-motion accelerograms. The collection is titled TSPP, which stands for Time Series Processing Programs. I have excluded 'strong-motion accelerograms' from the title, however, as the boundary between 'strong' and 'weak' motion has become blurred with the advent of broadband sensors and high-dynamic-range dataloggers, and many of the programs can be used with any evenly spaced time series, not just acceleration time series. This version of the report is relatively brief, consisting primarily of an annotated list of the programs, with two examples of processing and a few comments on usage. I do not include a parameter-by-parameter guide to the programs. Future versions might include more examples of processing, illustrating the various parameter choices in the programs. Although these programs have been used by the U.S. Geological Survey, no warranty, expressed or implied, is made by the USGS as to the accuracy or functioning of the programs and related program material, nor shall the fact of distribution constitute any such warranty, and no responsibility is assumed by the USGS in connection therewith. The programs are distributed on an 'as is' basis, with no warranty of support from me. These programs were written for my use and are being publicly distributed in the hope that others might find them as useful as I have. I would, however, appreciate being informed about bugs, and I always welcome suggestions for improvements to the codes. Please note that I have made little effort to optimize the coding of the programs or to include a user-friendly interface. (Many of the programs in this collection have been included in the software usdp (Utility Software for Data Processing), being developed by Akkar et al. (personal communication, 2008); usdp includes a graphical user interface.) Speed of execution has been sacrificed in favor of code that is intended to be easy to understand, although on modern computers speed of execution is rarely a problem. I will be pleased if users incorporate portions of my programs into their own applications; I only ask that reference be made to this report as the source of the programs.
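
    The flavor of such programs can be conveyed by a minimal processing step: mean-removal baseline correction of an evenly spaced accelerogram followed by trapezoidal integration to velocity. The sketch below is illustrative only and is not a transcription of any particular TSPP program.

        import numpy as np

        def process_accelerogram(acc, dt):
            # Remove the mean (simplest baseline correction), then integrate
            # the evenly spaced acceleration series to velocity by the
            # trapezoidal rule, with zero initial velocity.
            acc = np.asarray(acc, dtype=float)
            acc = acc - acc.mean()
            vel = np.concatenate(([0.0], np.cumsum(0.5 * (acc[1:] + acc[:-1]) * dt)))
            return acc, vel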

  9. Real-time realizations of the Bayesian Infrasonic Source Localization Method

    NASA Astrophysics Data System (ADS)

    Pinsky, V.; Arrowsmith, S.; Hofstetter, A.; Nippress, A.

    2015-12-01

    The Bayesian Infrasonic Source Localization method (BISL), introduced by Modrak et al. (2010) and upgraded by Marcillo et al. (2014), is designed for accurate estimation of the atmospheric event origin at local, regional and global scales by seismic and infrasonic networks and arrays. BISL is based on probabilistic models of the source-to-station infrasonic signal propagation time, picking time and azimuth estimate, merged with prior knowledge about the celerity distribution. It requires, at each hypothetical source location, integration of the product of the corresponding source-station likelihood functions, multiplied by a prior probability density function of celerity, over the multivariate parameter space. The present BISL realization is a generally time-consuming procedure based on numerical integration. The computational scheme proposed here simplifies the target function so that the integrals are taken exactly and are represented via standard functions. This makes the procedure much faster and realizable in real time without practical loss of accuracy. The procedure, executed as PYTHON-FORTRAN code, demonstrates high performance on a set of model and real data.
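
    A minimal numerical version of the pre-existing (slow) formulation can be sketched as a grid search: for every candidate source cell, the product of per-station Gaussian arrival-time likelihoods is integrated over a discretized celerity prior. The station coordinates, arrival times, and prior parameters below are invented for illustration, and the origin time is assumed known, which the full BISL treatment does not require.

        import numpy as np

        stations = np.array([[0.0, 50.0], [80.0, 0.0], [60.0, 90.0]])  # km
        t_obs = np.array([170.0, 190.0, 240.0])      # arrival times, s
        sigma_t = 5.0                                # picking uncertainty, s
        celerities = np.linspace(0.28, 0.34, 61)     # km/s grid
        prior_c = np.exp(-0.5 * ((celerities - 0.31) / 0.015)**2)
        prior_c /= prior_c.sum()                     # normalized celerity prior

        xs = ys = np.linspace(-100.0, 200.0, 151)
        post = np.zeros((len(xs), len(ys)))
        for i, x in enumerate(xs):
            for j, y in enumerate(ys):
                r = np.hypot(stations[:, 0] - x, stations[:, 1] - y)  # ranges, km
                t_pred = r[None, :] / celerities[:, None]  # (n_celerity, n_station)
                like = np.exp(-0.5 * np.sum(((t_pred - t_obs) / sigma_t)**2, axis=1))
                post[i, j] = np.sum(like * prior_c)        # marginalize celerity
        post /= post.sum()    # posterior source-location map on the grid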

  10. Time coded distribution via broadcasting stations

    NASA Technical Reports Server (NTRS)

    Leschiutta, S.; Pettiti, V.; Detoma, E.

    1979-01-01

    The distribution of standard time signals via AM and FM broadcasting stations presents the distinct advantages of offering wide-area coverage and allowing the use of inexpensive receivers, but the signals are radiated a limited number of times per day, are not usually available during the night, and no full and automatic synchronization of a remote clock is possible. As an attempt to overcome some of these problems, a time-coded signal with complete date information is diffused by the IEN via the national broadcasting networks in Italy. These signals are radiated by some 120 AM and about 3000 FM and TV transmitters around the country. In this way, a time-ordered system with an accuracy of a couple of milliseconds is easily achieved.

  11. Zipf's Law in Short-Time Timbral Codings of Speech, Music, and Environmental Sound Signals

    PubMed Central

    Haro, Martín; Serrà, Joan; Herrera, Perfecto; Corral, Álvaro

    2012-01-01

    Timbre is a key perceptual feature that allows discrimination between different sounds. Timbral sensations are highly dependent on the temporal evolution of the power spectrum of an audio signal. In order to quantitatively characterize such sensations, the shape of the power spectrum has to be encoded in a way that preserves certain physical and perceptual properties. Therefore, it is common practice to encode short-time power spectra using psychoacoustical frequency scales. In this paper, we study and characterize the statistical properties of such encodings, here called timbral code-words. In particular, we report on rank-frequency distributions of timbral code-words extracted from 740 hours of audio coming from disparate sources such as speech, music, and environmental sounds. Analogously to text corpora, we find a heavy-tailed Zipfian distribution with an exponent close to one. Importantly, this distribution is found independently of different encoding decisions and regardless of the audio source. Further analysis of the intrinsic characteristics of the most and least frequent code-words reveals that the most frequent code-words tend to have a more homogeneous structure. We also find that speech and music databases have specific, distinctive code-words while, in the case of the environmental sounds, such database-specific code-words are not present. Finally, we find that a Yule-Simon process with memory provides a reasonable quantitative approximation for our data, suggesting the existence of a common simple generative mechanism for all considered sound sources. PMID:22479497
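
    A rank-frequency analysis of this kind reduces to a few lines: count code-word occurrences, sort by frequency, and fit the Zipf exponent in log-log space. The least-squares fit below is only a first-pass illustration; the paper's analysis is more careful, e.g. about fitting ranges and maximum-likelihood estimation of heavy-tailed exponents.

        import numpy as np
        from collections import Counter

        def zipf_exponent(codewords):
            # Rank-frequency Zipf exponent via a log-log least-squares fit;
            # `codewords` is any iterable of hashable (timbral) code-words.
            freqs = np.array(sorted(Counter(codewords).values(), reverse=True), float)
            ranks = np.arange(1, len(freqs) + 1)
            slope, _ = np.polyfit(np.log(ranks), np.log(freqs), 1)
            return -slope   # alpha in f(r) ~ r**(-alpha); ~1 for Zipfian data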

  12. Vibration Response Models of a Stiffened Aluminum Plate Excited by a Shaker

    NASA Technical Reports Server (NTRS)

    Cabell, Randolph H.

    2008-01-01

    Numerical models of structural-acoustic interactions are of interest to aircraft designers and the space program. This paper describes a comparison between two energy finite element codes, a statistical energy analysis code, a structural finite element code, and the experimentally measured response of a stiffened aluminum plate excited by a shaker. Different methods for modeling the stiffeners and the power input from the shaker are discussed. The results show that the energy codes (energy finite element and statistical energy analysis) accurately predicted the measured mean square velocity of the plate. In addition, predictions from an energy finite element code had the best spatial correlation with measured velocities. However, predictions from a considerably simpler, single subsystem, statistical energy analysis model also correlated well with the spatial velocity distribution. The results highlight a need for further work to understand the relationship between modeling assumptions and the prediction results.

  13. An efficient HZETRN (a galactic cosmic ray transport code)

    NASA Technical Reports Server (NTRS)

    Shinn, Judy L.; Wilson, John W.

    1992-01-01

    An accurate and efficient engineering code for analyzing the shielding requirements against high-energy galactic heavy ions is needed. HZETRN is a deterministic code developed at Langley Research Center that is constantly under improvement in both physics and numerical computation and is targeted for such use. One problem area connected with the space-marching technique used in this code is the propagation of the local truncation error. By improving the numerical algorithms for interpolation, integration, and the grid distribution formula, the efficiency of the code is increased by a factor of eight as the number of energy grid points is reduced. Numerical accuracy of better than 2 percent for a shield thickness of 150 g/cm(exp 2) is found when a 45-point energy grid is used. The propagating step size, which is related to the perturbation theory, is also reevaluated.

  14. A Data Parallel Multizone Navier-Stokes Code

    NASA Technical Reports Server (NTRS)

    Jespersen, Dennis C.; Levit, Creon; Kwak, Dochan (Technical Monitor)

    1995-01-01

    We have developed a data parallel multizone compressible Navier-Stokes code on the Connection Machine CM-5. The code is set up for implicit time-stepping on single or multiple structured grids. For multiple grids and geometrically complex problems, we follow the "chimera" approach, where flow data on one zone is interpolated onto another in the region of overlap. We will describe our design philosophy and give some timing results for the current code. The design choices can be summarized as: 1. finite differences on structured grids; 2. implicit time-stepping with either distributed solves or data motion and local solves; 3. sequential stepping through multiple zones with interzone data transfer via a distributed data structure. We have implemented these ideas on the CM-5 using CMF (Connection Machine Fortran), a data parallel language which combines elements of Fortran 90 and certain extensions, and which bears a strong similarity to High Performance Fortran (HPF). One interesting feature is the issue of turbulence modeling, where the architecture of a parallel machine makes the use of an algebraic turbulence model awkward, whereas models based on transport equations are more natural. We will present some performance figures for the code on the CM-5, and consider the issues involved in transitioning the code to HPF for portability to other parallel platforms.

  15. Numerical study of alfvénic wave activity in the solar wind as a cause for pitch angle scattering with focus on kinetic processes

    NASA Astrophysics Data System (ADS)

    Keilbach, D.; Berger, L.; Drews, C.; Marsch, E.; Wimmer-Schweingruber, R. F.

    2017-12-01

    Recent studies, which determined the inflow longitude of the local interstellar medium from the anisotropy of interstellar pickup ion (PUI) radial velocity, have once again raised the question of how transport effects, and especially wave activity in the solar wind, modify the velocity distribution function of PUIs. This study investigates the modification of an oxygen PUI torus distribution by Alfvénic waves qualitatively with a numerical approach. The focus of this study is to understand this modification kinetically; that is, instead of describing the PUI transport through diffusion approaches, we trace the trajectories of test particles in pitch angle space with a time resolution of at least 100 time steps per gyro-orbit, in order to find first principles of wave-particle interactions on the most basic scale. To this end we have implemented a leapfrog solver of the Lorentz-Newton equations of motion for a charged test particle in an electromagnetic field. The Alfvénic waves were represented by a continuous circularly polarized wave superimposed on a constant 5 nT background magnetic field. In addition, an electric field arising from induction has been added to the simulation's boundary conditions. The simulation code computes the particles' trajectories in the solar wind bulk frame. Upon interaction with single-frequency waves, the particles are found to perform stationary trajectories in pitch angle space, so that the pitch angle distribution of an ensemble of test particles does not experience systematic broadening over time. Also, the particles do not react most strongly to waves at resonant frequencies, since the pitch angle modification by the waves quickly sweeps their parallel velocity out of resonance. However, for frequencies close to first-order resonance, strong interactions between waves and particles are observed. Altogether, the framework of our simulation is readily expandable to simulate additional effects which may strongly modify the test particles' pitch angle distribution (e.g., collisions with solar wind particles or gradient drifts). So far we have expanded the simulation to support intermittent waves, where we have observed that the pitch angle distribution of the test particles broadens systematically over time.
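
    The kinetic machinery described here can be miniaturized as follows: a Boris-type leapfrog pusher advances a test ion in a constant background field plus a circularly polarized transverse wave with an induction electric field, and the pitch-angle cosine is recorded each step. The field amplitudes, assumed Alfvén speed, and wave frequency below are illustrative placeholders rather than the paper's parameters, and the standard Boris rotation stands in for whatever leapfrog variant the authors implemented.

        import numpy as np

        q_m = 9.58e7 / 16.0       # charge-to-mass ratio of O+, C/kg (proton q/m / 16)
        B0 = 5e-9                 # 5 nT background field along z, T
        dB = 5e-10                # wave amplitude, T (illustrative)
        w = 0.1 * q_m * B0        # wave frequency well below the gyrofrequency
        vA = 5e4                  # assumed Alfven speed, m/s (illustrative)
        zhat = np.array([0.0, 0.0, 1.0])

        def fields(t):
            # Circularly polarized transverse wave plus simple induction E field.
            Bw = dB * np.array([np.cos(w * t), np.sin(w * t), 0.0])
            E = -vA * np.cross(zhat, Bw)
            return E, Bw + B0 * zhat

        def boris_step(v, t, dt):
            # Standard Boris push: half electric kick, magnetic rotation, half kick.
            E, B = fields(t)
            vm = v + 0.5 * dt * q_m * E
            tvec = 0.5 * dt * q_m * B
            s = 2.0 * tvec / (1.0 + tvec @ tvec)
            vp = vm + np.cross(vm + np.cross(vm, tvec), s)
            return vp + 0.5 * dt * q_m * E

        v = np.array([2e5, 0.0, 1e5])           # ring-like initial velocity, m/s
        dt = 2 * np.pi / (q_m * B0) / 200.0     # ~200 steps per gyro-orbit
        mu = []                                 # pitch-angle cosine history
        for n in range(20000):
            v = boris_step(v, n * dt, dt)
            mu.append(v[2] / np.linalg.norm(v))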

  16. Recent update of the RPLUS2D/3D codes

    NASA Technical Reports Server (NTRS)

    Tsai, Y.-L. Peter

    1991-01-01

    The development of the RPLUS2D/3D codes is summarized. These codes utilize LU algorithms to solve chemical non-equilibrium flows in a body-fitted coordinate system. The motivation behind the development of these codes is the need to numerically predict chemical non-equilibrium flows for the National AeroSpace Plane Program. Recent improvements include a vectorization method, blocking algorithms for geometric flexibility, out-of-core storage for large-size problems, and an LU-SW/UP combination for CPU-time efficiency and solution quality.

  17. Bumper 3 Update for IADC Protection Manual

    NASA Technical Reports Server (NTRS)

    Christiansen, Eric L.; Nagy, Kornel; Hyde, Jim

    2016-01-01

    The Bumper code has been the standard used by NASA and contractors to perform meteoroid/debris risk assessments since 1990. It has undergone extensive revisions and updates [NASA JSC HITF website; Christiansen et al., 1992, 1997]. NASA Johnson Space Center (JSC) has applied Bumper to risk assessments for Space Station, Shuttle, Mir, Extravehicular Mobility Unit (EMU) space suits, and other spacecraft (e.g., LDEF, Iridium, TDRS, and the Hubble Space Telescope). Bumper continues to be updated with changes in the ballistic limit equations describing the failure threshold of various spacecraft components, as well as changes in the meteoroid and debris environment models. Significant efforts are expended to validate Bumper and benchmark it against other meteoroid/debris risk assessment codes. Bumper 3 is a refactored version of Bumper II. The structure of the code was extensively modified to improve maintenance, performance and flexibility. The architecture was changed to separate the frequently updated ballistic limit equations from the relatively stable common core functions of the program. These updates allow NASA to produce specific editions of Bumper 3 that are tailored to specific customer requirements. The core consists of common code necessary to process the Micrometeoroid and Orbital Debris (MMOD) environment models, assess shadowing, and calculate MMOD risk. The library of target response subroutines includes a broad range of different types of MMOD shield ballistic limit equations, as well as equations describing damage to various spacecraft subsystems or hardware (thermal protection materials, windows, radiators, solar arrays, cables, etc.). The core and library of ballistic response subroutines are maintained under configuration control. A change in the core will affect all editions of the code, whereas a change in one or more of the response subroutines will affect all editions of the code that contain the particular response subroutines which are modified. Note that the Bumper II program is no longer maintained or distributed by NASA.

  18. Toward enhancing the distributed video coder under a multiview video codec framework

    NASA Astrophysics Data System (ADS)

    Lee, Shih-Chieh; Chen, Jiann-Jone; Tsai, Yao-Hong; Chen, Chin-Hua

    2016-11-01

    The advance of video coding technology enables multiview video (MVV) or three-dimensional television (3-D TV) display for users with or without glasses. For mobile devices or wireless applications, a distributed video coder (DVC) can be utilized to shift the encoder complexity to the decoder under the MVV coding framework, denoted as multiview distributed video coding (MDVC). We proposed to exploit both inter- and intraview video correlations to enhance side information (SI) and improve the MDVC performance: (1) based on the multiview motion estimation (MVME) framework, a categorized block matching prediction with fidelity weights (COMPETE) was proposed to yield a high-quality SI frame for better DVC reconstructed images. (2) The block transform coefficient properties, i.e., DCs and ACs, were exploited to design the priority rate control for the turbo code, such that the DVC decoding can be carried out with the fewest parity bits. In comparison, the proposed COMPETE method demonstrated lower time complexity while presenting better reconstructed video quality. Simulations show that the proposed COMPETE can reduce the time complexity of MVME by a factor of 1.29 to 2.56, as compared to previous hybrid MVME methods, while the peak signal-to-noise ratios (PSNRs) of a decoded video can be improved by 0.2 to 3.5 dB, as compared to H.264/AVC intracoding.

  19. OFFSET - RAY TRACING OPTICAL ANALYSIS OF OFFSET SOLAR COLLECTOR FOR SPACE STATION SOLAR DYNAMIC POWER SYSTEM

    NASA Technical Reports Server (NTRS)

    Jefferies, K.

    1994-01-01

    OFFSET is a ray tracing computer code for optical analysis of a solar collector. The code models the flux distributions within the receiver cavity produced by reflections from the solar collector. It was developed to model the offset solar collector of the solar dynamic electric power system being developed for Space Station Freedom. OFFSET has been used to improve the understanding of the collector-receiver interface and to guide the efforts of NASA contractors also researching the optical components of the power system. The collector for Space Station Freedom consists of 19 hexagonal panels, each containing 24 triangular, reflective facets. Current research is geared toward optimizing the flux distribution inside the receiver via changes in collector design and receiver orientation. OFFSET offers many options for experimenting with the design of the system. The offset parabolic collector model configuration is determined by an input file of facet corner coordinates. The user may choose other configurations by changing this file, but simulating collectors that have other than 19 groups of 24 triangular facets would require modification of the FORTRAN code. Each of the roughly 500 facets in the assembled collector may be independently aimed to smooth out, or tailor, the flux distribution on the receiver's wall. OFFSET simulates the effects of design changes such as in receiver aperture location, tilt angle, and collector facet contour. Unique features of OFFSET include: 1) equations developed to pseudo-randomly select ray-originating sources on the Sun which appear evenly distributed and include solar limb darkening; 2) a cone-optics technique used to add surface specular error to the ray-originating sources to determine the apparent ray sources of the reflected Sun; 3) a choice of facet reflective surface contour -- spherical, ideal parabolic, or toroidal; 4) Gaussian distributions of radial and tangential components of surface slope error added to the surface normals at the ten nodal points on each facet; and 5) color contour plots of receiver incident flux distribution generated by PATRAN processing of FORTRAN computer code output. OFFSET output includes a file of input data for confirmation, a PATRAN results file containing the values necessary to plot the flux distribution at the receiver surface, a PATRAN results file containing the intensity distribution on a 40 x 40 cm area of the receiver aperture plane, a data file containing calculated information on the system configuration, a file including the X-Y coordinates of the target points of each collector facet on the aperture opening, and twelve P/PLOT input data files to allow X-Y plotting of various results data. OFFSET is written in FORTRAN (70%) for the IBM VM operating system. The code contains PATRAN statements (12%) and P/PLOT statements (18%) for generating plots. Once the program has been run on VM (or an equivalent system), the PATRAN and P/PLOT files may be transferred to a DEC VAX (or equivalent system) with access to PATRAN for post-processing. OFFSET was written in 1988 and last updated in 1989. PATRAN is a registered trademark of PDA Engineering. IBM is a registered trademark of International Business Machines Corporation. DEC VAX is a registered trademark of Digital Equipment Corporation.
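
    Feature (1), pseudo-random selection of limb-darkened ray origins on the solar disk, can be sketched with simple rejection sampling against a linear limb-darkening law. The coefficient and function names below are illustrative, and rejection sampling is a generic stand-in: per the abstract, OFFSET uses purpose-built equations rather than this technique.

        import numpy as np

        def sample_sun_rays(n, u_limb=0.56, rng=np.random.default_rng(0)):
            # Sample n ray origins on the unit solar disk, weighted by the
            # linear limb-darkening law I(mu) = 1 - u*(1 - mu), mu = cos(theta).
            pts = np.empty((n, 2))
            filled = 0
            while filled < n:
                # Propose uniform points on the unit disk, then accept each
                # with probability proportional to the local disk intensity.
                xy = rng.uniform(-1.0, 1.0, size=(2 * n, 2))
                r2 = np.sum(xy**2, axis=1)
                xy, r2 = xy[r2 <= 1.0], r2[r2 <= 1.0]
                mu = np.sqrt(1.0 - r2)
                keep = rng.uniform(0.0, 1.0, len(xy)) < (1.0 - u_limb * (1.0 - mu))
                take = min(n - filled, keep.sum())
                pts[filled:filled + take] = xy[keep][:take]
                filled += take
            return pts   # unit-disk coordinates; scale by the solar angular radius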

  20. Four-Dimensional Continuum Gyrokinetic Code: Neoclassical Simulation of Fusion Edge Plasmas

    NASA Astrophysics Data System (ADS)

    Xu, X. Q.

    2005-10-01

    We are developing a continuum gyrokinetic code, TEMPEST, to simulate edge plasmas. Our code represents velocity space via a grid in equilibrium energy and magnetic moment variables, and configuration space via poloidal magnetic flux and poloidal angle. The geometry is that of a fully diverted tokamak (single or double null) and so includes boundary conditions for both closed magnetic flux surfaces and open field lines. The 4-dimensional code includes kinetic electrons and ions, and electrostatic field-solver options, and simulates neoclassical transport. The present implementation is a Method of Lines approach where spatial finite-differences (higher order upwinding) and implicit time advancement are used. We present results of initial verification and validation studies: transition from collisional to collisionless limits of parallel end-loss in the scrape-off layer, self-consistent electric field, and the effect of the real X-point geometry and edge plasma conditions on the standard neoclassical theory, including a comparison of our 4D code with other kinetic neoclassical codes and experiments.

  1. Dynamically Alterable Arrays of Polymorphic Data Types

    NASA Technical Reports Server (NTRS)

    James, Mark

    2006-01-01

    An application library package was developed that represents Deep Space Network (DSN) message packets as dynamically alterable arrays composed of arbitrary polymorphic data types. The software addresses a limitation of the present state of the practice, in which an array is directly composed of a single monomorphic data type. This is a severe limitation when dealing with science data, in that the types of objects involved are typically not known in advance and are, therefore, dynamic in nature. The unique feature of this approach is that it enables one to define the dynamic shape of the matrix at run-time, with the ability to store polymorphic data types in each of its indices. Existing languages such as C and C++ have the restriction that the shape of the array must be known in advance and each of its elements be a monomorphic data type strictly defined at compile-time. This program can be executed on a variety of platforms. It can be distributed in either source code or binary code form. It must be run in conjunction with any one of a number of Lisp compilers that are available commercially or as shareware.

  2. Exact Magnetic Diffusion Solutions for Magnetohydrodynamic Code Verification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miller, D S

    In this paper, the authors present several new exact analytic space- and time-dependent solutions to the problem of magnetic diffusion in R-Z geometry. These problems serve to verify several different elements of an MHD implementation: magnetic diffusion, external circuit time integration, current and voltage energy sources, spatially dependent conductivities, and ohmic heating. The exact solutions are shown in comparison with 2D simulation results from the Ares code.

  3. Distributed intelligent control and status networking

    NASA Technical Reports Server (NTRS)

    Fortin, Andre; Patel, Manoj

    1993-01-01

    Over the past two years, the Network Control Systems Branch (Code 532) has been investigating control and status networking technologies. These emerging technologies use distributed processing over a network to accomplish a particular custom task. These networks consist of small intelligent 'nodes' that perform simple tasks. Containing simple, inexpensive hardware and software, these nodes can be easily developed and maintained. Once networked, the nodes can perform a complex operation without a central host. This type of system provides an alternative to more complex control and status systems which require a central computer. This paper will provide some background and discuss some applications of this technology. It will also demonstrate the suitability of one particular technology for the Space Network (SN) and discuss the prototyping activities of Code 532 utilizing this technology.

  4. SU-F-T-366: Dosimetric Parameters Enhancement of 120-Leaf Millennium MLC Using EGSnrc and IAEA Phase-Space Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Haddad, K; Alopoor, H

    Purpose: Recently, multileaf collimators (MLCs) have become an important part of LINAC collimation systems because they reduce treatment planning time and improve conformity. Important factors that affect MLC collimation performance are the leaves' material composition and thickness. In this study, we investigate the main dosimetric parameters of the 120-leaf Millennium MLC, including dose at the buildup point, physical penumbra, and average and end-leaf leakages. Effects of the leaf geometry and density on these parameters are evaluated. Methods: From the EGSnrc Monte Carlo code, the BEAMnrc and DOSXYZnrc modules are used to evaluate the dosimetric parameters of a water phantom exposed to a Varian xi at 100 cm SSD. Using IAEA phase-space data just above the MLC (Z=46cm) and BEAMnrc, a new phase-space data set at Z=52cm is produced for the modified 120-leaf Millennium MLC. The MLC is modified in both leaf thickness and material composition. The EGSgui code generates a 521ICRU library for tungsten alloys. DOSXYZnrc with the new phase space evaluates the dose distribution in a water phantom of 60×60×20 cm3 with a voxel size of 4×4×2 mm3. Using DOSXYZnrc dose distributions for open and closed beams, together with the leakage definitions, the end leakage, average leakage and physical penumbra are evaluated. Results: A new MLC with improved dosimetric parameters is proposed. The physical penumbra for the proposed MLC is 4.7 mm, compared to 5.16 mm for the Millennium. Average leakage in our design is reduced to 1.16%, compared to 1.73% for the Millennium, and the end-leaf leakage of the suggested design is also reduced to 4.86%, compared to 7.26% for the Millennium. Conclusion: The results show that the proposed MLC with enhanced dosimetric parameters could improve the conformity of treatment planning.

  5. Assessing the Associations Between Types of Green Space, Physical Activity, and Health Indicators Using GIS and Participatory Survey

    NASA Astrophysics Data System (ADS)

    Akpinar, A.

    2017-11-01

    This study explores whether specific types of green spaces (i.e. urban green spaces, forests, agricultural lands, rangelands, and wetlands) are associated with physical activity, quality of life, and cardiovascular disease prevalence. A sample of 8,976 respondents from the Behavioral Risk Factor Surveillance System, conducted in 2006 in Washington State across 291 zip-codes, was analyzed. Measures included physical activity status, quality of life, and cardiovascular disease prevalence (i.e. heart attack, angina, and stroke). Percentage of green spaces was derived from the National Land Cover Dataset and measured with Geographical Information System. Multilevel regression analyses were conducted to analyze the data while controlling for age, sex, race, weight, marital status, occupation, income, education level, and zip-code population and socio-economic situation. Regression results reveal that no green space types were associated with physical activity, quality of life, and cardiovascular disease prevalence. On the other hand, the analysis shows that physical activity was associated with general health, quality of life, and cardiovascular disease prevalence. The findings suggest that other factors such as size, structure and distribution (sprawled or concentrated, large or small), quality, and characteristics of green space might be important in general health, quality of life, and cardiovascular disease prevalence rather than green space types. Therefore, further investigations are needed.

  6. A Concept for Run-Time Support of the Chapel Language

    NASA Technical Reports Server (NTRS)

    James, Mark

    2006-01-01

    A document presents a concept for run-time implementation of other concepts embodied in the Chapel programming language. (Now undergoing development, Chapel is intended to become a standard language for parallel computing that would surpass older such languages both in computational performance and in the efficiency with which pre-existing code can be reused and new code written.) The aforementioned other concepts are those of distributions, domains, allocations, and access, as defined in a separate document called "A Semantic Framework for Domains and Distributions in Chapel" and linked to a language specification defined in another separate document called "Chapel Specification 0.3." The concept presented in the instant report is recognition that a data domain invented for Chapel offers a novel approach to distributing and processing data in a massively parallel environment. The concept is offered as a starting point for development of working descriptions of functions and data structures that would be necessary to implement interfaces to a compiler for transforming the aforementioned other concepts from their representations in Chapel source code to their run-time implementations.

  7. Comparative modelling of lower hybrid current drive with two launcher designs in the Tore Supra tokamak

    NASA Astrophysics Data System (ADS)

    Nilsson, E.; Decker, J.; Peysson, Y.; Artaud, J.-F.; Ekedahl, A.; Hillairet, J.; Aniel, T.; Basiuk, V.; Goniche, M.; Imbeaux, F.; Mazon, D.; Sharma, P.

    2013-08-01

    Fully non-inductive operation with lower hybrid current drive (LHCD) in the Tore Supra tokamak is achieved using either a fully active multijunction (FAM) launcher or a more recent ITER-relevant passive active multijunction (PAM) launcher, or both launchers simultaneously. While both antennas show comparable experimental efficiencies, the analysis of stability properties in long discharges suggests different current profiles. We present comparative modelling of LHCD with the two different launchers to characterize the effect of the respective antenna spectra on the driven current profile. The interpretative modelling of LHCD is carried out using a chain of codes calculating, respectively, the global discharge evolution (tokamak simulator METIS), the spectrum at the antenna mouth (LH coupling code ALOHA), the LH wave propagation (ray-tracing code C3PO), and the distribution function (3D Fokker-Planck code LUKE). Essential aspects of the fast electron dynamics in time, space and energy are obtained from hard x-ray measurements of fast electron bremsstrahlung emission using a dedicated tomographic system. LHCD simulations are validated by systematic comparisons between these experimental measurements and the reconstructed signal calculated by the code R5X2 from the LUKE electron distribution. An excellent agreement is obtained in the presence of strong Landau damping (found under low-density and high-power conditions in Tore Supra), for which the ray-tracing model is valid for modelling the LH wave propagation. Two aspects of the antenna spectra are found to have a significant effect on LHCD. First, the driven current is found to be proportional to the directivity, which depends upon the respective weight of the main positive and main negative lobes and is particularly sensitive to the density in front of the antenna. Second, the position of the main negative lobe in the spectrum is different for the two launchers. As this lobe drives a counter-current, the resulting driven current profile is also different for the FAM and PAM launchers.

  8. Thrust chamber performance using Navier-Stokes solution. [space shuttle main engine viscous nozzle calculation

    NASA Technical Reports Server (NTRS)

    Chan, J. S.; Freeman, J. A.

    1984-01-01

    The viscous, axisymmetric flow in the thrust chamber of the space shuttle main engine (SSME) was computed on the CYBER 205 computer using the general interpolants method (GIM) code. Results show that Navier-Stokes codes can be used for these flows to study trends and viscous effects as well as determine flow patterns, but further research and development is needed before they can be used as production tools for nozzle performance calculations. The GIM formulation, numerical scheme, and computer code are described. The actual SSME nozzle computation showing grid points, flow contours, and flow parameter plots is discussed. The computer system and run times/costs are detailed.

  9. Investigation of HZETRN 2010 as a Tool for Single Event Effect Qualification of Avionics Systems

    NASA Technical Reports Server (NTRS)

    Rojdev, Kristina; Koontz, Steve; Atwell, William; Boeder, Paul

    2014-01-01

    NASA's future missions are focused on long-duration deep space missions for human exploration, which offer no options for a quick emergency return to Earth. The combination of long mission duration with no quick emergency return option leads to unprecedented spacecraft system safety and reliability requirements. It is important that spacecraft avionics systems for human deep space missions are not susceptible to Single Event Effect (SEE) failures caused by space radiation (primarily the continuous galactic cosmic ray background and the occasional solar particle event) interactions with electronic components and systems. SEE effects are typically managed during the design, development, and test (DD&T) phase of spacecraft development by using heritage hardware (if possible) and through extensive component-level testing, followed by system-level failure analysis tasks that are both time consuming and costly. The ultimate product of the SEE DD&T program is a prediction of spacecraft avionics reliability in the flight environment, produced using various nuclear reaction and transport codes in combination with the component and subsystem level radiation test data. Previous work by Koontz et al. [1] utilized FLUKA, a Monte Carlo nuclear reaction and transport code, to calculate SEE and single event upset (SEU) rates. This code was then validated against in-flight data for a variety of spacecraft and space flight environments. However, FLUKA has a long run time (on the order of days). CREME96 [2], an easy-to-use deterministic code offering short run times, was also compared with FLUKA predictions and in-flight data. CREME96, though fast and easy to use, has not been updated in several years and underestimates secondary particle shower effects in spacecraft structural shielding mass. Thus, this paper will investigate the use of HZETRN 2010 [3], a fast and easy-to-use deterministic transport code, similar to CREME96, that was developed at NASA Langley Research Center primarily for flight crew ionizing radiation dose assessments. HZETRN 2010 includes updates to address secondary particle shower effects more accurately, and might be used as another tool to verify spacecraft avionics system reliability in space flight SEE environments.

  10. Methods for Intelligent Mapping of the IPV6 Address Space

    DTIC Science & Technology

    2015-03-01

    Due to the rapid growth of the Internet, the available pool of unique ...

  11. Support for User Interfaces for Distributed Systems

    NASA Technical Reports Server (NTRS)

    Eychaner, Glenn; Niessner, Albert

    2005-01-01

    An extensible Java(TradeMark) software framework supports the construction and operation of graphical user interfaces (GUIs) for distributed computing systems typified by ground control systems that send commands to, and receive telemetric data from, spacecraft. Heretofore, such GUIs have been custom built for each new system at considerable expense. In contrast, the present framework affords generic capabilities that can be shared by different distributed systems. Dynamic class loading, reflection, and other run-time capabilities of the Java language and JavaBeans component architecture enable the creation of a GUI for each new distributed computing system with a minimum of custom effort. By use of this framework, GUI components in control panels and menus can send commands to a particular distributed system with a minimum of system-specific code. The framework receives, decodes, processes, and displays telemetry data; custom telemetry data handling can be added for a particular system. The framework supports saving and later restoring users' configurations of control panels and telemetry displays with a minimum of effort in writing system-specific code. GUIs constructed within this framework can be deployed in any operating system with a Java run-time environment, without recompilation or code changes.

  12. Role of Off-Line-of-Sight Propagation in Geomagnetic EMP Formation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kruger, Hans W.

    The author's synchrotron-radiation-based 3D geomagnetic EMP code MACSYNC has been used to explore the impact on pulse rise time and air conductivity of EMP propagation paths to the observer that lie off the direct line-of-sight (LOS) between gamma source and observer. This geometry is always present because, for an isotropic source, most of the gammas are emitted at an angle with respect to the LOS. Computations for a 1 kt near-surface burst observed from space yield two principal findings: 1. The rise time is generated by the combined actions of a) electron spreading along the LOS due to the Compton electron emission angular distribution folded with electron multiple scattering effects, and b) radiation arrival time spreading due to length differences for different off-LOS propagation paths. The pulse rise time does not depend on the rise time of the conductivity; the conductivity rise time determines the pulse amplitude. 2. One-dimensional legacy EMP codes are inherently incapable of producing the correct pulse shape because they cannot treat the dependence of the conductivity on two dimensions, i.e., the radius from the source and the angle of the propagation path with the LOS. This divergence from one-dimensionality begins at a small fraction of a nanosecond for a sea-level burst. This effect will also be present in high-altitude bursts; however, determination of its onset time and magnitude requires high-altitude computations which have not yet been done.

  13. Smart photonic networks and computer security for image data

    NASA Astrophysics Data System (ADS)

    Campello, Jorge; Gill, John T.; Morf, Martin; Flynn, Michael J.

    1998-02-01

    Work reported here is part of a larger project on 'Smart Photonic Networks and Computer Security for Image Data', studying the interactions of coding and security, switching architecture simulations, and basic technologies. Coding and security: coding methods that are appropriate for data security in data fusion networks were investigated. These networks have several characteristics that distinguish them from other currently employed networks, such as Ethernet LANs or the Internet. The most significant characteristics are very high maximum data rates; predominance of image data; narrowcasting - transmission of data from one source to a designated set of receivers; data fusion - combining related data from several sources; and simple sensor nodes with limited buffering. These characteristics affect both the lower-level network design and the higher-level coding methods. Data security encompasses privacy, integrity, reliability, and availability. Privacy, integrity, and reliability can be provided through encryption and coding for error detection and correction. Availability is primarily a network issue; network nodes must be protected against failure or routed around in the case of failure. One of the more promising techniques is the use of 'secret sharing'. We consider this method as a special case of our new space-time code diversity based algorithms for secure communication. These algorithms enable us to exploit parallelism and scalable multiplexing schemes to build photonic network architectures. A number of very high-speed switching and routing architectures and their relationships with very high performance processor architectures were studied. Indications are that routers for very high speed photonic networks can be designed using the very robust and distributed TCP/IP protocol, if suitable processor architecture support is available.
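
    For readers unfamiliar with it, classic (k, n) threshold secret sharing à la Shamir works as follows: the secret is embedded as the constant term of a random degree-(k-1) polynomial over a prime field, shares are point evaluations, and any k shares recover the secret by Lagrange interpolation at zero. The sketch below is the textbook scheme only; the space-time code diversity variant mentioned above is not reproduced here.

        import random

        P = 2**127 - 1   # a Mersenne prime large enough for small secrets

        def make_shares(secret, k, n):
            # Random degree-(k-1) polynomial with the secret as constant term;
            # each share is (x, f(x)) for x = 1..n.
            coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
            def f(x):
                return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
            return [(x, f(x)) for x in range(1, n + 1)]

        def recover(shares):
            # Lagrange interpolation at x = 0 recovers the secret from k shares.
            secret = 0
            for i, (xi, yi) in enumerate(shares):
                num = den = 1
                for j, (xj, _) in enumerate(shares):
                    if i != j:
                        num = num * (-xj) % P
                        den = den * (xi - xj) % P
                secret = (secret + yi * num * pow(den, -1, P)) % P
            return secret

    Any k of the n shares produced by make_shares(secret, k, n) reconstruct the secret via recover(); fewer than k shares reveal nothing about it.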

  14. Overview of NASA Multi-dimensional Stirling Convertor Code Development and Validation Effort

    NASA Technical Reports Server (NTRS)

    Tew, Roy C.; Cairelli, James E.; Ibrahim, Mounir B.; Simon, Terrence W.; Gedeon, David

    2002-01-01

    A NASA grant has been awarded to Cleveland State University (CSU) to develop a multi-dimensional (multi-D) Stirling computer code, with the goals of improving loss predictions and identifying component areas for improvement. The University of Minnesota (UMN) and Gedeon Associates are teamed with CSU. Development of test rigs at UMN and CSU and validation of the code against test data are part of the effort. The one-dimensional (1-D) Stirling codes used for design and performance prediction do not rigorously model regions of the working space where abrupt changes in flow area occur (such as manifolds and other transitions between components). Certain hardware experiences have demonstrated large performance gains by varying manifold and heat exchanger designs to improve flow distributions in the heat exchangers; 1-D codes were not able to predict these performance gains. An accurate multi-D code should improve understanding of the effects of area changes along the main flow axis, the sensitivity of performance to slight changes in internal geometry, and, in general, the understanding of various internal thermodynamic losses. The commercial CFD-ACE code has been chosen for development of the multi-D code. This 2-D/3-D code has highly developed pre- and post-processors and moving boundary capability. Preliminary attempts at validation of CFD-ACE models of the MIT gas spring and "two space" test rigs were encouraging. Also, CSU's simulations of the UMN oscillating-flow rig compare well with flow visualization results from UMN. A complementary Department of Energy (DOE) Regenerator Research effort is aiding in the development of regenerator matrix models that will be used in the multi-D Stirling code. This paper reports on the progress and challenges of this effort.

  15. National Combustion Code Parallel Performance Enhancements

    NASA Technical Reports Server (NTRS)

    Quealy, Angela; Benyo, Theresa (Technical Monitor)

    2002-01-01

    The National Combustion Code (NCC) is being developed by an industry-government team for the design and analysis of combustion systems. The unstructured grid, reacting flow code uses a distributed memory, message passing model for its parallel implementation. The focus of the present effort has been to improve the performance of the NCC code to meet combustor designer requirements for model accuracy and analysis turnaround time. Improving the performance of this code contributes significantly to the overall reduction in time and cost of the combustor design cycle. This report describes recent parallel processing modifications to NCC that have improved the parallel scalability of the code, enabling a two hour turnaround for a 1.3 million element fully reacting combustion simulation on an SGI Origin 2000.

  16. Monte Carlo track structure for radiation biology and space applications

    NASA Technical Reports Server (NTRS)

    Nikjoo, H.; Uehara, S.; Khvostunov, I. G.; Cucinotta, F. A.; Wilson, W. E.; Goodhead, D. T.

    2001-01-01

    Over the past two decades, event-by-event Monte Carlo track structure codes have increasingly been used for biophysical modelling and radiotherapy. The advent of these codes has helped to shed light on many aspects of microdosimetry and the mechanism of damage by ionising radiation in the cell. These codes have continuously been modified to include new, improved cross sections and computational techniques. This paper provides a summary of input data for ionization, excitation and elastic scattering cross sections for event-by-event Monte Carlo track structure simulations for electrons and ions, in the form of parametric equations, which makes it easy to reproduce the data. Stopping power and radial distribution of dose are presented for ions and compared with experimental data. A model is described for simulation of the full slowing down of proton tracks in water in the range 1 keV to 1 MeV. Modelling and calculations are presented for the response of a TEPC proportional counter irradiated with 5 MeV alpha-particles. Distributions are presented for the walled and wall-less counters. The data show the contribution of indirect effects to the lineal energy distribution for the walled counter responses, even at such a low ion energy.
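
    The core loop of an event-by-event code follows directly from the cross sections: the distance to the next interaction is sampled from an exponential distribution with mean free path 1/(N·sigma_total), and the interaction channel is chosen in proportion to its partial cross section. The sketch below uses placeholder cross-section values; real codes tabulate them as functions of the particle energy, as the abstract describes.

        import numpy as np

        rng = np.random.default_rng(1)
        N_WATER = 3.34e22   # molecules per cm^3 in liquid water
        # Placeholder partial cross sections (cm^2 per molecule), not real data:
        channels = {"ionization": 2.0e-17, "excitation": 8.0e-18, "elastic": 1.5e-17}

        def next_event():
            # Sample the free path from the total cross section, then pick the
            # interaction channel by its share of the total.
            sigma_tot = sum(channels.values())
            mfp = 1.0 / (N_WATER * sigma_tot)        # mean free path, cm
            step = -mfp * np.log(rng.uniform())      # exponential path sampling
            r = rng.uniform() * sigma_tot
            for name, sigma in channels.items():
                r -= sigma
                if r <= 0.0:
                    return step, name
            return step, name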

  17. Space Station Freedom electrical performance model

    NASA Technical Reports Server (NTRS)

    Hojnicki, Jeffrey S.; Green, Robert D.; Kerslake, Thomas W.; Mckissock, David B.; Trudell, Jeffrey J.

    1993-01-01

    The baseline Space Station Freedom electric power system (EPS) employs photovoltaic (PV) arrays and nickel hydrogen (NiH2) batteries to supply power to housekeeping and user electrical loads via a direct current (dc) distribution system. The EPS was originally designed for an operating life of 30 years through orbital replacement of components. As the design and development of the EPS continues, accurate EPS performance predictions are needed to assess design options, operating scenarios, and resource allocations. To meet these needs, NASA Lewis Research Center (LeRC) has, over a 10 year period, developed SPACE (Station Power Analysis for Capability Evaluation), a computer code designed to predict EPS performance. This paper describes SPACE, its functionality, and its capabilities.

  18. Benchmark Analysis of Pion Contribution from Galactic Cosmic Rays

    NASA Technical Reports Server (NTRS)

    Aghara, Sukesh K.; Blattnig, Steve R.; Norbury, John W.; Singleterry, Robert C., Jr.

    2008-01-01

    Shielding strategies for extended stays in space must include a comprehensive resolution of the secondary radiation environment inside the spacecraft induced by the primary, external radiation. The distribution of absorbed dose and dose equivalent is a function of the type, energy and population of these secondary products. A systematic verification and validation effort is underway for HZETRN, a space radiation transport code currently used by NASA. It performs neutron, proton and heavy ion transport explicitly, but it does not take into account the production and transport of mesons, photons and leptons. The question naturally arises as to what the contribution of these particles to space radiation is. The pion has a production kinetic energy threshold of about 280 MeV. The galactic cosmic ray (GCR) spectra, coincidentally, reach flux maxima in the hundreds of MeV range, corresponding to the pion production threshold. We present results from the Monte Carlo code MCNPX, showing the effect of lepton and meson physics when produced and transported explicitly in a GCR environment.

  19. Validation of hydrogen gas stratification and mixing models

    DOE PAGES

    Wu, Hsingtzu; Zhao, Haihua

    2015-05-26

    Two validation benchmarks confirm that the BMIX++ code is capable of simulating unintended hydrogen release scenarios efficiently. The BMIX++ (UC Berkeley mechanistic MIXing code in C++) code has been developed to accurately and efficiently predict the fluid mixture distribution and heat transfer in large stratified enclosures for accident analyses and design optimizations. The BMIX++ code uses a scaling-based one-dimensional method to achieve a large reduction in computational effort compared to a 3-D computational fluid dynamics (CFD) simulation. Two BMIX++ benchmark models have been developed: one for a single buoyant jet in an open space and another for a large sealed enclosure with both a jet source and a vent near the floor. Both have been validated by comparisons with experimental data, and excellent agreement is observed. Entrainment coefficients of 0.09 and 0.08 are found to best fit the experimental data for hydrogen leaks with Froude numbers of 99 and 268, respectively. In addition, the BMIX++ simulation results for the average helium concentration in an enclosure with a vent and a single jet agree with the experimental data within a margin of about 10% for jet flow rates ranging from 1.21 × 10⁻⁴ to 3.29 × 10⁻⁴ m³/s. Computing time for each BMIX++ model on a normal desktop computer is less than 5 min.
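
    The scaling-based one-dimensional treatment of a buoyant jet can be illustrated with the textbook Morton-Taylor-Turner integral equations under a constant entrainment coefficient, integrated upward with forward Euler. This is a generic sketch of the model class, not BMIX++'s actual formulation, and the initial flux values are invented.

        import numpy as np

        alpha = 0.08                         # entrainment coefficient (cf. 0.08-0.09 above)
        Q, M, F = 1.0e-4, 1.0e-4, 5.0e-6     # volume, momentum, buoyancy fluxes (illustrative)
        dz, z_max = 0.01, 3.0                # integration step and height, m
        for z in np.arange(0.0, z_max, dz):  # forward-Euler integration upward
            dQ = 2.0 * alpha * np.sqrt(np.pi * M) * dz   # Taylor entrainment closure
            dM = (F * Q / M) * dz                        # buoyancy accelerates the jet
            Q, M = Q + dQ, M + dM
        radius = Q / np.sqrt(np.pi * M)      # top-hat jet radius at z_max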

  20. Analysis of electrophoresis performance

    NASA Technical Reports Server (NTRS)

    Roberts, G. O.

    1984-01-01

    The SAMPLE computer code models electrophoresis separation in a wide range of conditions. Results are included for steady three-dimensional continuous flow electrophoresis (CFE), time-dependent gel and acetate film experiments in one or two dimensions, and isoelectric focusing in one dimension. The code evolves N two-dimensional radical concentration distributions in time, or in distance down a CFE chamber. For each time or distance increment, there are six stages, successively obtaining the pH distribution, the corresponding degrees of ionization for each radical, the conductivity, the electric field and current distribution, and the flux components in each direction for each separate radical. The final stage is to update the radical concentrations. The model formulation for ion motion in an electric field ignores activity effects and is valid only for low concentrations; for larger concentrations the conductivity is, therefore, also invalid.
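
    The per-increment structure described above translates directly into a solver loop. The sketch below mirrors only the six-stage control flow; every physical relation in it is a deliberately simplified toy placeholder invented here, since the SAMPLE internals are not given.

        import numpy as np

        def advance_increment(C, dt, dx, E0=1.0, mobility=1.0):
            # C has shape (N, ny, nx): N radical concentration fields on a grid.
            total = np.sum(C, axis=0)
            pH = 7.0 - np.log10(1e-7 + total)          # stage 1: pH distribution (toy)
            alpha = 1.0 / (1.0 + 10**(pH - 7.0))       # stage 2: ionization degrees (toy)
            sigma = 1e-3 + total * alpha               # stage 3: conductivity (toy)
            E = E0 / sigma                             # stage 4: field from current continuity (toy)
            C_new = []
            for c in C:                                # stages 5-6: fluxes, then update
                flux = mobility * alpha * c * E        # electromigration flux along x
                div = (np.roll(flux, -1, axis=1) - np.roll(flux, 1, axis=1)) / (2 * dx)
                C_new.append(c - dt * div)
            return np.array(C_new)

        C0 = np.full((2, 16, 32), 1e-4)                # two species on a 16x32 grid
        C1 = advance_increment(C0, dt=0.1, dx=0.5)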

  1. Using the FLUKA Monte Carlo Code to Simulate the Interactions of Ionizing Radiation with Matter to Assist and Aid Our Understanding of Ground Based Accelerator Testing, Space Hardware Design, and Secondary Space Radiation Environments

    NASA Technical Reports Server (NTRS)

    Reddell, Brandon

    2015-01-01

    Designing hardware to operate in the space radiation environment is a very difficult and costly activity. Ground-based particle accelerators can be used to test for exposure to the radiation environment, one species at a time; however, the actual space environment cannot be duplicated because of the range of energies and the isotropic nature of space radiation. The FLUKA Monte Carlo code is an integrated physics package based at CERN that has been under development for the last 40+ years and includes the most up-to-date fundamental physics theory and particle physics data. This work presents an overview of FLUKA and how it has been used in conjunction with ground-based radiation testing for NASA to improve our understanding of secondary particle environments resulting from the interaction of space radiation with matter.

  2. Pseudo-Newtonian Equations for Evolution of Particles and Fluids in Stationary Space-times

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Witzany, Vojtěch; Lämmerzahl, Claus, E-mail: vojtech.witzany@zarm.uni-bremen.de, E-mail: claus.laemmerzahl@zarm.uni-bremen.de

    Pseudo-Newtonian potentials are a tool often used in theoretical astrophysics to capture some key features of a black hole space-time in a Newtonian framework. As a result, one can use Newtonian numerical codes, and Newtonian formalism in general, in an effective description of important astrophysical processes such as accretion onto black holes. In this paper, we develop a general pseudo-Newtonian formalism which pertains to the motion of particles, light, and fluids in stationary space-times. In turn, we are able to assess the applicability of the pseudo-Newtonian scheme. The simplest and most elegant formulas are obtained in space-times without gravitomagnetic effects, such as the Schwarzschild rather than the Kerr space-time; the quantitative errors are smallest for motion with low binding energy. Included is a ready-to-use set of fluid equations in Schwarzschild space-time in Cartesian and radial coordinates.
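
    The archetype of such potentials is the Paczynski-Wiita potential Phi(r) = -GM/(r - r_s), which reproduces the location of the Schwarzschild innermost stable circular orbit exactly. The sketch below integrates a precessing test-particle orbit in it with a kick-drift-kick leapfrog, in G = c = 1 units; the initial conditions are illustrative, and the paper's formalism is more general than this single example.

        import numpy as np

        GM = 1.0
        rs = 2.0 * GM            # Schwarzschild radius in G = c = 1 units

        def accel(x):
            # Acceleration from the Paczynski-Wiita potential -GM/(r - r_s).
            r = np.linalg.norm(x)
            return -GM / (r - rs)**2 * x / r

        x = np.array([10.0, 0.0])
        v = np.array([0.0, 0.38])             # sub-circular speed: precessing orbit
        dt, steps = 0.05, 40000
        traj = np.empty((steps, 2))
        for i in range(steps):                # leapfrog (kick-drift-kick)
            v += 0.5 * dt * accel(x)
            x += dt * v
            v += 0.5 * dt * accel(x)
            traj[i] = x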

  3. Preprocessing for Eddy Dissipation Rate and TKE Profile Generation

    NASA Technical Reports Server (NTRS)

    Zak, J. Allen; Rodgers, William G., Jr.; McKissick, Burnell T. (Technical Monitor)

    2001-01-01

    The Aircraft Vortex Spacing System (AVOSS), a set of algorithms to determine aircraft spacing according to wake vortex behavior prediction, requires turbulence profiles to appropriately determine arrival and departure aircraft spacing. The ambient atmospheric turbulence profile must always be produced, even if the result is an arbitrary (canned) profile. The original turbulence profile code was generated by North Carolina State University and was used in a non-real-time environment, where all the input parameters could be carefully selected and screened prior to input. Since this code must run in real time using actual measurements in the field as input, it became imperative to add a data checking and screening process as part of the real-time implementation. The process described herein is a step towards ensuring that the best possible turbulence profile is always provided to AVOSS. Data fill-ins, constant profiles, and arbitrary profiles are used only as a last resort, but are essential to ensure uninterrupted application of AVOSS.
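
    A minimal sketch of such screening-with-fallback logic (Python; the validity limits, sparsity threshold, and canned value are invented for illustration, not the AVOSS values):

        import numpy as np

        CANNED_EDR = 1.0e-4   # arbitrary last-resort profile value (assumption)

        def screen_edr_profile(z, edr, lo=1.0e-8, hi=1.0e-1):
            """Reject non-physical levels, fill gaps by interpolation, and fall
            back to a constant canned profile if too few levels survive."""
            edr = np.asarray(edr, dtype=float)
            good = np.isfinite(edr) & (edr > lo) & (edr < hi)
            if good.sum() < 0.5 * edr.size:                      # data too sparse
                return np.full(edr.shape, CANNED_EDR)            # arbitrary profile
            return np.interp(z, np.asarray(z)[good], edr[good])  # data fill-ins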

  4. Approach to Integrate Global-Sun Models of Magnetic Flux Emergence and Transport for Space Weather Studies

    NASA Technical Reports Server (NTRS)

    Mansour, Nagi N.; Wray, Alan A.; Mehrotra, Piyush; Henney, Carl; Arge, Nick; Godinez, H.; Manchester, Ward; Koller, J.; Kosovichev, A.; Scherrer, P.

    2013-01-01

    The Sun lies at the center of space weather and is the source of its variability. The primary input to coronal and solar wind models is the activity of the magnetic field in the solar photosphere. Recent advancements in solar observations and numerical simulations provide a basis for developing physics-based models for the dynamics of the magnetic field from the deep convection zone of the Sun to the corona with the goal of providing robust near real-time boundary conditions at the base of space weather forecast models. The goal is to develop new strategic capabilities that enable characterization and prediction of the magnetic field structure and flow dynamics of the Sun by assimilating data from helioseismology and magnetic field observations into physics-based realistic magnetohydrodynamics (MHD) simulations. The integration of first-principle modeling of solar magnetism and flow dynamics with real-time observational data via advanced data assimilation methods is a new, transformative step in space weather research and prediction. This approach will substantially enhance an existing model of magnetic flux distribution and transport developed by the Air Force Research Lab. The development plan is to use the Space Weather Modeling Framework (SWMF) to develop Coupled Models for Emerging flux Simulations (CMES) that couples three existing models: (1) an MHD formulation with the anelastic approximation to simulate the deep convection zone (FSAM code), (2) an MHD formulation with full compressible Navier-Stokes equations and a detailed description of radiative transfer and thermodynamics to simulate near-surface convection and the photosphere (Stagger code), and (3) an MHD formulation with full, compressible Navier-Stokes equations and an approximate description of radiative transfer and heating to simulate the corona (Module in BATS-R-US). CMES will enable simulations of the emergence of magnetic structures from the deep convection zone to the corona. Finally, a plan will be summarized on the development of a Flux Emergence Prediction Tool (FEPT) in which helioseismology-derived data and vector magnetic maps are assimilated into CMES that couples the dynamics of magnetic flux from the deep interior to the corona.

  5. Aozan: an automated post-sequencing data-processing pipeline.

    PubMed

    Perrin, Sandrine; Firmo, Cyril; Lemoine, Sophie; Le Crom, Stéphane; Jourdren, Laurent

    2017-07-15

    Data management and quality control of output from Illumina sequencers is a disk space- and time-consuming task. Thus, we developed Aozan to automatically handle data transfer, demultiplexing, conversion and quality control once a run has finished. This software greatly improves run data management and the monitoring of run statistics via automatic emails and HTML web reports. Aozan is implemented in Java and Python, supported on Linux systems, and distributed under the GPLv3 License at: http://www.outils.genomique.biologie.ens.fr/aozan/ . Aozan source code is available on GitHub: https://github.com/GenomicParisCentre/aozan . aozan@biologie.ens.fr.

  6. THERMINATOR: THERMal heavy-IoN generATOR

    NASA Astrophysics Data System (ADS)

    Kisiel, Adam; Tałuć, Tomasz; Broniowski, Wojciech; Florkowski, Wojciech

    2006-04-01

    THERMINATOR is a Monte Carlo event generator designed for studying particle production in relativistic heavy-ion collisions performed at such experimental facilities as the SPS, RHIC, or LHC. The program implements thermal models of particle production with single freeze-out. It performs the following tasks: (1) generation of stable particles and unstable resonances at the chosen freeze-out hypersurface with the local phase-space density of particles given by the statistical distribution factors, (2) subsequent space-time evolution and decays of hadronic resonances in cascades, (3) calculation of the transverse-momentum spectra and numerous other observables related to the space-time evolution. The geometry of the freeze-out hypersurface and the collective velocity of expansion may be chosen from two successful models, the Cracow single-freeze-out model and the Blast-Wave model. All particles from the Particle Data Tables are used. The code is written in the object-oriented c++ language and complies with the standards of the ROOT environment. Program summary Program title: THERMINATOR Catalogue identifier: ADXL_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADXL_v1_0 Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland RAM required to execute with typical data: 50 Mbytes Number of processors used: 1 Computer(s) for which the program has been designed: PC, Pentium III, IV, or Athlon, 512 MB RAM; not hardware dependent (any computer with the c++ compiler and the ROOT environment [R. Brun, F. Rademakers, Nucl. Instrum. Methods A 389 (1997) 81, http://root.cern.ch]) Operating system(s) for which the program has been designed: Linux: Mandrake 9.0, Debian 3.0, SuSE 9.0, Red Hat FEDORA 3, etc., Windows XP with Cygwin ver. 1.5.13-1 and gcc ver. 3.3.3 (cygwin special); not system dependent External routines/libraries used: ROOT ver. 4.02.00 Programming language: c++ Size of the package: 324 KB directory, 40 KB compressed distribution archive, without the ROOT libraries (see http://root.cern.ch for details on the ROOT requirements). The output files created by the code need 1.1 GB for each 500 events. Distribution format: tar gzip file Number of lines in distributed program, including test data, etc.: 6534 Number of bytes in distributed program, including test data, etc.: 41 828 Nature of the physical problem: Statistical models have proved to be very useful in the description of soft physics in relativistic heavy-ion collisions [P. Braun-Munzinger, K. Redlich, J. Stachel, 2003, nucl-th/0304013. [2
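
    The generation step (1) can be caricatured in a few lines: in the Boltzmann limit of the statistical distribution factors, the magnitude of a hadron's momentum follows f(p) ∝ p² exp(-√(p² + m²)/T). The sketch below (Python rather than THERMINATOR's c++, with illustrative pion values; no collective flow or hypersurface geometry) rejection-samples that distribution:

        import numpy as np

        def sample_thermal_p(m, T, n, rng=None):
            """Draw n values of |p| from p^2 * exp(-sqrt(p^2 + m^2)/T) (GeV units)."""
            rng = rng or np.random.default_rng(0)
            p_cut = m + 20.0 * T                       # truncate the exponential tail
            grid = np.linspace(0.0, p_cut, 4000)
            f_max = (grid**2 * np.exp(-np.hypot(grid, m) / T)).max()
            out = []
            while len(out) < n:                        # accept/reject loop
                p = rng.uniform(0.0, p_cut, n)
                f = p**2 * np.exp(-np.hypot(p, m) / T)
                out.extend(p[rng.uniform(0.0, f_max, n) < f])
            return np.array(out[:n])

        momenta = sample_thermal_p(m=0.139, T=0.165, n=100000)  # pions, T ~ 165 MeV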

  7. Link monitor and control operator assistant: A prototype demonstrating semiautomated monitor and control

    NASA Technical Reports Server (NTRS)

    Lee, L. F.; Cooper, L. P.

    1993-01-01

    This article describes the approach, results, and lessons learned from an applied research project demonstrating how artificial intelligence (AI) technology can be used to improve Deep Space Network operations. Configuring antenna and associated equipment necessary to support a communications link is a time-consuming process. The time spent configuring the equipment is essentially overhead and results in reduced time for actual mission support operations. The NASA Office of Space Communications (Code O) and the NASA Office of Advanced Concepts and Technology (Code C) jointly funded an applied research project to investigate technologies which can be used to reduce configuration time. This resulted in the development and application of AI-based automated operations technology in a prototype system, the Link Monitor and Control Operator Assistant (LMC OA). The LMC OA was tested over the course of three months in a parallel experimental mode on very long baseline interferometry (VLBI) operations at the Goldstone Deep Space Communications Center. The tests demonstrated a 44 percent reduction in pre-calibration time for a VLBI pass on the 70-m antenna. Currently, this technology is being developed further under Research and Technology Operating Plan (RTOP)-72 to demonstrate the applicability of the technology to operations in the entire Deep Space Network.

  8. The MATROSHKA experiment: results and comparison from extravehicular activity (MTR-1) and intravehicular activity (MTR-2A/2B) exposure.

    PubMed

    Berger, Thomas; Bilski, Paweł; Hajek, Michael; Puchalska, Monika; Reitz, Günther

    2013-12-01

    Astronauts working and living in space are exposed to considerably higher doses and different qualities of ionizing radiation than people on Earth. The multilateral MATROSHKA (MTR) experiment, coordinated by the German Aerospace Center, represents the most comprehensive effort to date in radiation protection dosimetry in space, using an anthropomorphic upper-torso phantom of the kind employed for radiotherapy treatment planning. Installed outside (MTR-1) and inside different compartments (MTR-2A: Pirs; MTR-2B: Zvezda) of the Russian Segment of the International Space Station, the phantom maps the radiation distribution in a simulated human body. Thermoluminescence dosimeters arranged in a 2.54 cm orthogonal grid, at the sites of vital organs, and on the surface of the phantom allow for visualization of the absorbed dose distribution with superior spatial resolution. These results should help improve the estimation of radiation risks for long-term human space exploration and support benchmarking of radiation transport codes.

  9. Creation of a United States Emergency Medical Services Administration Within the Department of Homeland Security

    DTIC Science & Technology

    2012-03-01

    Emergency medical services personnel are critical resources ... equipment in times of duress. Resources must be available to distribute and utilize in times of need. FICEMS and the Office of Emergency Medical... Approved for public release; distribution is unlimited.

  10. A Unified Mathematical Framework for Coding Time, Space, and Sequences in the Hippocampal Region

    PubMed Central

    MacDonald, Christopher J.; Tiganj, Zoran; Shankar, Karthik H.; Du, Qian; Hasselmo, Michael E.; Eichenbaum, Howard

    2014-01-01

    The medial temporal lobe (MTL) is believed to support episodic memory, vivid recollection of a specific event situated in a particular place at a particular time. There is ample neurophysiological evidence that the MTL computes location in allocentric space and more recent evidence that the MTL also codes for time. Space and time represent a similar computational challenge; both are variables that cannot be simply calculated from the immediately available sensory information. We introduce a simple mathematical framework that computes functions of both spatial location and time as special cases of a more general computation. In this framework, experience unfolding in time is encoded via a set of leaky integrators. These leaky integrators encode the Laplace transform of their input. The information contained in the transform can be recovered using an approximation to the inverse Laplace transform. In the temporal domain, the resulting representation reconstructs the temporal history. By integrating movements, the equations give rise to a representation of the path taken to arrive at the present location. By modulating the transform with information about allocentric velocity, the equations code for position of a landmark. Simulated cells show a close correspondence to neurons observed in various regions for all three cases. In the temporal domain, novel secondary analyses of hippocampal time cells verified several qualitative predictions of the model. An integrated representation of spatiotemporal context can be computed by taking conjunctions of these elemental inputs, leading to a correspondence with conjunctive neural representations observed in dorsal CA1. PMID:24672015
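
    The core computation can be sketched compactly. A bank of leaky integrators dF/dt = -sF + f(t) holds the Laplace transform F(s) of the input; Post's formula, f̃(τ) ≈ [(-1)^k / k!] s^(k+1) F^(k)(s) evaluated at s = k/τ, approximately inverts it. The toy Python version below uses crude Euler integration and finite-difference derivatives, unlike the analytic operators of the paper, to reconstruct a stimulus history:

        import math
        import numpy as np

        def temporal_history(signal, dt, s, k=4):
            """signal: input samples; s: decay rates of the integrator bank.
            Returns (tau, f_tilde): reconstructed history at delays tau = k/s."""
            F = np.zeros_like(s)
            for x in signal:                 # leaky integrators: dF/dt = -s*F + x
                F += dt * (-s * F + x)
            dF = F.copy()
            for _ in range(k):               # k-th derivative with respect to s
                dF = np.gradient(dF, s)
            f_tilde = ((-1) ** k / math.factorial(k)) * s ** (k + 1) * dF
            return k / s, f_tilde

        s = np.geomspace(0.5, 50.0, 200)     # logarithmically spaced rates
        pulse = np.zeros(1000); pulse[800] = 1.0
        tau, f_tilde = temporal_history(pulse, dt=0.01, s=s)  # f_tilde peaks near tau ~ 2

    The reconstructed bump is smeared in proportion to its delay, matching the scale-invariant blur of the "time cells" discussed in the paper.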

  11. Fluid Distribution for In-space Cryogenic Propulsion

    NASA Technical Reports Server (NTRS)

    Lear, William

    2005-01-01

    The ultimate goal of this task is to enable the use of a single supply of cryogenic propellants for three distinct spacecraft propulsion missions: main propulsion, orbital maneuvering, and attitude control. A fluid distribution system is sought which allows large propellant flows during the first two missions while still allowing control of small propellant flows during attitude control. Existing research has identified the probable benefits of a combined thermal management/power/fluid distribution system based on the Solar Integrated Thermal Management and Power (SITMAP) cycle. Both a numerical model and an experimental model are constructed in order to predict the performance of such an integrated thermal management/propulsion system. This research task provides a numerical model and an experimental apparatus which will simulate an integrated thermal/power/fluid management system based on the SITMAP cycle, and assess its feasibility for various space missions. Various modifications are made to the cycle, such as the addition of a regeneration process that allows heat to be transferred into the working fluid prior to the solar collector, thereby reducing the collector size and weight. Fabri choking analysis was also accounted for. Finally, the cycle is to be optimized for various space missions based on a mass-based figure of merit, namely the System Mass Ratio (SMR). The theoretical and experimental results from these models are used to develop a design code (the JETSIT code), which is able to provide design parameters for such a system over a range of cooling loads, power generation, and attitude control thrust levels. The performance gains and mass savings will be compared to those of existing spacecraft systems.

  12. Rates of profit as correlated sums of random variables

    NASA Astrophysics Data System (ADS)

    Greenblatt, R. E.

    2013-10-01

    Profit realization is the dominant feature of market-based economic systems, determining their dynamics to a large extent. Rather than attaining an equilibrium, profit rates vary widely across firms, and the variation persists over time. Differing definitions of profit result in differing empirical distributions. To study the statistical properties of profit rates, I used data from a publicly available database for the US Economy for 2009-2010 (Risk Management Association). For each of three profit rate measures, the sample space consists of 771 points. Each point represents aggregate data from a small number of US manufacturing firms of similar size and type (NAICS code of principal product). When comparing the empirical distributions of profit rates, significant ‘heavy tails’ were observed, corresponding principally to a number of firms with larger profit rates than would be expected from simple models. An apparently novel correlated sum of random variables statistical model was used to model the data. In the case of operating and net profit rates, a number of firms show negative profits (losses), ruling out simple gamma or lognormal distributions as complete models for these data.
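
    The qualitative mechanism, summands that share a common factor producing heavier-than-Gaussian tails in the aggregate, can be demonstrated in a few lines (Python; a generic scale-mixture illustration, not the paper's fitted model):

        import numpy as np

        rng = np.random.default_rng(1)

        def profit_rates(n_points=771, n_terms=50, spread=0.0):
            """Each point is a mean of n_terms shocks; spread > 0 gives the
            points a shared random scale, correlating the summands."""
            scale = np.exp(rng.normal(0.0, spread, size=(n_points, 1)))
            return (scale * rng.normal(size=(n_points, n_terms))).mean(axis=1)

        for spread in (0.0, 0.8):
            x = profit_rates(spread=spread)
            z = (x - x.mean()) / x.std()
            print(spread, np.mean(z**4) - 3.0)   # excess kurtosis: ~0, then > 0

    With spread = 0 the aggregate is effectively Gaussian; introducing the shared scale factor produces the positive excess kurtosis, i.e. the heavy tails, observed in the empirical profit-rate data.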

  13. Guidelines and Metrics for Assessing Space System Cost Estimates

    DTIC Science & Technology

    2008-01-01

    analysis time, reuse tooling, models, mechanical ground-support equipment [MGSE]); high mass margin (simplifying assumptions used to bound solution ... engineering environment changes); high reuse of architecture, design, tools, code, test scripts, and commercial real-time operating systems; simplified life...

  14. Experimental characterization of an ultra-fast Thomson scattering x-ray source with three-dimensional time and frequency-domain analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kuba, J; Slaughter, D R; Fittinghoff, D N

    We present a detailed comparison of the measured characteristics of Thomson backscattered x-rays produced at the PLEIADES (Picosecond Laser-Electron Interaction for the Dynamic Evaluation of Structures) facility at Lawrence Livermore National Laboratory to predicted results from a newly developed, fully three-dimensional time and frequency-domain code. Based on the relativistic differential cross section, this code has the capability to calculate time and space dependent spectra of the x-ray photons produced from linear Thomson scattering for both bandwidth-limited and chirped incident laser pulses. Spectral broadening of the scattered x-ray pulse resulting from the incident laser bandwidth, perpendicular wave vector components in the laser focus, and the transverse and longitudinal phase space of the electron beam are included. Electron beam energy, energy spread, and transverse phase space measurements of the electron beam at the interaction point are presented, and the corresponding predicted x-ray characteristics are determined. In addition, time-integrated measurements of the x-rays produced from the interaction are presented, and shown to agree well with the simulations.
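
    For orientation, the scattered photon energy follows from the relativistic Doppler upshift: for a head-on collision in the linear Thomson regime, E_x ≈ 4γ²E_L / (1 + γ²θ²). A one-line check (Python; the beam and laser values are round illustrative numbers, not the measured PLEIADES parameters):

        def backscatter_keV(E_beam_MeV, E_laser_eV, theta=0.0):
            """On-axis Thomson backscatter energy, linear (low a0) regime."""
            gamma = E_beam_MeV / 0.511
            return 4.0 * gamma**2 * E_laser_eV / (1.0 + (gamma * theta) ** 2) / 1e3

        print(backscatter_keV(57.0, 1.55))  # ~57 MeV beam on an 800 nm laser -> ~77 keV

    The θ-dependence in the denominator is one of the broadening mechanisms the code resolves: photons collected off-axis are red-shifted relative to the on-axis energy.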

  15. Solid State Light Evaluation in the U.S. Lab Mockup

    NASA Technical Reports Server (NTRS)

    Maida, James C.; Bowen, Charles K.; Wheelwright, Chuck

    2009-01-01

    This document constitutes the publication of work performed by the Space Human Factors Laboratory (mail code SF5 at the time) at the Johnson Space Center (JSC) in June and July of 2000. At that time, the Space Human Factors Laboratory was part of the Space Human Factors Branch in the Flight Projects Division of the Space and Life Directorate. This report was originally to be a document for internal consumption only at JSC, as it was seen to be only preliminary work for the further development of solid state illumination for general lighting on future space vehicles and the International Space Station (ISS). Due to funding constraints, immediate follow-on efforts were delayed and the need for publication of this document was overtaken by other events. However, in recent years, with the development and deployment of a solid state light luminaire prototype on ISS, the time was overdue for publishing this information for general distribution and reference. Solid state lights (SSLs) are being developed to potentially replace the general luminaire assemblies (GLAs) currently in service on the International Space Station (ISS) and included in designs of modules for the ISS. The SSLs consist of arrays of light emitting diodes (LEDs), small solid state electronic devices that produce visible light in proportion to the electrical current flowing through them. Recent progressive advances in electrical power-to-light conversion efficiency in LED technology have allowed the consideration of LEDs as replacements for incandescent and fluorescent light sources in many circumstances, and their inherent advantages in ruggedness, reliability, and life expectancy make them attractive for applications in spacecraft. One potential area of application for the SSLs is the U.S. Laboratory Module of the ISS. This study addresses the suitability of the SSLs as replacements for the GLAs in this application.

  16. Space shuttle main engine numerical modeling code modifications and analysis

    NASA Technical Reports Server (NTRS)

    Ziebarth, John P.

    1988-01-01

    The user of computational fluid dynamics (CFD) codes must be concerned with the accuracy and efficiency of the codes if they are to be used for timely design and analysis of complicated three-dimensional fluid flow configurations. A brief discussion of how accuracy and efficiency affect the CFD solution process is given. A more detailed discussion of how efficiency can be enhanced by using a few Cray Research Inc. utilities to address vectorization is presented, and these utilities are applied to a three-dimensional Navier-Stokes CFD code (INS3D).

  17. NEWTPOIS- NEWTON POISSON DISTRIBUTION PROGRAM

    NASA Technical Reports Server (NTRS)

    Bowerman, P. N.

    1994-01-01

    The cumulative poisson distribution program, NEWTPOIS, is one of two programs which make calculations involving cumulative poisson distributions. Both programs, NEWTPOIS (NPO-17715) and CUMPOIS (NPO-17714), can be used independently of one another. NEWTPOIS determines percentiles for gamma distributions with integer shape parameters and calculates percentiles for chi-square distributions with even degrees of freedom. It can be used by statisticians and others concerned with probabilities of independent events occurring over specific units of time, area, or volume. NEWTPOIS determines the Poisson parameter (lambda), that is, the mean (or expected) number of events occurring in a given unit of time, area, or space. Given that the user already knows the cumulative probability for a specific number of occurrences (n), it is usually a simple matter of substitution into the Poisson distribution summation to arrive at lambda. However, direct calculation of the Poisson parameter becomes difficult for small positive values of n and unmanageable for large values. NEWTPOIS uses Newton's iteration method to extract lambda from the initial value condition of the Poisson distribution where n=0, taking successive estimations until some user specified error term (epsilon) is reached. The NEWTPOIS program is written in C. It was developed on an IBM AT with a numeric co-processor using Microsoft C 5.0. Because the source code is written using standard C structures and functions, it should compile correctly on most C compilers. The program format is interactive, accepting epsilon, n, and the cumulative probability of the occurrence of n as inputs. It has been implemented under DOS 3.2 and has a memory requirement of 30K. NEWTPOIS was developed in 1988.
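
    The iteration itself is short. Using the identity dP(X ≤ n; λ)/dλ = -pmf(n; λ), a Newton step reads λ ← λ + [CDF(λ) - p]/pmf(n; λ). A compact Python variant of that scheme (it iterates on the full CDF directly, rather than starting from the n=0 initial condition as the original C program does):

        import math

        def newtpois(n, p_target, eps=1e-10):
            """Solve P(X <= n; lam) = p_target for the Poisson parameter lam."""
            lam = max(float(n), 1.0)              # sensible starting guess
            for _ in range(200):
                terms = [math.exp(-lam) * lam**k / math.factorial(k)
                         for k in range(n + 1)]
                cdf, pmf_n = sum(terms), terms[-1]
                step = (cdf - p_target) / pmf_n   # dCDF/dlam = -pmf(n; lam)
                lam = max(lam + step, 1e-12)      # keep lam positive
                if abs(step) < eps:
                    return lam
            return lam

        print(newtpois(10, 0.95))  # lam such that 10 or fewer events occur 95% of the time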

  18. Characterising RNA secondary structure space using information entropy

    PubMed Central

    2013-01-01

    Comparative methods for RNA secondary structure prediction use evolutionary information from RNA alignments to increase prediction accuracy. The model is often described in terms of stochastic context-free grammars (SCFGs), which generate a probability distribution over secondary structures. It is, however, unclear how this probability distribution changes as a function of the input alignment. As prediction programs typically only return a single secondary structure, better characterisation of the underlying probability space of RNA secondary structures is of great interest. In this work, we show how to efficiently compute the information entropy of the probability distribution over RNA secondary structures produced for RNA alignments by a phylo-SCFG, and implement it for the PPfold model. We also discuss interpretations and applications of this quantity, including how it can clarify reasons for low prediction reliability scores. PPfold and its source code are available from http://birc.au.dk/software/ppfold/. PMID:23368905
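
    For a small, explicitly enumerated ensemble the quantity in question is simply the Shannon entropy of the structure probabilities; the point of the paper is computing it efficiently over the exponentially large ensemble implied by the SCFG, via dynamic programming rather than enumeration. A toy version (Python):

        import math

        def entropy_bits(probs):
            """H = -sum p*log2(p) over a distribution of secondary structures."""
            return -sum(p * math.log2(p) for p in probs if p > 0.0)

        print(entropy_bits([0.97, 0.01, 0.01, 0.01]))  # peaked ensemble: ~0.24 bits
        print(entropy_bits([0.25] * 4))                # flat ensemble: 2.0 bits

    A high entropy flags alignments for which the single returned structure is only one of many comparably probable candidates, i.e. a low-reliability prediction.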

  19. GERMcode: A Stochastic Model for Space Radiation Risk Assessment

    NASA Technical Reports Server (NTRS)

    Kim, Myung-Hee Y.; Ponomarev, Artem L.; Cucinotta, Francis A.

    2012-01-01

    A new computer model, the GCR Event-based Risk Model code (GERMcode), was developed to describe biophysical events from high-energy protons and high charge and energy (HZE) particles that have been studied at the NASA Space Radiation Laboratory (NSRL) for the purpose of simulating space radiation biological effects. In the GERMcode, the biophysical description of the passage of HZE particles in tissue and shielding materials is made with a stochastic approach that includes both particle track structure and nuclear interactions. The GERMcode accounts for the major nuclear interaction processes of importance for describing heavy ion beams, including nuclear fragmentation, elastic scattering, and knockout-cascade processes by using the quantum multiple scattering fragmentation (QMSFRG) model. The QMSFRG model has been shown to be in excellent agreement with available experimental data for nuclear fragmentation cross sections. For NSRL applications, the GERMcode evaluates a set of biophysical properties, such as the Poisson distribution of particles or delta-ray hits for a given cellular area and particle dose, the radial dose on tissue, and the frequency distribution of energy deposition in a DNA volume. By utilizing the ProE/Fishbowl ray-tracing analysis, the GERMcode will be used as a bi-directional radiation transport model for future spacecraft shielding analysis in support of Mars mission risk assessments. Recent radiobiological experiments suggest the need for new approaches to risk assessment that include time-dependent biological events due to the signaling times for activation and relaxation of biological processes in cells and tissue. Thus, the tracking of the temporal and spatial distribution of events in tissue is a major goal of the GERMcode in support of the simulation of biological processes important in GCR risk assessments. In order to validate our approach, basic radiobiological responses such as cell survival curves, mutation, chromosomal aberrations, and representative mouse tumor induction curves are implemented into the GERMcode. Extension of these descriptions to other endpoints related to non-targeted effects and biochemical pathway responses will be discussed.
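
    One of the listed biophysical quantities is easy to state exactly: for fluence F and cross-sectional area A, the number of primary-particle traversals per cell is Poisson-distributed with mean F·A. A quick check of that bookkeeping (Python; illustrative beam values, not an actual NSRL configuration):

        import numpy as np

        def hits_per_cell(fluence_cm2, area_um2, n_cells=100000, seed=0):
            """Sample traversal counts; mean hits = fluence * area."""
            mean_hits = fluence_cm2 * area_um2 * 1.0e-8   # um^2 -> cm^2
            return np.random.default_rng(seed).poisson(mean_hits, n_cells)

        hits = hits_per_cell(1.0e7, 100.0)      # 1e7 ions/cm^2 through 100 um^2 nuclei
        print(hits.mean(), (hits == 0).mean())  # ~10 hits/cell; fraction of unhit cells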

  20. Phase space simulation of collisionless stellar systems on the massively parallel processor

    NASA Technical Reports Server (NTRS)

    White, Richard L.

    1987-01-01

    A numerical technique for solving the collisionless Boltzmann equation describing the time evolution of a self gravitating fluid in phase space was implemented on the Massively Parallel Processor (MPP). The code performs calculations for a two dimensional phase space grid (with one space and one velocity dimension). Some results from calculations are presented. The execution speed of the code is comparable to the speed of a single processor of a Cray-XMP. Advantages and disadvantages of the MPP architecture for this type of problem are discussed. The nearest neighbor connectivity of the MPP array does not pose a significant obstacle. Future MPP-like machines should have much more local memory and easier access to staging memory and disks in order to be effective for this type of problem.
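
    The abstract does not spell out the numerical scheme, but the standard approach for this problem is Cheng-Knorr operator splitting: advect in x at fixed v, then in v at fixed x. A minimal sketch on a periodic (x, v) grid (Python with linear interpolation; a production code would use a higher-order interpolant):

        import numpy as np

        def vlasov_step(f, x, v, accel, dt, L):
            """One split step of df/dt + v df/dx + a(x) df/dv = 0, f on grid [x, v]."""
            for j, vj in enumerate(v):                    # drift in x (periodic)
                f[:, j] = np.interp(x - vj * dt, x, f[:, j], period=L)
            for i, ai in enumerate(accel):                # kick in v (f -> 0 at edges)
                f[i, :] = np.interp(v - ai * dt, v, f[i, :], left=0.0, right=0.0)
            return f

    For the self-gravitating case, accel would be recomputed each step from the instantaneous density np.trapz(f, v, axis=1) via Poisson's equation. Because every row and column shifts independently, the scheme maps naturally onto a processor array like the MPP's.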

  1. Numerical solution of Space Shuttle Orbiter flow field including real gas effects

    NASA Technical Reports Server (NTRS)

    Prabhu, D. K.; Tannehill, J. C.

    1984-01-01

    The hypersonic, laminar flow around the Space Shuttle Orbiter has been computed for both an ideal gas (gamma = 1.2) and equilibrium air using a real-gas, parabolized Navier-Stokes code. This code employs a generalized coordinate transformation; hence, it places no restrictions on the orientation of the solution surfaces. The initial solution in the nose region was computed using a 3-D, real-gas, time-dependent Navier-Stokes code. The thermodynamic and transport properties of equilibrium air were obtained from either approximate curve fits or a table look-up procedure. Numerical results are presented for flight conditions corresponding to the STS-3 trajectory. The computed surface pressures and convective heating rates are compared with data from the STS-3 flight.

  2. Real-time and imaginary-time quantum hierarchal Fokker-Planck equations

    NASA Astrophysics Data System (ADS)

    Tanimura, Yoshitaka

    2015-04-01

    We consider a quantum mechanical system represented in phase space (referred to hereafter as "Wigner space"), coupled to a harmonic oscillator bath. We derive quantum hierarchal Fokker-Planck (QHFP) equations not only in real time but also in imaginary time, which represents an inverse temperature. This is an extension of a previous work, in which we studied a spin-boson system, to a Brownian system. It is shown that the QHFP in real time obtained from a correlated thermal equilibrium state of the total system possesses the same form as those obtained from a factorized initial state. A modified terminator for the hierarchal equations of motion is introduced to treat the non-Markovian case more efficiently. Using the imaginary-time QHFP, numerous thermodynamic quantities, including the free energy, entropy, internal energy, heat capacity, and susceptibility, can be evaluated for any potential. These equations allow us to treat non-Markovian, non-perturbative system-bath interactions at finite temperature. Through numerical integration of the real-time QHFP for a harmonic system, we obtain the equilibrium distributions, the auto-correlation function, and the first- and second-order response functions. These results are compared with analytically exact results for the same quantities. This provides a critical test of the formalism for a non-factorized thermal state and elucidates the roles of fluctuation, dissipation, non-Markovian effects, and system-bath coherence. Employing numerical solutions of the imaginary-time QHFP, we demonstrate the capability of this method to obtain thermodynamic quantities for any potential surface. It is shown that both types of QHFP equations can produce numerical results of any desired accuracy. The FORTRAN source codes that we developed, which allow for the treatment of Wigner space dynamics with any potential form (TanimuranFP15 and ImTanimuranFP15), are provided as the supplementary material.

  3. Phase space effects on fast ion distribution function modeling in tokamaks

    NASA Astrophysics Data System (ADS)

    Podestà, M.; Gorelenkova, M.; Fredrickson, E. D.; Gorelenkov, N. N.; White, R. B.

    2016-05-01

    Integrated simulations of tokamak discharges typically rely on classical physics to model energetic particle (EP) dynamics. However, there are numerous cases in which energetic particles can suffer additional transport that is not classical in nature. Examples include transport by applied 3D magnetic perturbations and, more notably, by plasma instabilities. Focusing on the effects of instabilities, ad-hoc models can empirically reproduce increased transport, but the choice of transport coefficients is usually somehow arbitrary. New approaches based on physics-based reduced models are being developed to address those issues in a simplified way, while retaining a more correct treatment of resonant wave-particle interactions. The kick model implemented in the tokamak transport code TRANSP is an example of such reduced models. It includes modifications of the EP distribution by instabilities in real and velocity space, retaining correlations between transport in energy and space typical of resonant EP transport. The relevance of EP phase space modifications by instabilities is first discussed in terms of predicted fast ion distribution. Results are compared with those from a simple, ad-hoc diffusive model. It is then shown that the phase-space resolved model can also provide additional insight into important issues such as internal consistency of the simulations and mode stability through the analysis of the power exchanged between energetic particles and the instabilities.

  4. Phase space effects on fast ion distribution function modeling in tokamaks

    DOE Data Explorer

    White, R. B. [Princeton Plasma Physics Lab. (PPPL), Princeton, NJ (United States); Podesta, M. [Princeton Plasma Physics Lab. (PPPL), Princeton, NJ (United States); Gorelenkova, M. [Princeton Plasma Physics Lab. (PPPL), Princeton, NJ (United States); Fredrickson, E. D. [Princeton Plasma Physics Lab. (PPPL), Princeton, NJ (United States); Gorelenkov, N. N. [Princeton Plasma Physics Lab. (PPPL), Princeton, NJ (United States)

    2016-06-01

    Integrated simulations of tokamak discharges typically rely on classical physics to model energetic particle (EP) dynamics. However, there are numerous cases in which energetic particles can suffer additional transport that is not classical in nature. Examples include transport by applied 3D magnetic perturbations and, more notably, by plasma instabilities. Focusing on the effects of instabilities, ad-hoc models can empirically reproduce increased transport, but the choice of transport coefficients is usually somehow arbitrary. New approaches based on physics-based reduced models are being developed to address those issues in a simplified way, while retaining a more correct treatment of resonant wave-particle interactions. The kick model implemented in the tokamak transport code TRANSP is an example of such reduced models. It includes modifications of the EP distribution by instabilities in real and velocity space, retaining correlations between transport in energy and space typical of resonant EP transport. The relevance of EP phase space modifications by instabilities is first discussed in terms of predicted fast ion distribution. Results are compared with those from a simple, ad-hoc diffusive model. It is then shown that the phase-space resolved model can also provide additional insight into important issues such as internal consistency of the simulations and mode stability through the analysis of the power exchanged between energetic particles and the instabilities.

  5. A Tool for Parameter-space Explorations

    NASA Astrophysics Data System (ADS)

    Murase, Yohsuke; Uchitane, Takeshi; Ito, Nobuyasu

    A software package for managing simulation jobs and results, named "OACIS", is presented. It controls a large number of simulation jobs executed on various remote servers, keeps the results in an organized way, and manages the analyses on these results. The software has a web browser front end, and users can easily submit various jobs to appropriate remote hosts from a web browser. After these jobs are finished, all the result files are automatically downloaded from the computational hosts and stored in a traceable way together with logs of the date, host, and elapsed time of the jobs. Some visualization functions are also provided so that users can easily grasp an overview of the results distributed in a high-dimensional parameter space. Thus, OACIS is especially beneficial for complex simulation models having many parameters, for which a lot of parameter searches are required. By using the OACIS API, it is easy to write code that automates parameter selection depending on the previous simulation results. A few examples of the automated parameter selection are also demonstrated.
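
    The result-driven parameter selection mentioned at the end can be sketched generically (Python; run_simulation stands in for a job submitted and harvested through the OACIS API, whose actual calls are not reproduced here):

        import random

        def auto_search(run_simulation, lo, hi, n_rounds=20, seed=0):
            """Zoom-in search: sample the interval, then repeatedly narrow the
            window around the best result returned so far."""
            rng = random.Random(seed)
            best_x, best_y = None, float("inf")
            for _ in range(n_rounds):
                x = rng.uniform(lo, hi)
                y = run_simulation(x)            # job result fetched from a remote host
                if y < best_y:
                    best_x, best_y = x, y
                    span = (hi - lo) / 4.0       # shrink the search window
                    lo, hi = best_x - span, best_x + span
            return best_x, best_y

        print(auto_search(lambda x: (x - 0.3) ** 2, 0.0, 1.0))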

  6. Performance and Application of Parallel OVERFLOW Codes on Distributed and Shared Memory Platforms

    NASA Technical Reports Server (NTRS)

    Djomehri, M. Jahed; Rizk, Yehia M.

    1999-01-01

    The presentation discusses recent studies on the performance of the two parallel versions of the aerodynamics CFD code, OVERFLOW_MPI and _MLP. Developed at NASA Ames, the serial version, OVERFLOW, is a multidimensional Navier-Stokes flow solver based on overset (Chimera) grid technology. The code has recently been parallelized in two ways. One is based on the explicit message-passing interface (MPI) across processors and uses the _MPI communication package. This approach is primarily suited for distributed memory systems and workstation clusters. The second, termed the multi-level parallel (MLP) method, is simple and uses shared memory for all communications. The _MLP code is suitable on distributed-shared memory systems. For both methods, the message passing takes place across the processors or processes at the advancement of each time step. This procedure is, in effect, the Chimera boundary conditions update, which is done in an explicit "Jacobi" style. In contrast, the update in the serial code is done in more of a "Gauss-Seidel" fashion. The programming effort for the _MPI code is greater than for the _MLP code; the former requires modification of the outer and some inner shells of the serial code, whereas the latter focuses only on the outer shell of the code. The _MPI version offers a great deal of flexibility in distributing grid zones across a specified number of processors in order to achieve load balancing. The approach is capable of partitioning zones across multiple processors or sending each zone and/or cluster of several zones into a single processor. The message passing across the processors consists of Chimera boundary and/or an overlap of "halo" boundary points for each partitioned zone. The MLP version is a new coarse-grain parallel concept at the zonal and intra-zonal levels. A grouping strategy is used to distribute zones into several groups forming sub-processes which will run in parallel. The total volume of grid points in each group is approximately balanced. A proper number of threads is initially allocated to each group, and in subsequent iterations during run time, the number of threads is adjusted to achieve load balancing across the processes. Each process exploits the multitasking directives already established in Overflow.

  7. PMARC_12 - PANEL METHOD AMES RESEARCH CENTER, VERSION 12

    NASA Technical Reports Server (NTRS)

    Ashby, D. L.

    1994-01-01

    Panel method computer programs are software tools of moderate cost used for solving a wide range of engineering problems. The panel code PMARC_12 (Panel Method Ames Research Center, version 12) can compute the potential flow field around complex three-dimensional bodies such as complete aircraft models. PMARC_12 is a well-documented, highly structured code with an open architecture that facilitates modifications and the addition of new features. Adjustable arrays are used throughout the code, with dimensioning controlled by a set of parameter statements contained in an include file; thus, the size of the code (i.e. the number of panels that it can handle) can be changed very quickly. This allows the user to tailor PMARC_12 to specific problems and computer hardware constraints. In addition, PMARC_12 can be configured (through one of the parameter statements in the include file) so that the code's iterative matrix solver is run entirely in RAM, rather than reading a large matrix from disk at each iteration. This significantly increases the execution speed of the code, but it requires a large amount of RAM memory. PMARC_12 contains several advanced features, including internal flow modeling, a time-stepping wake model for simulating either steady or unsteady (including oscillatory) motions, a Trefftz plane induced drag computation, off-body and on-body streamline computations, and computation of boundary layer parameters using a two-dimensional integral boundary layer method along surface streamlines. In a panel method, the surface of the body over which the flow field is to be computed is represented by a set of panels. Singularities are distributed on the panels to perturb the flow field around the body surfaces. PMARC_12 uses constant strength source and doublet distributions over each panel, thus making it a low order panel method. Higher order panel methods allow the singularity strength to vary linearly or quadratically across each panel. Experience has shown that low order panel methods can provide nearly the same accuracy as higher order methods over a wide range of cases with significantly reduced computation times; hence, the low order formulation was adopted for PMARC_12. The flow problem is solved by modeling the body as a closed surface dividing space into two regions: the region external to the surface in which an unknown velocity potential exists representing the flow field of interest, and the region internal to the surface in which a known velocity potential (representing a fictitious flow) is prescribed as a boundary condition. Both velocity potentials are required to satisfy Laplace's equation. A surface integral equation for the unknown potential external to the surface can be written by applying Green's Theorem to the external region. Using the internal potential and zero flow through the surface as boundary conditions, the unknown potential external to the surface can be solved for. When the internal flow option, which allows the analysis of closed ducts, wind tunnels, and similar internal flow problems, is selected, the geometry is modeled such that the flow field of interest is inside the geometry and the fictitious flow is outside the geometry. Items such as wings, struts, or aircraft models can be included in the internal flow problem. The time-stepping wake model gives PMARC_12 the ability to model both steady and unsteady flow problems. The wake is convected downstream from the wake-separation line by the local velocity field. 
With each time step, a new row of wake panels is added to the wake at the wake-separation line. Time stepping can start from time t=0 (no initial wake) or from time t=t0 (an initial wake is specified). A wide range of motions can be prescribed, including constant rates of translation, constant rate of rotation about an arbitrary axis, oscillatory translation, and oscillatory rotation about any of the three coordinate axes. Investigators interested in a visual representation of the phenomenon they are studying with PMARC_12 may want to consider obtaining the program GVS (ARC-13361), the General Visualization System. GVS is a Silicon Graphics IRIS program which was created for the purpose of supporting the scientific visualization needs of PMARC_12. GVS is available separately from COSMIC. PMARC_12 is written in standard FORTRAN 77, with the exception of the NAMELIST extension used for input. This makes the code fairly machine independent. A compiler which supports the NAMELIST extension is required. The amount of free disk space and RAM memory required for PMARC_12 will vary depending on how the code is dimensioned using the parameter statements in the include file. The recommended minimum requirements are 20Mb of free disk space and 4Mb of RAM. PMARC_12 has been successfully implemented on a Macintosh II running System 6.0.7 or 7.0 (using MPW/Language Systems Fortran 3.0), a Sun SLC running SunOS 4.1.1, an HP 720 running HP-UX 8.07, an SGI IRIS running IRIX 4.0 (it will not run under IRIX 3.x.x without modifications), an IBM RS/6000 running AIX, a DECstation 3100 running ULTRIX, and a CRAY-YMP running UNICOS 6.0 or later. Due to its memory requirements, this program does not readily lend itself to implementation on MS-DOS based machines. The standard distribution medium for PMARC_12 is a set of three 3.5 inch 800K Macintosh format diskettes and one 3.5 inch 1.44Mb Macintosh format diskette which contains an electronic copy of the documentation in MS Word 5.0 format for the Macintosh. Alternate distribution media and formats are available upon request, but these will not include the electronic version of the document. No executables are included on the distribution media. This program is an update to PMARC version 11, which was released in 1989. PMARC_12 was released in 1993. It is available only for use by United States citizens.
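
    The flavor of the low-order formulation is easy to demonstrate on the textbook case of a circular cylinder in a freestream. The Python sketch below is a deliberately crude cousin of the method: each panel's influence is approximated by a point source at its midpoint instead of the exact constant-strength panel integrals PMARC_12 uses, yet the computed surface pressures already approach the analytic result:

        import numpy as np

        N = 120
        th = (np.arange(N) + 0.5) * 2.0 * np.pi / N    # control-point angles
        xc, yc = np.cos(th), np.sin(th)                # unit cylinder surface
        nx, ny = np.cos(th), np.sin(th)                # outward normals
        ds = 2.0 * np.pi / N                           # panel length

        dx, dy = xc[:, None] - xc[None, :], yc[:, None] - yc[None, :]
        r2 = dx**2 + dy**2
        np.fill_diagonal(r2, 1.0)                      # placeholder, overwritten below
        A = (dx * nx[:, None] + dy * ny[:, None]) / (2.0 * np.pi * r2) * ds
        np.fill_diagonal(A, 0.5)                       # flat-panel self-induction
        sigma = np.linalg.solve(A, -nx)                # zero normal flow, U_inf = (1, 0)

        tx, ty = -np.sin(th), np.cos(th)               # surface tangents
        T = (dx * tx[:, None] + dy * ty[:, None]) / (2.0 * np.pi * r2) * ds
        np.fill_diagonal(T, 0.0)                       # no self-induced tangential flow
        cp = 1.0 - (tx + T @ sigma) ** 2               # surface pressure coefficient
        print(np.abs(cp - (1.0 - 4.0 * np.sin(th) ** 2)).max())  # small residual vs. exact

    The residual shrinks as N grows, which is the essential observation behind adopting the low-order formulation: for smooth bodies, simple singularity distributions already converge to the exact potential-flow solution.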

  8. Achieving unequal error protection with convolutional codes

    NASA Technical Reports Server (NTRS)

    Mills, D. G.; Costello, D. J., Jr.; Palazzo, R., Jr.

    1994-01-01

    This paper examines the unequal error protection capabilities of convolutional codes. Both time-invariant and periodically time-varying convolutional encoders are examined. The effective free distance vector is defined and is shown to be useful in determining the unequal error protection (UEP) capabilities of convolutional codes. A modified transfer function is used to determine an upper bound on the bit error probabilities for individual input bit positions in a convolutional encoder. The bound is heavily dependent on the individual effective free distance of the input bit position. A bound relating two individual effective free distances is presented. The bound is a useful tool in determining the maximum possible disparity in individual effective free distances of encoders of specified rate and memory distribution. The unequal error protection capabilities of convolutional encoders of several rates and memory distributions are determined and discussed.
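
    For a single-input encoder the effective free distance vector collapses to the ordinary free distance, which is straightforward to compute by a shortest-path search over the encoder trellis. A sketch for the standard rate-1/2, constraint-length-3 (7,5) encoder (Python; the bit-ordering convention inside branch is one common choice, not the paper's notation):

        import heapq

        def free_distance(gens=(0o7, 0o5), K=3):
            """Minimum Hamming weight over paths that leave and re-merge
            with the all-zero state (Dijkstra over the trellis)."""
            m = K - 1
            def branch(state, u):
                w = (u << m) | state                    # newest bit in the high position
                weight = sum(bin(w & g).count("1") & 1 for g in gens)
                return w >> 1, weight
            start, w0 = branch(0, 1)                    # diverge with input 1
            best, pq = {start: w0}, [(w0, start)]
            while pq:
                d, s = heapq.heappop(pq)
                if s == 0:
                    return d                            # first re-merge is optimal
                for u in (0, 1):
                    ns, wt = branch(s, u)
                    if d + wt < best.get(ns, 1 << 30):
                        best[ns] = d + wt
                        heapq.heappush(pq, (d + wt, ns))

        print(free_distance())   # -> 5 for the (7,5) code

    An encoder offers UEP when the analogous per-input-position minima differ, which is exactly what the effective free distance vector records.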

  9. Analysis of electrophoresis performance

    NASA Technical Reports Server (NTRS)

    Roberts, Glyn O.

    1988-01-01

    A flexible, efficient computer code is being developed to simulate electrophoretic separation phenomena in either a cylindrical or a rectangular geometry. The code will compute the evolution in time of the concentrations of an arbitrary number of chemical species, and of the temperature, pH distribution, conductivity, electric field, and fluid motion. Use of nonuniform meshes and fast, accurate implicit time-stepping will yield accurate answers at economical cost.

  10. Overview of the Graphical User Interface for the GERM Code (GCR Event-Based Risk Model

    NASA Technical Reports Server (NTRS)

    Kim, Myung-Hee; Cucinotta, Francis A.

    2010-01-01

    The descriptions of biophysical events from heavy ions are of interest in radiobiology, cancer therapy, and space exploration. The biophysical description of the passage of heavy ions in tissue and shielding materials is best described by a stochastic approach that includes both ion track structure and nuclear interactions. A new computer model called the GCR Event-based Risk Model (GERM) code was developed for the description of biophysical events from heavy ion beams at the NASA Space Radiation Laboratory (NSRL). The GERM code calculates basic physical and biophysical quantities of high-energy protons and heavy ions that have been studied at NSRL for the purpose of simulating space radiobiological effects. For mono-energetic beams, the code evaluates the linear-energy transfer (LET), range (R), and absorption in tissue equivalent material for a given Charge (Z), Mass Number (A) and kinetic energy (E) of an ion. In addition, a set of biophysical properties are evaluated such as the Poisson distribution of ion or delta-ray hits for a specified cellular area, cell survival curves, and mutation and tumor probabilities. The GERM code also calculates the radiation transport of the beam line for either a fixed number of user-specified depths or at multiple positions along the Bragg curve of the particle. The contributions from primary ion and nuclear secondaries are evaluated. The GERM code accounts for the major nuclear interaction processes of importance for describing heavy ion beams, including nuclear fragmentation, elastic scattering, and knockout-cascade processes by using the quantum multiple scattering fragmentation (QMSFRG) model. The QMSFRG model has been shown to be in excellent agreement with available experimental data for nuclear fragmentation cross sections, and has been used by the GERM code for application to thick target experiments. The GERM code provides scientists participating in NSRL experiments with the data needed for the interpretation of their experiments, including the ability to model the beam line, the shielding of samples and sample holders, and the estimates of basic physical and biological outputs of the designed experiments. We present an overview of the GERM code GUI, as well as providing training applications.

  11. NSSDC Data listing

    NASA Technical Reports Server (NTRS)

    1981-01-01

    A convenient reference to space science and supportive data available from the National Space Science Data Center (NSSDC) is provided. Satellite data are organized by NSSDC spacecraft common name. The launch date and NSSDC ID are given. Experiments are listed alphabetically by the principal investigator or team leader. The experiment name and NSSDC ID, data set ID, data set name, data form code, quantity of data, and the time span of the data as verified by NSSDC are shown. Ground-based data, models, computer routines, and composite spacecraft data that are available from NSSDC are listed alphabetically by discipline, source, data type, data content, and data set. The data set name, data form code, quantity of data, and the time span covered where appropriate are included.

  12. Numerical Modeling of Surface Deformation due to Magma Chamber Inflation/Deflation in a Heterogeneous Viscoelastic Half-space

    NASA Astrophysics Data System (ADS)

    Dichter, M.; Roy, M.

    2015-12-01

    Interpreting surface deformation patterns in terms of deeper processes in regions of active magmatism is challenging and inherently non-unique. This study focuses on interpreting the unusual sombrero-shaped pattern of surface deformation in the Altiplano Puna region of South America, which has previously been modeled as the effect of an upwelling diapir of material in the lower crust. Our goal is to investigate other possible interpretations of the surface deformation feature using a suite of viscoelastic models with varying material heterogeneity. We use the finite-element code PyLith to study surface deformation due to a buried time-varying (periodic) overpressure source, a magma body, at depth within a viscoelastic half-space. In our models, the magma-body is a penny-shaped crack, with a cylindrical region above the crack that is weak relative to the surrounding material. We initially consider a magma body within a homogeneous viscoelastic half-space to determine the effect of the free surface upon deformation above and beneath the source region. We observe a complex depth-dependent phase relationship between stress and strain for elements that fall between the ground surface and the roof of the magma body. Next, we consider a volume of weak material (faster relaxation time relative to background) that is distributed with varying geometry around the magma body. We investigate how surface deformation is governed by the spatial distribution of the weak material and its rheologic parameters. We are able to reproduce a "sombrero" pattern of surface velocities for a range of models with material heterogeneity. The wavelength of the sombrero pattern is primarily controlled by the extent of the heterogeneous region, modulated by flexural effects. Our results also suggest an "optimum overpressure forcing frequency" where the lifetime of the sombrero pattern (a transient phenomenon due to the periodic nature of the overpressure forcing) reaches a maximum. Through further research we hope to better understand how the parameter space of our forward model controls the distribution of surface deformation and eventually develop a better understanding of the observed pattern of surface deformation in the Altiplano Puna.
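
    As a point of reference for these simulations, the elastic, homogeneous end-member has a closed form: the Mogi point source, with surface uplift u_z(r) = (1 - ν) ΔV d / [π (d² + r²)^(3/2)]. Plotting it shows the simple axisymmetric bulge that the viscoelastic, heterogeneous models deform into the sombrero pattern (Python; source depth and volume change are illustrative, not Altiplano-Puna values):

        import numpy as np

        def mogi_uz(r, depth, dV, nu=0.25):
            """Vertical surface displacement of a point pressure source in a
            homogeneous elastic half-space (the Mogi model)."""
            return (1.0 - nu) * dV * depth / (np.pi * (depth**2 + r**2) ** 1.5)

        r = np.linspace(0.0, 60.0e3, 500)           # radial distance, m
        uz = mogi_uz(r, depth=20.0e3, dV=1.0e7)     # 0.01 km^3 volume change
        print(uz.max())                             # central uplift, m (~6 mm here)

    Unlike this elastic solution, which decays monotonically in r, the models described above can place a subsiding moat around the central uplift once a weak cylindrical region modulates the response.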

  13. GRADSPMHD: A parallel MHD code based on the SPH formalism

    NASA Astrophysics Data System (ADS)

    Vanaverbeke, S.; Keppens, R.; Poedts, S.

    2014-03-01

    We present GRADSPMHD, a completely Lagrangian parallel magnetohydrodynamics code based on the SPH formalism. The implementation of the equations of SPMHD in the “GRAD-h” formalism assembles known results, including the derivation of the discretized MHD equations from a variational principle, the inclusion of time-dependent artificial viscosity, resistivity and conductivity terms, as well as the inclusion of a mixed hyperbolic/parabolic correction scheme for satisfying the ∇·B = 0 constraint on the magnetic field. The code uses a tree-based formalism for neighbor finding and can optionally use the tree code for computing the self-gravity of the plasma. The structure of the code closely follows the framework of our parallel GRADSPH FORTRAN 90 code which we added previously to the CPC program library. We demonstrate the capabilities of GRADSPMHD by running 1, 2, and 3 dimensional standard benchmark tests and we find good agreement with previous work done by other researchers. The code is also applied to the problem of simulating the magnetorotational instability in 2.5D shearing box tests as well as in global simulations of magnetized accretion disks. We find good agreement with available results on this subject in the literature. Finally, we discuss the performance of the code on a parallel supercomputer with distributed memory architecture. Catalogue identifier: AERP_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AERP_v1_0.html Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 620503 No. of bytes in distributed program, including test data, etc.: 19837671 Distribution format: tar.gz Programming language: FORTRAN 90/MPI. Computer: HPC cluster. Operating system: Unix. Has the code been vectorized or parallelized?: Yes, parallelized using MPI. RAM: ~30 MB for a Sedov test including 15625 particles on a single CPU. Classification: 12. Nature of problem: Evolution of a plasma in the ideal MHD approximation. Solution method: The equations of magnetohydrodynamics are solved using the SPH method. Running time: The test provided takes approximately 20 min using 4 processors.

  14. Coastal Benthic Boundary Layer Special Research Program: A Review of the First Year. Volume 1.

    DTIC Science & Technology

    1994-04-06

    ... also indebted to Dr. LeBlanc for his supervision of the relaxation time numerical analyses, and Lachlan Munro, a graduate ONR AASERT student, for coding ... R. Smith and E. Besancon, Code 7174, Naval Research Laboratory, Stennis Space Center, MS 39529-5004. INTRODUCTION: This brief report outlines the...

  15. The Conversion of Found Space for Educational Use.

    ERIC Educational Resources Information Center

    Meier, James Paul

    This study explores some experiences in recycling buildings for schools and suggests a background to use in planning and evaluating this approach to school space acquisition. Such factors as educational program, physical environment, building codes, cost and financing, legal issues, administrative processes and time, and political and social…

  16. HEATPLOT: a temperature distribution plotting program for heating

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Elrod, D.C.; Turner, W.D.

    1977-07-01

    HEATPLOT is a temperature distribution plotting program that may be used with HEATING5, a generalized heat conduction code. HEATPLOT is capable of drawing temperature contours (isotherms), temperature-time profiles, and temperature-distance profiles from the current HEATING5 temperature distribution or from temperature changes relative to the initial temperature distribution. Contour plots may be made for two- or three-dimensional models. Temperature-time profiles and temperature-distance profiles may be made for one-, two-, and three-dimensional models. HEATPLOT is an IBM 360/370 computer code which uses the DISSPLA plotting package. Plots may be created on the CALCOMP pen-and-ink plotter, the CALCOMP cathode ray tube (CRT) plotter, or the EAI pen-and-ink plotter. Printer plots may also be produced, or a compressed data set may be made that can be routed to any of the available plotters.

  17. The solar ionisation rate deduced from Ulysses measurements and its implications to interplanetary Lyman alpha-intensity

    NASA Technical Reports Server (NTRS)

    Summanen, T.; Kyroelae, E.

    1995-01-01

    We have developed a computer code which can be used to study 3-dimensional and time-dependent effects of the solar cycle on the interplanetary (IP) hydrogen distribution. The code is based on an inverted Monte Carlo simulation. In this work we have modelled the temporal behaviour of the solar ionisation rate. We have assumed that during most of the solar cycle there is an anisotropic latitudinal structure, but that right at the solar maximum the anisotropy disappears. The effects of this behaviour are discussed with regard to both the IP hydrogen distribution and the IP Lyman-alpha intensity.

  18. TORT/MCNP coupling method for the calculation of neutron flux around a core of BWR.

    PubMed

    Kurosawa, Masahiko

    2005-01-01

    For the analysis of BWR neutronics performance, accurate data are required for the neutron flux distribution over the in-reactor pressure vessel equipment, taking into account the detailed geometrical arrangement. The TORT code can calculate the neutron flux around a core of a BWR in a three-dimensional geometry model, but has difficulties with fine geometrical modelling and demands huge computer resources. On the other hand, the MCNP code enables the calculation of the neutron flux with a detailed geometry model, but requires very long sampling times to obtain a sufficient number of particles. Therefore, a TORT/MCNP coupling method has been developed to eliminate these two problems in each code. In this method, the TORT code calculates the angular flux distribution on the core surface and the MCNP code calculates the neutron spectrum at the points of interest using that flux distribution. The coupling method will be used in the same way as the DOT-DOMINO-MORSE code system. This TORT/MCNP coupling method was applied to calculate the neutron flux at points where induced radioactivity data were measured for 54Mn and 60Co, and the radioactivity calculations based on the neutron flux obtained with this method were compared with the measured data.
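
    The coupling step, using a deterministic surface flux as a Monte Carlo source, can be illustrated with a generic sketch in Python; the tabulated flux, its binning, and the function names below are hypothetical stand-ins, since the abstract does not describe the actual TORT/MCNP file formats or interfaces:

        import numpy as np

        # Hypothetical tabulated angular flux on the core surface:
        # flux[i, j] = flux in surface patch i, discrete direction j (arbitrary units).
        rng = np.random.default_rng(42)
        flux = rng.random((100, 8))  # stand-in for a TORT surface-flux table

        # Build a discrete CDF over (patch, direction) bins, weighted by flux.
        pdf = flux.ravel() / flux.sum()
        cdf = np.cumsum(pdf)

        def sample_source_particles(n):
            """Sample (patch, direction) start bins for n Monte Carlo histories
            proportionally to the deterministic surface flux."""
            u = rng.random(n)
            flat = np.searchsorted(cdf, u)
            patch, direction = np.unravel_index(flat, flux.shape)
            return patch, direction

        patches, directions = sample_source_particles(10_000)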

  19. Terminal Homing for Autonomous Underwater Vehicle Docking

    DTIC Science & Technology

    2016-06-01

    ...underwater domain, accurate navigation. Above the water, light and electromagnetic signals travel well through air and space, media that allow for a... The use of docking stations for autonomous underwater vehicles (AUV) provides the ability to keep a vehicle on...

  20. Haloes gone MAD: The Halo-Finder Comparison Project

    NASA Astrophysics Data System (ADS)

    Knebe, Alexander; Knollmann, Steffen R.; Muldrew, Stuart I.; Pearce, Frazer R.; Aragon-Calvo, Miguel Angel; Ascasibar, Yago; Behroozi, Peter S.; Ceverino, Daniel; Colombi, Stephane; Diemand, Juerg; Dolag, Klaus; Falck, Bridget L.; Fasel, Patricia; Gardner, Jeff; Gottlöber, Stefan; Hsu, Chung-Hsing; Iannuzzi, Francesca; Klypin, Anatoly; Lukić, Zarija; Maciejewski, Michal; McBride, Cameron; Neyrinck, Mark C.; Planelles, Susana; Potter, Doug; Quilis, Vicent; Rasera, Yann; Read, Justin I.; Ricker, Paul M.; Roy, Fabrice; Springel, Volker; Stadel, Joachim; Stinson, Greg; Sutter, P. M.; Turchaninov, Victor; Tweed, Dylan; Yepes, Gustavo; Zemp, Marcel

    2011-08-01

    We present a detailed comparison of fundamental dark matter halo properties retrieved by a substantial number of different halo finders. These codes span a wide range of techniques including friends-of-friends, spherical-overdensity and phase-space-based algorithms. We further introduce a robust (and publicly available) suite of test scenarios that allow halo finder developers to compare the performance of their codes against those presented here. This set includes mock haloes containing various levels and distributions of substructure at a range of resolutions as well as a cosmological simulation of the large-scale structure of the universe. All the halo-finding codes tested could successfully recover the spatial location of our mock haloes. They further returned lists of particles (potentially) belonging to the object that led to coinciding values for the maximum of the circular velocity profile and the radius where it is reached. All the finders based in configuration space struggled to recover substructure that was located close to the centre of the host halo, and the radial dependence of the mass recovered varies from finder to finder. Those finders based in phase space could resolve central substructure although they found difficulties in accurately recovering its properties. Through a resolution study we found that most of the finders could not reliably recover substructure containing fewer than 30-40 particles. Here, too, the phase-space finders excelled by resolving substructure down to 10-20 particles. By comparing the halo finders using a high-resolution cosmological volume, we found that they agree remarkably well on fundamental properties of astrophysical significance (e.g. mass, position, velocity and peak of the rotation curve). We further suggest utilizing the peak of the rotation curve, vmax, as a proxy for mass, given the arbitrariness in defining a proper halo edge.

  1. Focusing Intense Charged Particle Beams with Achromatic Effects for Heavy Ion Fusion

    NASA Astrophysics Data System (ADS)

    Mitrani, James; Kaganovich, Igor

    2012-10-01

    Final focusing systems designed to minimize the effects of chromatic aberrations in the Neutralized Drift Compression Experiment (NDCX-II) are described. NDCX-II is a linear induction accelerator, designed to accelerate short bunches at high current. Previous experiments showed that neutralized drift compression significantly compresses the beam longitudinally (~60x) in the z-direction, resulting in a narrow distribution in z-space, but a wide distribution in pz-space. Using simple lenses (e.g., solenoids, quadrupoles) to focus beam bunches with wide distributions in pz-space results in chromatic aberrations, leading to lower beam intensities (J/cm^2). Therefore, the final focusing system must be designed to compensate for chromatic aberrations. The paraxial ray equations and beam envelope equations are numerically solved for parameters appropriate to NDCX-II. Based on these results, conceptual designs for final focusing systems using a combination of solenoids and/or quadrupoles are optimized to compensate for chromatic aberrations. Lens aberrations and emittance growth will be investigated, and analytical results will be compared with results from numerical particle-in-cell (PIC) simulation codes.
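
    For reference, the beam envelope equation solved in such studies is typically of the Kapchinskij-Vladimirskij form; for a round beam of radius R(z) in a focusing channel of strength κ(z), with emittance ε and space-charge perveance K (a generic textbook statement, not necessarily the exact variant used for NDCX-II):

        R''(z) + \kappa(z)\,R(z) - \frac{K}{R(z)} - \frac{\varepsilon^2}{R^3(z)} = 0.

    Chromatic aberration enters through the momentum dependence of the focusing strength: for a solenoid, κ scales as p_z^{-2}, so a spread in p_z maps directly into a spread of focal lengths.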

  2. Space Physics Data Facility Web Services

    NASA Technical Reports Server (NTRS)

    Candey, Robert M.; Harris, Bernard T.; Chimiak, Reine A.

    2005-01-01

    The Space Physics Data Facility (SPDF) Web services provide a distributed programming interface to a portion of the SPDF software. (A general description of Web services is available at http://www.w3.org/ and in many current software-engineering texts and articles focused on distributed programming.) The SPDF Web services distributed programming interface enables additional collaboration and integration of the SPDF software system with other software systems, in furtherance of the SPDF mission to lead collaborative efforts in the collection and utilization of space physics data and mathematical models. This programming interface conforms to all applicable Web services specifications of the World Wide Web Consortium. The interface is specified by a Web Services Description Language (WSDL) file. The SPDF Web services software consists of the following components: 1) A server program for implementation of the Web services; and 2) A software developer's kit that consists of a WSDL file, a less formal description of the interface, a Java class library (which further eases development of Java-based client software), and Java source code for an example client program that illustrates the use of the interface.
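
    A minimal sketch of a WSDL-driven client, assuming the third-party Python zeep library; the WSDL URL shown is a hypothetical placeholder, and the commented operation name is likewise hypothetical (the real endpoint and operations are defined by the SPDF WSDL file described above):

        from zeep import Client

        # Hypothetical WSDL URL -- substitute the location published in the
        # SPDF software developer's kit.
        client = Client("https://example.gsfc.nasa.gov/spdf/services?wsdl")

        # Operation names come from the WSDL; this one is hypothetical.
        # observatories = client.service.getObservatories()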

  3. A general modeling framework for describing spatially structured population dynamics

    USGS Publications Warehouse

    Sample, Christine; Fryxell, John; Bieri, Joanna; Federico, Paula; Earl, Julia; Wiederholt, Ruscena; Mattsson, Brady; Flockhart, Tyler; Nicol, Sam; Diffendorfer, James E.; Thogmartin, Wayne E.; Erickson, Richard A.; Norris, D. Ryan

    2017-01-01

    Variation in movement across time and space fundamentally shapes the abundance and distribution of populations. Although a variety of approaches model structured population dynamics, they are limited to specific types of spatially structured populations and lack a unifying framework. Here, we propose a unified network-based framework sufficiently flexible to capture a wide variety of spatiotemporal processes including metapopulations and a range of migratory patterns. It can accommodate different kinds of age structures, forms of population growth, dispersal, nomadism and migration, and alternative life-history strategies. Our objective was to link three general elements common to all spatially structured populations (space, time and movement) under a single mathematical framework. To do this, we adopt a network modeling approach. The spatial structure of a population is represented by a weighted and directed network. Each node and each edge has a set of attributes which vary through time. The dynamics of our network-based population is modeled with discrete time steps. Using both theoretical and real-world examples, we show how common elements recur across species with disparate movement strategies and how they can be combined under a unified mathematical framework. We illustrate how metapopulations, various migratory patterns, and nomadism can be represented with this modeling approach. We also apply our network-based framework to four organisms spanning a wide range of life histories, movement patterns, and carrying capacities. General computer code to implement our framework is provided, which can be applied to almost any spatially structured population. This framework contributes to our theoretical understanding of population dynamics and has practical management applications, including understanding the impact of perturbations on population size, distribution, and movement patterns. By working within a common framework, there is less chance that comparative analyses are colored by model details rather than general principles.
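
    A minimal sketch of the network representation described here, using Python and networkx; the node and edge attributes, growth rates, and movement weights are illustrative stand-ins rather than the paper's actual parameterization (the paper provides its own general code):

        import networkx as nx

        # Directed network: nodes are habitat patches, edges carry movement weights.
        g = nx.DiGraph()
        g.add_node("breeding", population=1000.0, growth=1.10)
        g.add_node("wintering", population=0.0, growth=1.00)
        g.add_edge("breeding", "wintering", weight=0.9)   # fraction moving per step
        g.add_edge("wintering", "breeding", weight=0.95)

        def step(graph):
            """One discrete time step: local growth, then movement along edges."""
            grown = {n: d["population"] * d["growth"] for n, d in graph.nodes(data=True)}
            new_pop = dict(grown)
            for u, v, d in graph.edges(data=True):
                moved = grown[u] * d["weight"]
                new_pop[u] -= moved
                new_pop[v] += moved
            nx.set_node_attributes(graph, new_pop, "population")

        for _ in range(10):
            step(g)
        print({n: round(d["population"], 1) for n, d in g.nodes(data=True)})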

  4. Parallelization Issues and Particle-In-Cell Codes.

    NASA Astrophysics Data System (ADS)

    Elster, Anne Cathrine

    1994-01-01

    "Everything should be made as simple as possible, but not simpler." Albert Einstein. The field of parallel scientific computing has concentrated on parallelization of individual modules such as matrix solvers and factorizers. However, many applications involve several interacting modules. Our analyses of a particle-in-cell code modeling charged particles in an electric field, show that these accompanying dependencies affect data partitioning and lead to new parallelization strategies concerning processor, memory and cache utilization. Our test-bed, a KSR1, is a distributed memory machine with a globally shared addressing space. However, most of the new methods presented hold generally for hierarchical and/or distributed memory systems. We introduce a novel approach that uses dual pointers on the local particle arrays to keep the particle locations automatically partially sorted. Complexity and performance analyses with accompanying KSR benchmarks, have been included for both this scheme and for the traditional replicated grids approach. The latter approach maintains load-balance with respect to particles. However, our results demonstrate it fails to scale properly for problems with large grids (say, greater than 128-by-128) running on as few as 15 KSR nodes, since the extra storage and computation time associated with adding the grid copies, becomes significant. Our grid partitioning scheme, although harder to implement, does not need to replicate the whole grid. Consequently, it scales well for large problems on highly parallel systems. It may, however, require load balancing schemes for non-uniform particle distributions. Our dual pointer approach may facilitate this through dynamically partitioned grids. We also introduce hierarchical data structures that store neighboring grid-points within the same cache -line by reordering the grid indexing. This alignment produces a 25% savings in cache-hits for a 4-by-4 cache. A consideration of the input data's effect on the simulation may lead to further improvements. For example, in the case of mean particle drift, it is often advantageous to partition the grid primarily along the direction of the drift. The particle-in-cell codes for this study were tested using physical parameters, which lead to predictable phenomena including plasma oscillations and two-stream instabilities. An overview of the most central references related to parallel particle codes is also given.

  5. A Rendering System Independent High Level Architecture Implementation for Networked Virtual Environments

    DTIC Science & Technology

    2002-09-01

    [Table-of-contents and figure-list excerpts only: Time Management; Data Distribution Management; Ownership Management; Additional Objects and Interactions; Figure 6, Data Distribution Management (from ref. 2); Figure 7, RTI and Federate Code Responsibilities (from ref. 2).]

  6. Exposure calculation code module for reactor core analysis: BURNER

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vondy, D.R.; Cunningham, G.W.

    1979-02-01

    The code module BURNER for nuclear reactor exposure calculations is presented. The computer requirements are shown, as are the reference data and interface data file requirements, and the programmed equations and procedure of calculation are described. The operating history of a reactor is followed over the period between solutions of the space-energy neutronics problem. The end-of-period nuclide concentrations are determined given the necessary information. A steady state, continuous fueling model is treated in addition to the usual fixed fuel model. The control options provide flexibility to select among an unusually wide variety of programmed procedures. The code also provides a user option to make a number of auxiliary calculations and print such information as the local gamma source, cumulative exposure, and a fine-scale power density distribution in a selected zone. The code is used locally in a system for computation which contains the VENTURE diffusion theory neutronics code and other modules.

  7. Real-space processing of helical filaments in SPARX

    PubMed Central

    Behrmann, Elmar; Tao, Guozhi; Stokes, David L.; Egelman, Edward H.; Raunser, Stefan; Penczek, Pawel A.

    2012-01-01

    We present a major revision of the iterative helical real-space refinement (IHRSR) procedure and its implementation in the SPARX single particle image processing environment. We built on over a decade of experience with IHRSR helical structure determination and took advantage of the flexible SPARX infrastructure to arrive at an implementation that offers ease of use, flexibility in designing a helical structure determination strategy, and high computational efficiency. We introduced 3D projection matching code that is now able to work with non-cubic volumes, a geometry better suited for long helical filaments; we enhanced procedures for establishing helical symmetry parameters; and we parallelized the code using a distributed-memory paradigm. An additional feature is a graphical user interface that facilitates entering and editing the parameters controlling the structure determination strategy of the program. In addition, we present a novel approach to detect and evaluate structural heterogeneity due to conformer mixtures that takes advantage of helical structure redundancy. PMID:22248449

  8. Supercomputing Aspects for Simulating Incompressible Flow

    NASA Technical Reports Server (NTRS)

    Kwak, Dochan; Kiris, Cetin C.

    2000-01-01

    The primary objective of this research is to support the design of liquid rocket systems for the Advanced Space Transportation System. Since the space launch systems in the near future are likely to rely on liquid rocket engines, increasing the efficiency and reliability of the engine components is an important task. One of the major problems in the liquid rocket engine is to understand the fluid dynamics of fuel and oxidizer flows from the fuel tank to the plume. Understanding the flow through the entire turbo-pump geometry through numerical simulation will be of significant value toward design. One of the milestones of this effort is to develop, apply and demonstrate the capability and accuracy of 3D CFD methods as efficient design analysis tools on high performance computer platforms. The development of the Message Passing Interface (MPI) and Multi Level Parallel (MLP) versions of the INS3D code is currently underway. The serial version of the INS3D code is a multidimensional incompressible Navier-Stokes solver based on overset grid technology. INS3D-MPI is based on the explicit message-passing interface across processors and is primarily suited for distributed memory systems. INS3D-MLP is based on the multi-level parallel method and is suitable for distributed-shared memory systems. For the entire turbo-pump simulations, moving boundary capability and efficient time-accurate integration methods are built into the flow solver. To handle the geometric complexity and moving boundary problems, an overset grid scheme is incorporated with the solver so that new connectivity data will be obtained at each time step. The Chimera overlapped grid scheme allows subdomains to move relative to each other, and provides great flexibility when the boundary movement creates large displacements. Two numerical procedures, one based on the artificial compressibility method and the other on the pressure projection method, are outlined for obtaining time-accurate solutions of the incompressible Navier-Stokes equations. The performance of the two methods is compared by obtaining unsteady solutions for the evolution of twin vortices behind a flat plate. Calculated results are compared with experimental and other numerical results. For an unsteady flow, which requires a small physical time step, the pressure projection method was found to be computationally efficient since it does not require any subiteration procedure. It was observed that the artificial compressibility method requires a fast convergence scheme at each physical time step in order to satisfy the incompressibility condition. This was obtained by using a GMRES-ILU(0) solver in the present computations. When a line-relaxation scheme was used, the time accuracy was degraded and time-accurate computations became very expensive.
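
    In schematic form (hedged, since the INS3D-specific discretizations are not reproduced in this summary), the two procedures differ in how they enforce ∇·u = 0. Artificial compressibility adds a pseudo-time pressure derivative that must be subiterated to convergence at each physical time step,

        \frac{1}{\beta}\frac{\partial p}{\partial \tau} + \nabla\cdot\mathbf{u} = 0,

    while the pressure projection method corrects an intermediate velocity u* with a single pressure Poisson solve per step,

        \nabla^2 p = \frac{\rho}{\Delta t}\,\nabla\cdot\mathbf{u}^*, \qquad \mathbf{u}^{n+1} = \mathbf{u}^* - \frac{\Delta t}{\rho}\,\nabla p,

    which is why the projection approach avoids the subiteration cost noted above.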

  9. Load-Following Power Timeline Analyses for the International Space Station

    NASA Technical Reports Server (NTRS)

    Fincannon, James; Delleur, Ann; Green, Robert; Hojnicki, Jeffrey

    1996-01-01

    Spacecraft are typically complex assemblies of interconnected systems and components that have highly time-varying thermal, communications, and power requirements. It is essential that systems designers be able to assess the capability of the spacecraft to meet these requirements, which should represent a realistic projection of the demand for these resources once the vehicle is on orbit. To accomplish this assessment from the power standpoint, a computer code called ECAPS has been developed at NASA Lewis Research Center that performs a load-driven analysis of a spacecraft power system given time-varying distributed loading and other mission data. This program is uniquely capable of synthesizing all of the changing spacecraft conditions into a single, seamless analysis for a complete mission. This paper presents example power load timelines with which numerous data are integrated to provide a realistic assessment of the load-following capabilities of the power system. Results of the analyses show how well the power system can meet the time-varying power resource demand.

  10. Synthesizing spatiotemporally sparse smartphone sensor data for bridge modal identification

    NASA Astrophysics Data System (ADS)

    Ozer, Ekin; Feng, Maria Q.

    2016-08-01

    Smartphones as vibration measurement instruments form a large-scale, citizen-induced, and mobile wireless sensor network (WSN) for system identification and structural health monitoring (SHM) applications. Crowdsourcing-based SHM is possible with a decentralized system granting citizens operational responsibility and control. Yet citizen initiatives introduce device mobility, drastically changing SHM results due to uncertainties in the time and space domains. This paper proposes a modal identification strategy that fuses spatiotemporally sparse SHM data collected by smartphone-based WSNs. Multichannel data sampled independently in time and space are used to compose modal identification parameters such as frequencies and mode shapes. Structural response time histories can be gathered by smartphone accelerometers and converted into Fourier spectra by the processor units. Timestamps, data length, and energy-to-power conversion address temporal variation, whereas spatial uncertainties are reduced by geolocation services or by determining node identity via QR code labels. Parameters collected from each distributed network component can then be extended to global behavior to deduce modal parameters without the need for a centralized and synchronous data acquisition system. The proposed method is tested on a pedestrian bridge and compared with a conventional reference monitoring system. The results show that spatiotemporally sparse mobile WSN data can be used to infer modal parameters despite non-overlapping sensor operation schedules.
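
    A minimal sketch of the per-device processing step (acceleration time history → Fourier spectrum → candidate modal frequency), using numpy; the windowing, units, and the fusion across devices are simplified relative to the paper's method:

        import numpy as np

        def dominant_frequency(accel: np.ndarray, fs: float) -> float:
            """Return the frequency (Hz) of the largest spectral peak in an
            acceleration record sampled at fs Hz (DC bin excluded)."""
            windowed = accel * np.hanning(len(accel))
            spectrum = np.abs(np.fft.rfft(windowed))
            freqs = np.fft.rfftfreq(len(accel), d=1.0 / fs)
            return freqs[1 + np.argmax(spectrum[1:])]

        # Synthetic 2.1 Hz bridge-like response sampled at 100 Hz.
        fs = 100.0
        t = np.arange(0, 60, 1 / fs)
        record = np.sin(2 * np.pi * 2.1 * t) + 0.3 * np.random.randn(t.size)
        print(dominant_frequency(record, fs))  # ~2.1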

  11. Description of real-time Ada software implementation of a power system monitor for the Space Station Freedom PMAD DC testbed

    NASA Technical Reports Server (NTRS)

    Ludwig, Kimberly; Mackin, Michael; Wright, Theodore

    1991-01-01

    The authors describe the Ada language software developed to perform the electrical power system monitoring functions for the NASA Lewis Research Center's Power Management and Distribution (PMAD) DC testbed. The results of the effort to implement this monitor are presented. The PMAD DC testbed is a reduced-scale prototype of the electric power system to be used in Space Station Freedom. The power is controlled by smart switches known as power control components (or switchgear). The power control components are currently coordinated by five Compaq 386/20e computers connected through an 802.4 local area network. The power system monitor algorithm comprises several functions, including periodic data acquisition, data smoothing, system performance analysis, and status reporting. Data are collected from the switchgear sensors every 100 ms, then passed through a 2-Hz digital filter. System performance analysis includes power interruption and overcurrent detection. The system monitor required a hardware timer interrupt to activate the data acquisition function. The execution time of the code was optimized by using an assembly language routine. The routine allows direct vectoring of the processor to Ada language procedures that perform periodic control activities.

  12. Finite-difference fluid dynamics computer mathematical models for the design and interpretation of experiments for space flight. [atmospheric general circulation experiment, convection in a float zone, and the Bridgman-Stockbarger crystal growing system

    NASA Technical Reports Server (NTRS)

    Roberts, G. O.; Fowlis, W. W.; Miller, T. L.

    1984-01-01

    Numerical methods are used to design a spherical baroclinic flow model experiment of the large-scale atmospheric flow for Spacelab. The dielectric simulation of radial gravity is only dominant in a low-gravity environment. Computer codes are developed to study the processes at work in crystal growing systems which are also candidates for space flight. Crystalline materials rarely achieve their potential properties because of imperfections and component concentration variations. Thermosolutal convection in the liquid melt can be the cause of these imperfections. Such convection is suppressed in a low-gravity environment. Two- and three-dimensional finite difference codes are being used for this work. Nonuniform meshes and implicit iterative methods are used. The iterative method for steady solutions is based on time stepping but has the options of different time steps for velocity and temperature and of a time step varying smoothly with position according to specified powers of the mesh spacings. This allows for more rapid convergence. The code being developed for the crystal growth studies allows for growth of the crystal at the solid-liquid interface. The moving interface is followed using finite differences; shape variations are permitted. For convenience in applying finite differences in the solid and liquid, a time-dependent coordinate transformation is used to make this interface a coordinate surface.

  13. Analysis of the Space Propulsion System Problem Using RAVEN

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mandelli, Diego; Smith, Curtis; Rabiti, Cristian

    This paper presents the solution of the space propulsion problem using a PRA code currently under development at Idaho National Laboratory (INL). RAVEN (Reactor Analysis and Virtual control ENvironment) is a multi-purpose Probabilistic Risk Assessment (PRA) software framework that allows dispatching different functionalities. It is designed to derive and actuate the control logic required to simulate the plant control system and operator actions (guided procedures) and to perform both Monte Carlo sampling of randomly distributed events and Event Tree based analysis. In order to facilitate the input/output handling, a Graphical User Interface (GUI) and a post-processing data-mining module are available. RAVEN also allows interfacing with several numerical codes such as RELAP5 and RELAP-7 and ad hoc system simulators. For the space propulsion system problem, an ad hoc simulator has been developed, written in the Python language, and interfaced to RAVEN. This simulator fully models both deterministic behaviors (e.g., system dynamics and interactions between system components) and stochastic behaviors (i.e., failures of components/systems such as distribution lines and thrusters). Stochastic analysis is performed using random-sampling-based methodologies (i.e., Monte Carlo). This analysis is performed both to determine the reliability of the space propulsion system and to propagate the uncertainties associated with a specific set of parameters. As also indicated in the scope of the benchmark problem, the results generated by the stochastic analysis are used to generate risk-informed insights, such as the conditions under which different strategies can be followed.
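
    The stochastic part of the analysis, Monte Carlo sampling of component failures to estimate system reliability, can be sketched generically in Python; the component names, failure rates, mission time, and success criterion below are all hypothetical (the actual simulator described above also models the full system dynamics):

        import random

        # Hypothetical exponential failure rates (per hour) and mission time.
        RATES = {"line_A": 1e-4, "line_B": 1e-4, "thruster_1": 5e-5, "thruster_2": 5e-5}
        MISSION_HOURS = 5000.0

        def mission_succeeds() -> bool:
            """One Monte Carlo history: sample failure times; the (hypothetical)
            success criterion is one surviving line and one surviving thruster."""
            t = {c: random.expovariate(r) for c, r in RATES.items()}
            line_ok = max(t["line_A"], t["line_B"]) > MISSION_HOURS
            thruster_ok = max(t["thruster_1"], t["thruster_2"]) > MISSION_HOURS
            return line_ok and thruster_ok

        n = 100_000
        reliability = sum(mission_succeeds() for _ in range(n)) / n
        print(f"estimated mission reliability: {reliability:.4f}")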

  14. User's manual for three dimensional FDTD version D code for scattering from frequency-dependent dielectric and magnetic materials

    NASA Technical Reports Server (NTRS)

    Beggs, John H.; Luebbers, Raymond J.; Kunz, Karl S.

    1992-01-01

    The Penn State Finite Difference Time Domain Electromagnetic Scattering Code version D is a 3-D numerical electromagnetic scattering code based upon the finite difference time domain technique (FDTD). The manual provides a description of the code and corresponding results for several scattering problems. The manual is organized into 14 sections: introduction; description of the FDTD method; operation; resource requirements; version D code capabilities; a brief description of the default scattering geometry; a brief description of each subroutine; a description of the include file; a section briefly discussing Radar Cross Section computations; a section discussing some scattering results; a sample problem setup section; a new problem checklist; references and figure titles. The FDTD technique models transient electromagnetic scattering and interactions with objects of arbitrary shape and/or material composition. In the FDTD method, Maxwell's curl equations are discretized in time-space and all derivatives (temporal and spatial) are approximated by central differences.
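
    As an illustration of the central-difference discretization the manual describes, the one-dimensional vacuum form of the Yee leapfrog update can be written as (a generic statement of the FDTD technique, not a transcription of the version D code):

        E_z^{n+1}(i) = E_z^{n}(i) + \frac{\Delta t}{\varepsilon_0\,\Delta x}\left[H_y^{n+1/2}(i+\tfrac{1}{2}) - H_y^{n+1/2}(i-\tfrac{1}{2})\right],
        H_y^{n+1/2}(i+\tfrac{1}{2}) = H_y^{n-1/2}(i+\tfrac{1}{2}) + \frac{\Delta t}{\mu_0\,\Delta x}\left[E_z^{n}(i+1) - E_z^{n}(i)\right],

    with the electric and magnetic fields staggered by half a step in both time and space.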

  15. Analytical and Experimental Evaluation of the Heat Transfer Distribution over the Surfaces of Turbine Vanes

    NASA Technical Reports Server (NTRS)

    Hylton, L. D.; Mihelc, M. S.; Turner, E. R.; Nealy, D. A.; York, R. E.

    1983-01-01

    Three airfoil data sets were selected for use in evaluating currently available analytical models for predicting airfoil surface heat transfer distributions in a 2-D flow field. Two additional airfoils, representative of highly loaded, low solidity airfoils currently being designed, were selected for cascade testing at simulated engine conditions. Some 2-D analytical methods were examined and a version of the STAN5 boundary layer code was chosen for modification. The final form of the method utilized a time dependent, transonic inviscid cascade code coupled to a modified version of the STAN5 boundary layer code featuring zero order turbulence modeling. The boundary layer code is structured to accommodate a full spectrum of empirical correlations addressing the coupled influences of pressure gradient, airfoil curvature, and free-stream turbulence on airfoil surface heat transfer distribution and boundary layer transitional behavior. Comparison of predictions made with the model to the data base indicates a significant improvement in predictive capability.

  16. Analytical and experimental evaluation of the heat transfer distribution over the surfaces of turbine vanes

    NASA Astrophysics Data System (ADS)

    Hylton, L. D.; Mihelc, M. S.; Turner, E. R.; Nealy, D. A.; York, R. E.

    1983-05-01

    Three airfoil data sets were selected for use in evaluating currently available analytical models for predicting airfoil surface heat transfer distributions in a 2-D flow field. Two additional airfoils, representative of highly loaded, low solidity airfoils currently being designed, were selected for cascade testing at simulated engine conditions. Some 2-D analytical methods were examined and a version of the STAN5 boundary layer code was chosen for modification. The final form of the method utilized a time dependent, transonic inviscid cascade code coupled to a modified version of the STAN5 boundary layer code featuring zero order turbulence modeling. The boundary layer code is structured to accommodate a full spectrum of empirical correlations addressing the coupled influences of pressure gradient, airfoil curvature, and free-stream turbulence on airfoil surface heat transfer distribution and boundary layer transitional behavior. Comparison of predictions made with the model to the data base indicates a significant improvement in predictive capability.

  17. From Earth to Mars, Radiation Intensities in Interplanetary Space

    NASA Astrophysics Data System (ADS)

    O'Brien, Keran

    2007-10-01

    The radiation field in interplanetary space between Earth and Mars is rather intense. Using a modified version of the ATROPOS Monte Carlo code combined with a modified version of the deterministic code PLOTINUS, the effective dose rate to crew members in a spacecraft hull shielded with a shell of 2 g/cm^2 of aluminum and 20 g/cm^2 of polyethylene was calculated to be 51 rem/y. The total dose during the solar-particle event of September 29, 1989 (GLE 42) was calculated to be 50 rem. The dose in a "storm cellar" of 100 g/cm^2 of polyethylene equivalent during this time was calculated to be 5 rem. The calculations were for conditions corresponding to a recent solar minimum.

  18. Principal Investigator Microgravity Services Role in ISS Acceleration Data Distribution

    NASA Technical Reports Server (NTRS)

    McPherson, Kevin

    1999-01-01

    Measurement of the microgravity acceleration environment on the International Space Station will be accomplished by two accelerometer systems. The Microgravity Acceleration Measurement System will record the quasi-steady microgravity environment, including the influences of aerodynamic drag, vehicle rotation, and venting effects. Measurement of the vibratory/transient regime, comprised of vehicle, crew, and equipment disturbances, will be accomplished by the Space Acceleration Measurement System-II. Due to the dynamic nature of the microgravity environment and its potential to influence sensitive experiments, Principal Investigators require distribution of microgravity acceleration data in a timely and straightforward fashion. In addition to this timely distribution of the data, long-term access to International Space Station microgravity environment acceleration data is required. The NASA Glenn Research Center's Principal Investigator Microgravity Services project will provide the means for real-time and post-experiment distribution of microgravity acceleration data to microgravity science Principal Investigators. Real-time distribution of microgravity environment acceleration data will be accomplished via the World Wide Web. Data packets from the Microgravity Acceleration Measurement System and the Space Acceleration Measurement System-II will be routed from onboard the International Space Station to the NASA Glenn Research Center's Telescience Support Center. Principal Investigator Microgravity Services' ground support equipment located at the Telescience Support Center will be capable of generating a standard suite of acceleration data displays, including various time domain and frequency domain options. These data displays will be updated in real time and will periodically update images available via the Principal Investigator Microgravity Services web page.

  19. Adaptive Numerical Algorithms in Space Weather Modeling

    NASA Technical Reports Server (NTRS)

    Toth, Gabor; vanderHolst, Bart; Sokolov, Igor V.; DeZeeuw, Darren; Gombosi, Tamas I.; Fang, Fang; Manchester, Ward B.; Meng, Xing; Nakib, Dalal; Powell, Kenneth G.; et al.

    2010-01-01

    Space weather describes the various processes in the Sun-Earth system that present danger to human health and technology. The goal of space weather forecasting is to provide an opportunity to mitigate these negative effects. Physics-based space weather modeling is characterized by disparate temporal and spatial scales as well as by different physics in different domains. A multi-physics system can be modeled by a software framework comprising several components. Each component corresponds to a physics domain, and each component is represented by one or more numerical models. The publicly available Space Weather Modeling Framework (SWMF) can execute and couple together several components distributed over a parallel machine in a flexible and efficient manner. The framework also allows resolving disparate spatial and temporal scales with independent spatial and temporal discretizations in the various models. Several of the computationally most expensive domains of the framework are modeled by the Block-Adaptive Tree Solar wind Roe Upwind Scheme (BATS-R-US) code that can solve various forms of the magnetohydrodynamics (MHD) equations, including Hall, semi-relativistic, multi-species and multi-fluid MHD, anisotropic pressure, radiative transport and heat conduction. Modeling disparate scales within BATS-R-US is achieved by a block-adaptive mesh both in Cartesian and generalized coordinates. Most recently we have created a new core for BATS-R-US: the Block-Adaptive Tree Library (BATL) that provides a general toolkit for creating, load balancing and message passing in a 1, 2 or 3 dimensional block-adaptive grid. We describe the algorithms of BATL and demonstrate its efficiency and scaling properties for various problems. BATS-R-US uses several time-integration schemes to address multiple time-scales: explicit time stepping with fixed or local time steps, partially steady-state evolution, point-implicit, semi-implicit, explicit/implicit, and fully implicit numerical schemes. Depending on the application, we find that different time stepping methods are optimal. Several of the time integration schemes exploit the block-based granularity of the grid structure. The framework and the adaptive algorithms enable physics-based space weather modeling and even forecasting.

  20. Inter-comparison of Dose Distributions Calculated by FLUKA, GEANT4, MCNP, and PHITS for Proton Therapy

    NASA Astrophysics Data System (ADS)

    Yang, Zi-Yi; Tsai, Pi-En; Lee, Shao-Chun; Liu, Yen-Chiang; Chen, Chin-Cheng; Sato, Tatsuhiko; Sheu, Rong-Jiun

    2017-09-01

    The dose distributions from proton pencil beam scanning were calculated by FLUKA, GEANT4, MCNP, and PHITS in order to investigate their applicability in proton radiotherapy. The first case studied was the integrated depth dose curves (IDDCs) from a 100- and a 226-MeV proton pencil beam, respectively, impinging on a water phantom. The calculated IDDCs agree with each other as long as each code employs 75 eV for the ionization potential of water. The second case considered the same conditions as the first but with proton energies in a Gaussian distribution. The comparison to the measurement indicates that the inter-code differences may be due not only to the different stopping powers but also to the different nuclear physics models. How the physics parameter settings affect the computation time is also discussed. In the third case, the applicability of each code for pencil beam scanning was confirmed by delivering a uniform volumetric dose distribution based on the treatment plan; the results showed general agreement among the codes, the treatment plan, and the measurement, except for some deviations in the penumbra region. This study has demonstrated that the selected codes are all capable of performing dose calculations for therapeutic scanning proton beams with proper physics settings.

  1. SpeciesGeoCoder: Fast Categorization of Species Occurrences for Analyses of Biodiversity, Biogeography, Ecology, and Evolution.

    PubMed

    Töpel, Mats; Zizka, Alexander; Calió, Maria Fernanda; Scharn, Ruud; Silvestro, Daniele; Antonelli, Alexandre

    2017-03-01

    Understanding the patterns and processes underlying the uneven distribution of biodiversity across space constitutes a major scientific challenge in systematic biology and biogeography, which largely relies on effectively mapping and making sense of rapidly increasing species occurrence data. There is thus an urgent need for making the process of coding species into spatial units faster, automated, transparent, and reproducible. Here we present SpeciesGeoCoder, an open-source software package written in Python and R, that allows for easy coding of species into user-defined operational units. These units may be of any size and be purely spatial (i.e., polygons) such as countries and states, conservation areas, biomes, islands, biodiversity hotspots, and areas of endemism, but may also include elevation ranges. This flexibility allows scoring species into complex categories, such as those encountered in topographically and ecologically heterogeneous landscapes. In addition, SpeciesGeoCoder can be used to facilitate sorting and cleaning of occurrence data obtained from online databases, and for testing the impact of incorrect identification of specimens on the spatial coding of species. The various outputs of SpeciesGeoCoder include quantitative biodiversity statistics, global and local distribution maps, and files that can be used directly in many phylogeny-based applications for ancestral range reconstruction, investigations of biome evolution, and other comparative methods. Our simulations indicate that even datasets containing hundreds of millions of records can be analyzed in a relatively short time using a standard computer. We exemplify the use of SpeciesGeoCoder by inferring the historical dispersal of birds across the Isthmus of Panama, showing that lowland species crossed the Isthmus about twice as frequently as montane species, with a marked increase in the number of dispersals during the last 10 million years. [ancestral area reconstruction; biodiversity patterns; ecology; evolution; point in polygon; species distribution data.] © The Author(s) 2016. Published by Oxford University Press, on behalf of the Society of Systematic Biologists.
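
    The core point-in-polygon operation that underlies this kind of spatial coding can be sketched with the Python shapely library; the polygon and occurrence records here are toy stand-ins (SpeciesGeoCoder itself adds elevation ranges, data cleaning, and batch performance on top of this primitive):

        from shapely.geometry import Point, Polygon

        # Toy operational unit and occurrence records (lon, lat).
        unit = Polygon([(-80, 0), (-75, 0), (-75, 10), (-80, 10)])
        occurrences = {
            "species_A": (-77.0, 4.5),
            "species_B": (-70.0, -10.0),
        }

        # Code each species as present/absent in the operational unit.
        coded = {
            species: unit.contains(Point(lon, lat))
            for species, (lon, lat) in occurrences.items()
        }
        print(coded)  # {'species_A': True, 'species_B': False}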

  2. Monte Carlo MCNP-4B-based absorbed dose distribution estimates for patient-specific dosimetry.

    PubMed

    Yoriyaz, H; Stabin, M G; dos Santos, A

    2001-04-01

    This study was intended to verify the capability of the Monte Carlo MCNP-4B code to evaluate spatial dose distributions based on information gathered from CT or SPECT. A new three-dimensional (3D) dose calculation approach for internal emitter use in radioimmunotherapy (RIT) was developed using the Monte Carlo MCNP-4B code as the photon and electron transport engine. It was shown that the MCNP-4B computer code can be used with voxel-based anatomic and physiologic data to provide 3D dose distributions. This study showed that the MCNP-4B code can be used to develop a treatment planning system that will provide such information in a timely manner, if dose reporting is suitably optimized. If each organ is divided into small regions where the average energy deposition is calculated with a typical volume of 0.4 cm(3), regional dose distributions can be provided with reasonable central processing unit times (on the order of 12-24 h on a 200-MHz personal computer or modest workstation). Further efforts to provide semiautomated region identification (segmentation) and improvement of marrow dose calculations are needed to supply a complete system for RIT. It is envisioned that all such efforts will continue to develop and that internal dose calculations may soon be brought to a similar level of accuracy, detail, and robustness as is commonly expected in external dose treatment planning. For this study we developed a code with a user-friendly interface that works on several nuclear medicine imaging platforms and provides timely patient-specific dose information to the physician and medical physicist. Future therapy with internal emitters should use a 3D dose calculation approach, which represents a significant advance over dose information provided by the standard geometric phantoms used for more than 20 y (which permit reporting of only average organ doses for certain standardized individuals).

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    A. Alfonsi; C. Rabiti; D. Mandelli

    The Reactor Analysis and Virtual control ENvironment (RAVEN) code is a software tool that acts as the control logic driver and post-processing engine for the newly developed thermal-hydraulic code RELAP-7. RAVEN is now a multi-purpose Probabilistic Risk Assessment (PRA) software framework that allows dispatching different functionalities: deriving and actuating the control logic required to simulate the plant control system and operator actions (guided procedures), allowing on-line monitoring/controlling in the phase space; performing both Monte Carlo sampling of randomly distributed events and Dynamic Event Tree based analysis; and facilitating the input/output handling through a Graphical User Interface (GUI) and a post-processing data mining module.

  4. Versatile fusion source integrator AFSI for fast ion and neutron studies in fusion devices

    NASA Astrophysics Data System (ADS)

    Sirén, Paula; Varje, Jari; Äkäslompolo, Simppa; Asunta, Otto; Giroud, Carine; Kurki-Suonio, Taina; Weisen, Henri; JET Contributors, The

    2018-01-01

    ASCOT Fusion Source Integrator AFSI, an efficient tool for calculating fusion reaction rates and characterizing the fusion products, based on arbitrary reactant distributions, has been developed and is reported in this paper. Calculation of reactor-relevant D-D, D-T and D-3He fusion reactions has been implemented based on the Bosch-Hale fusion cross sections. The reactions can be calculated between arbitrary particle populations, including Maxwellian thermal particles and minority energetic particles. Reaction rate profiles, energy spectra and full 4D phase space distributions can be calculated for the non-isotropic reaction products. The code is especially suitable for integrated modelling in self-consistent plasma physics simulations as well as in the Serpent neutronics calculation chain. Validation of the model has been performed for neutron measurements at the JET tokamak and the code has been applied to predictive simulations in ITER.
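
    The central quantity AFSI evaluates is the reaction rate between two arbitrary reactant distributions f_1 and f_2; schematically, with cross section σ,

        R = \iint f_1(\mathbf{v}_1)\, f_2(\mathbf{v}_2)\, \sigma(|\mathbf{v}_1 - \mathbf{v}_2|)\, |\mathbf{v}_1 - \mathbf{v}_2| \; d^3v_1\, d^3v_2,

    which for two Maxwellian populations reduces to R = n_1 n_2 ⟨σv⟩, with the reactivity ⟨σv⟩ taken from the Bosch-Hale parametrization cited above.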

  5. Software Certification and Software Certificate Management Systems

    NASA Technical Reports Server (NTRS)

    Denney, Ewen; Fischer, Bernd

    2005-01-01

    Incremental certification and re-certification of code as it is developed and modified is a prerequisite for applying modern, evolutionary development processes, which are especially relevant for NASA. For example, the Columbia Accident Investigation Board (CAIB) report concluded there is "the need for improved and uniform statistical sampling, audit, and certification processes". Also, re-certification time has been a limiting factor in making changes to Space Shuttle code close to launch time. This is likely to be an even bigger problem with the rapid turnaround required in developing NASA's replacement for the Space Shuttle, the Crew Exploration Vehicle (CEV). Hence, intelligent development processes are needed which place certification at the center of development. If certification tools provide useful information, such as estimated time and effort, they are more likely to be adopted. The ultimate impact of such a tool will be reduced effort and increased reliability.

  6. Hubble confirms cosmic acceleration with weak lensing

    NASA Image and Video Library

    2017-12-08

    NASA/ESA Hubble Release Date: March 25, 2010 This image shows a smoothed reconstruction of the total (mostly dark) matter distribution in the COSMOS field, created from data taken by the NASA/ESA Hubble Space Telescope and ground-based telescopes. It was inferred from the weak gravitational lensing distortions that are imprinted onto the shapes of background galaxies. The colour coding indicates the distance of the foreground mass concentrations as gathered from the weak lensing effect. Structures shown in white, cyan, and green are typically closer to us than those indicated in orange and red. To improve the resolution of the map, data from galaxies both with and without redshift information were used. The new study presents the most comprehensive analysis of data from the COSMOS survey. The researchers have, for the first time ever, used Hubble and the natural "weak lenses" in space to characterise the accelerated expansion of the Universe. Credit: NASA, ESA, P. Simon (University of Bonn) and T. Schrabback (Leiden Observatory) To learn more about this image go to: www.spacetelescope.org/news/html/heic1005.html For more information about Goddard Space Flight Center go here: www.nasa.gov/centers/goddard/home/index.html

  7. Rigorous study of low-complexity adaptive space-time block-coded MIMO receivers in high-speed mode multiplexed fiber-optic transmission links using few-mode fibers

    NASA Astrophysics Data System (ADS)

    Weng, Yi; He, Xuan; Wang, Junyi; Pan, Zhongqi

    2017-01-01

    Spatial-division multiplexing (SDM) techniques have been proposed to increase the capacity of optical fiber transmission links by utilizing multicore fibers or few-mode fibers (FMF). The most challenging impairments of SDM-based long-haul optical links include modal dispersion and mode-dependent loss (MDL); MDL arises from inline component imperfections and breaks modal orthogonality, thus degrading the capacity of multiple-input-multiple-output (MIMO) receivers. To reduce MDL, optical approaches include mode scramblers and specialty fiber designs, yet these methods carry high cost and cannot completely remove the accumulated MDL in the link. Space-time trellis codes (STTC) have also been proposed to lessen MDL, but suffer from high complexity. In this work, we investigated the performance of a space-time block-coding (STBC) scheme to mitigate MDL in SDM-based optical communication by exploiting space and delay diversity, where the weight matrices of the frequency-domain equalization (FDE) were updated heuristically using a decision-directed recursive-least-squares (RLS) algorithm for convergence and channel estimation. The STBC was evaluated in a six-mode multiplexed system over 30-km FMF via 6×6 MIMO FDE, with modal gain offset 3 dB, core refractive index 1.49, and numerical aperture 0.5. Results show that the optical-signal-to-noise-ratio (OSNR) tolerance can be improved via STBC by approximately 3.1, 4.9, and 7.8 dB for QPSK, 16- and 64-QAM at their respective bit-error rates (BER) with minimum mean-square error (MMSE). We also evaluate the complexity optimization of the STBC decoding scheme with a zero-forcing decision-feedback (ZFDF) equalizer by shortening the coding slot length, which is robust to frequency-selective fading channels and can be scaled up for SDM systems with more dynamic channels.
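
    The space-time block-coding idea evaluated here is, in its simplest two-branch form, the Alamouti scheme; a numpy sketch of the encoder follows (a generic textbook construction, not the paper's six-mode 6×6 implementation):

        import numpy as np

        def alamouti_encode(symbols: np.ndarray) -> np.ndarray:
            """Map pairs of complex symbols (s1, s2) onto two branches and two
            time slots: slot 1 sends (s1, s2), slot 2 sends (-conj(s2), conj(s1))."""
            s1, s2 = symbols[0::2], symbols[1::2]
            slot1 = np.stack([s1, s2], axis=1)                      # (npairs, 2 branches)
            slot2 = np.stack([-np.conj(s2), np.conj(s1)], axis=1)
            return np.stack([slot1, slot2], axis=1)                 # (npairs, 2 slots, 2 branches)

        qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
        print(alamouti_encode(qpsk).shape)  # (2, 2, 2)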

  8. 29 CFR 4041.22 - Administration of plan during pendency of termination process.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... out the normal operations of the plan. During that time period, except as provided in paragraph (b) of... Code to receive the distribution; (2) The distribution is consistent with prior plan practice; and (3) The distribution is not reasonably expected to jeopardize the plan's sufficiency for plan benefits. ...

  9. DebriSat Hypervelocity Impact Test

    DTIC Science & Technology

    2015-08-01

    material. The foam was also color coded to assist in determining the location of various loose foam pieces found posttest. Details on the layer... pretest and posttest shots... spacecraft. One of the major hazards is hypervelocity impacts from uncontrolled, man-made space debris. Arnold Engineering Development Complex

  10. Modeling and Simulations on the Effects of Shortwave Energy on Micropartile and Nanoparticle Filled Composites

    DTIC Science & Technology

    2014-06-01

    Naval Postgraduate School thesis, Monterey, California. Approved for public release; distribution is unlimited.

  11. Remote sensing and modeling of lightning caused long recovery events within the lower ionosphere using VLF/LF radio wave propagation

    NASA Astrophysics Data System (ADS)

    Schmitter, E. D.

    2014-11-01

    On 4 November 2012 at 3:04:27 UT a strong lightning stroke in the middle of the North Sea affected the propagation conditions of the VLF/LF transmitter radio signals from NRK (Iceland, 37.5 kHz) and GBZ (UK, 19.58 kHz) received at 52.46° N, 8° E (NW Germany). The amplitude and phase dips show a recovery time of 6-12 min, pointing to a LOng Recovery Early VLF (LORE) event. Clear assignment of the causative return stroke in space and time was possible with data from the WWLLN (Worldwide Lightning Location Network). Based on a return stroke current model, the electric field is calculated and an excess electron density distribution, which decays over time in the lower ionosphere, is derived. Ionization, attachment and recombination processes are modeled in detail. Entering the electron density distribution into VLF/LF radio wave propagation calculations using the LWPC (Long Wavelength Propagation Capability) code makes it possible to model the VLF/LF amplitude and phase behavior by adjusting the return stroke current moment. The results endorse and quantify the conception of lower-ionosphere EMP heating by strong, but not necessarily extremely strong, return strokes of both polarities.

  12. New ATCA, ALMA and VISIR observations of the candidate LBV SK -67 266 (S61): the nebular mass from modelling 3D density distributions

    NASA Astrophysics Data System (ADS)

    Agliozzo, C.; Nikutta, R.; Pignata, G.; Phillips, N. M.; Ingallinera, A.; Buemi, C.; Umana, G.; Leto, P.; Trigilio, C.; Noriega-Crespo, A.; Paladini, R.; Bufano, F.; Cavallaro, F.

    2017-04-01

    We present new observations of the nebula around the Magellanic candidate Luminous Blue Variable S61. These comprise high-resolution data acquired with the Australia Telescope Compact Array (ATCA), the Atacama Large Millimetre/Submillimetre Array (ALMA), and the VLT Imager and Spectrometer for mid Infrared (VISIR) at the Very Large Telescope. The nebula was detected only in the radio, up to 17 GHz. The 17 GHz ATCA map, with 0.8 arcsec resolution, allowed a morphological comparison with the Hα Hubble Space Telescope image. The radio nebula resembles a spherical shell, as in the optical. The spectral index map indicates that the radio emission is due to free-free transitions in the ionized, optically thin gas, but there are hints of inhomogeneities. We present our new public code RHOCUBE to model 3D density distributions and determine via Bayesian inference the nebula's geometric parameters. We applied the code to model the electron density distribution in the S61 nebula. We found that different distributions fit the data, but all of them converge to the same ionized mass, ~0.1 M⊙, which is an order of magnitude smaller than previous estimates. We show how the nebula models can be used to derive the mass-loss history with high temporal resolution. The nebula was probably formed through stellar winds, rather than eruptions. From the ALMA and VISIR non-detections, plus the derived extinction map, we deduce that the infrared emission observed by space telescopes must arise from extended, diffuse dust within the ionized region.

  13. Time Evolving Fission Chain Theory and Fast Neutron and Gamma-Ray Counting Distributions

    DOE PAGES

    Kim, K. S.; Nakae, L. F.; Prasad, M. K.; ...

    2015-11-01

    Here, we solve a simple theoretical model of time evolving fission chains due to Feynman that generalizes and asymptotically approaches the point model theory. The point model theory has been used to analyze thermal neutron counting data. This extension of the theory underlies fast counting data for both neutrons and gamma rays from metal systems. Fast neutron and gamma-ray counting is now possible using liquid scintillator arrays with nanosecond time resolution. For individual fission chains, the differential equations describing three correlated probability distributions are solved: the time-dependent internal neutron population, the accumulation of fissions in time, and the accumulation of leaked neutrons in time. Explicit analytic formulas are given for correlated moments of the time evolving chain populations. The equations for random time gate fast neutron and gamma-ray counting distributions, due to randomly initiated chains, are presented. Correlated moment equations are given for both random time gate and triggered time gate counting. Explicit formulas are given for all correlated moments up to triple order, for all combinations of correlated fast neutrons and gamma rays. The nonlinear differential equations for the probabilities of time-dependent fission chain populations have a remarkably simple Monte Carlo realization. A Monte Carlo code was developed for this theory and is shown to statistically realize the solutions to the fission chain theory probability distributions. Combined with random initiation of chains and detection of external quanta, the Monte Carlo code generates time-tagged data for neutron and gamma-ray counting, and from these data the counting distributions.
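
    The "remarkably simple Monte Carlo realization" mentioned above can be illustrated with a toy branching-process simulator in Python; the outcome probabilities and multiplicity distribution below are illustrative stand-ins, not the paper's parameters:

        import random

        # Illustrative per-neutron outcome probabilities and fission multiplicity.
        P_FISSION = 0.3
        P_LEAK = 0.5          # remainder (0.2) is capture
        NU_DIST = [(0, 0.05), (1, 0.2), (2, 0.35), (3, 0.3), (4, 0.1)]  # (nu, prob)

        def sample_nu() -> int:
            u, acc = random.random(), 0.0
            for nu, p in NU_DIST:
                acc += p
                if u < acc:
                    return nu
            return NU_DIST[-1][0]

        def chain_leakage() -> int:
            """Follow one fission chain started by a single neutron and
            return the number of neutrons that leak out."""
            population, leaked = 1, 0
            while population > 0:
                population -= 1
                u = random.random()
                if u < P_FISSION:
                    population += sample_nu()
                elif u < P_FISSION + P_LEAK:
                    leaked += 1
                # else: captured
            return leaked

        counts = [chain_leakage() for _ in range(10_000)]
        print(sum(counts) / len(counts))  # mean leaked neutrons per chain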

  14. Spectral and Structure Modeling of Low and High Mass Young Stars Using a Radiative Transfer Code

    NASA Astrophysics Data System (ADS)

    Robson Rocha, Will; Pilling, Sergio

    Spectroscopic data from space telescopes (ISO, Spitzer, Herschel) show that in addition to dust grains (e.g. silicates), frozen molecular species (astrophysical ices, such as H_2O, CO, CO_2, CH_3OH) are also present in circumstellar environments. In this work we present a study of the modeling of low and high mass young stellar objects (YSOs), in which we highlight the importance of using astrophysical ices processed by the radiation (UV, cosmic rays) coming from stars in the formation process. This is important for characterizing the physicochemical evolution of the ices distributed through the protostellar disk and its envelope in some situations. To perform this analysis, we gathered (i) observational data from the Infrared Space Observatory (ISO) related to the low mass protostar Elias29 and the high mass protostar W33A, (ii) absorbance experimental data in the infrared spectral range used to determine the optical constants of the materials observed around these objects, and (iii) a powerful radiative transfer code to simulate the astrophysical environment (RADMC-3D, Dullemond et al. 2012). Briefly, the radiative transfer calculation of the YSOs was done employing the RADMC-3D code. The model outputs were the spectral energy distribution and theoretical images at different wavelengths of the studied objects. The functionality of this code is based on the Monte Carlo methodology in addition to Mie theory for the interaction between radiation and matter. The observational data from different space telescopes were used as a reference for comparison with the modeled data. The optical constants in the infrared, used as input in the models, were calculated directly from absorbance data obtained in the laboratory for both unprocessed and processed simulated interstellar samples by using the NKABS code (Rocha & Pilling 2014). We show from this study that some absorption bands in the infrared observed in the spectra of Elias29 and W33A can arise after the ices around the protostars are processed by the radiation coming from the central object. In addition, we were also able to compare the observational data for these two objects with those obtained in the modeling. The authors would like to thank the agency FAPESP (JP#2009/18304-0 and PHD#2013/07657-5).

  15. Improve load balancing and coding efficiency of tiles in high efficiency video coding by adaptive tile boundary

    NASA Astrophysics Data System (ADS)

    Chan, Chia-Hsin; Tu, Chun-Chuan; Tsai, Wen-Jiin

    2017-01-01

    High efficiency video coding (HEVC) not only improves coding efficiency drastically compared to the well-known H.264/AVC but also introduces coding tools for parallel processing, one of which is tiles. Tile partitioning is allowed to be arbitrary in HEVC, but how to decide tile boundaries remains an open issue. An adaptive tile boundary (ATB) method is proposed to select a better tile partitioning to improve load balancing (ATB-LoadB) and coding efficiency (ATB-Gain) with a unified scheme. Experimental results show that, compared to ordinary uniform-space partitioning, the proposed ATB can save up to 17.65% of encoding time in parallel encoding scenarios and can reduce total bit rates by up to 0.8% for coding efficiency.
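
    The load-balancing half of the problem can be illustrated compactly (this is a generic sketch, not the paper's ATB algorithm): given per-column cost estimates, such as CTU encoding times from a previous frame, a single vertical tile boundary is placed so that the slower of the two tiles is as cheap as possible.

        # Split CTU columns into two tiles, minimizing the slowest tile's load.
        def best_vertical_boundary(ctu_column_costs):
            total = sum(ctu_column_costs)
            best_split, best_makespan = 1, float("inf")
            left = 0.0
            for split in range(1, len(ctu_column_costs)):
                left += ctu_column_costs[split - 1]
                makespan = max(left, total - left)   # slowest of the two tiles
                if makespan < best_makespan:
                    best_split, best_makespan = split, makespan
            return best_split

        # Illustrative costs: boundary lands after column 2 (loads 14 vs 16).
        print(best_vertical_boundary([5, 9, 4, 4, 3, 5]))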

  16. High-Contrast Near-Infrared Imaging Polarimetry of the Protoplanetary Disk around RY Tau

    NASA Technical Reports Server (NTRS)

    Takami, Michihiro; Karr, Jennifer L.; Hashimoto, Jun; Kim, Hyosun; Wisniewski, John; Henning, Thomas; Grady, Carol; Kandori, Ryo; Hodapp, Klaus W.; Kudo, Tomoyuki; et al.

    2013-01-01

    We present near-infrared coronagraphic imaging polarimetry of RY Tau. The scattered light in the circumstellar environment was imaged in the H band at high resolution (approx. 0.05 arcsec) for the first time, using Subaru-HiCIAO. The observed polarized intensity (PI) distribution shows a butterfly-like distribution of bright emission with an angular scale similar to the disk observed at millimeter wavelengths. This distribution is offset toward the blueshifted jet, indicating the presence of a geometrically thick disk or a remnant envelope, and therefore the earliest stage of the Class II evolutionary phase. We perform comparisons between the observed PI distribution and disk models with (1) a full radiative transfer code, using the spectral energy distribution (SED) to constrain the disk parameters, and (2) monochromatic simulations of scattered light which explore a wide range of parameter space to constrain the disk and dust parameters. We show that these models cannot consistently explain the observed PI distribution, SED, and the viewing angle inferred by millimeter interferometry. We suggest that the scattered light in the near-infrared is associated with an optically thin and geometrically thick layer above the disk surface, with the surface responsible for the infrared SED. Half of the scattered light and thermal radiation in this layer illuminates the disk surface, and this process may significantly affect the thermal structure of the disk.

  17. Incremental Parallelization of Non-Data-Parallel Programs Using the Charon Message-Passing Library

    NASA Technical Reports Server (NTRS)

    VanderWijngaart, Rob F.

    2000-01-01

    Message passing is among the most popular techniques for parallelizing scientific programs on distributed-memory architectures. The reasons for its success are wide availability (MPI), efficiency, and full tuning control provided to the programmer. A major drawback, however, is that incremental parallelization, as offered by compiler directives, is not generally possible, because all data structures have to be changed throughout the program simultaneously. Charon remedies this situation through mappings between distributed and non-distributed data. It allows breaking up the parallelization into small steps, guaranteeing correctness at every stage. Several tools are available to help convert legacy codes into high-performance message-passing programs. They usually target data-parallel applications, whose loops carrying most of the work can be distributed among all processors without much dependency analysis. Others do a full dependency analysis and then convert the code virtually automatically. Even more toolkits are available that aid construction from scratch of message passing programs. None, however, allows piecemeal translation of codes with complex data dependencies (i.e. non-data-parallel programs) into message passing codes. The Charon library (available in both C and Fortran) provides incremental parallelization capabilities by linking legacy code arrays with distributed arrays. During the conversion process, non-distributed and distributed arrays exist side by side, and simple mapping functions allow the programmer to switch between the two in any location in the program. Charon also provides wrapper functions that leave the structure of the legacy code intact, but that allow execution on truly distributed data. Finally, the library provides a rich set of communication functions that support virtually all patterns of remote data demands in realistic structured grid scientific programs, including transposition, nearest-neighbor communication, pipelining, gather/scatter, and redistribution. At the end of the conversion process most intermediate Charon function calls will have been removed, the non-distributed arrays will have been deleted, and virtually the only remaining Charon function calls are the high-level, highly optimized communications. Distribution of the data is under complete control of the programmer, although a wide range of useful distributions is easily available through predefined functions. A crucial aspect of the library is that it does not allocate space for distributed arrays, but accepts programmer-specified memory. This has two major consequences. First, codes parallelized using Charon do not suffer from encapsulation; user data is always directly accessible. This provides high efficiency, and also retains the possibility of using message passing directly for highly irregular communications. Second, non-distributed arrays can be interpreted as (trivial) distributions in the Charon sense, which allows them to be mapped to truly distributed arrays, and vice versa. This is the mechanism that enables incremental parallelization. In this paper we provide a brief introduction to the library and then focus on the actual steps in the parallelization process, using some representative examples from, among others, the NAS Parallel Benchmarks. We show how a complicated two-dimensional pipeline, the prototypical non-data-parallel algorithm, can be constructed with ease.
To demonstrate the flexibility of the library, we give examples of the stepwise, efficient parallel implementation of nonlocal boundary conditions common in aircraft simulations, as well as the construction of the sequence of grids required for multigrid.
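
    The mapping idea can be sketched conceptually as follows (run under mpirun). The real Charon library is C/Fortran and its API is not shown in the abstract, so the two helper functions here are hypothetical stand-ins: the replicated legacy array and its distributed counterpart coexist, and one loop at a time is switched to the distributed view while the program stays correct.

        import numpy as np
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()

        def to_distributed(full):
            """Map a replicated (non-distributed) array to a block-distributed one."""
            chunks = np.array_split(full, size) if rank == 0 else None
            return comm.scatter(chunks, root=0)

        def to_replicated(local):
            """Inverse mapping: gather the blocks back into a replicated array."""
            chunks = comm.gather(local, root=0)
            full = np.concatenate(chunks) if rank == 0 else None
            return comm.bcast(full, root=0)

        a = np.arange(16.0)            # legacy serial code sees this array
        local = to_distributed(a)
        local = 2.0 * local            # the one loop converted so far
        a = to_replicated(local)       # rest of the legacy code is untouched
        if rank == 0:
            print(a)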

  18. INDEX TO 16MM EDUCATIONAL FILMS.

    ERIC Educational Resources Information Center

    University of Southern California, Los Angeles. National Information Center for Educational Media.

    SIXTEEN MILLIMETER EDUCATIONAL FILMS ARE LISTED WITH TITLE, DESCRIPTION, TIME, COLOR/BLACK AND WHITE, PRODUCER CODE NAME, DISTRIBUTOR CODE NAME, AND DATE OF PRODUCTION. FILMS ARE LISTED IN TWO WAYS--WITH TITLE ONLY BY SUBJECT IN A SUBJECT MATTER SECTION WHICH HAS AN OUTLINE AND INDEX, AND WITH ALL DATA IN A SECTION WHICH LISTS ALL FILMS…

  19. An integrated runtime and compile-time approach for parallelizing structured and block structured applications

    NASA Technical Reports Server (NTRS)

    Agrawal, Gagan; Sussman, Alan; Saltz, Joel

    1993-01-01

    Scientific and engineering applications often involve structured meshes. These meshes may be nested (for multigrid codes) and/or irregularly coupled (called multiblock or irregularly coupled regular mesh problems). A combined runtime and compile-time approach for parallelizing these applications on distributed memory parallel machines in an efficient and machine-independent fashion is described. A runtime library which can be used to port these applications to distributed memory machines was designed and implemented. The library is currently implemented on several different systems. To further ease the task of application programmers, methods were developed for integrating this runtime library with compilers for HPF-like parallel programming languages. How this runtime library was integrated with the Fortran 90D compiler being developed at Syracuse University is discussed. Experimental results demonstrating the efficacy of our approach are presented. We experimented with a multiblock Navier-Stokes solver template and a multigrid code. Our experimental results show that our primitives have low runtime communication overheads. Further, the compiler-parallelized codes perform within 20 percent of the code parallelized by manually inserting calls to the runtime library.

  20. A new approach to simulating collisionless dark matter fluids

    NASA Astrophysics Data System (ADS)

    Hahn, Oliver; Abel, Tom; Kaehler, Ralf

    2013-09-01

    Recently, we have shown how current cosmological N-body codes already follow the fine grained phase-space information of the dark matter fluid. Using a tetrahedral tessellation of the three-dimensional manifold that describes perfectly cold fluids in six-dimensional phase space, the phase-space distribution function can be followed throughout the simulation. This allows one to project the distribution function into configuration space to obtain highly accurate densities, velocities and velocity dispersions. Here, we exploit this technique to show first steps on how to devise an improved particle-mesh technique. At its heart, the new method thus relies on a piecewise linear approximation of the phase-space distribution function rather than the usual particle discretization. We use pseudo-particles that approximate the masses of the tetrahedral cells up to quadrupolar order as the locations for cloud-in-cell (CIC) deposit instead of the particle locations themselves as in standard CIC deposit. We demonstrate that this modification already gives much improved stability and more accurate dynamics of the collisionless dark matter fluid at high force and low mass resolution. We demonstrate the validity and advantages of this method with various test problems as well as hot/warm dark matter simulations which have been known to exhibit artificial fragmentation. This completely unphysical behaviour is much reduced in the new approach. The current limitations of our approach are discussed in detail and future improvements are outlined.
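
    For reference, a standard 1D cloud-in-cell deposit looks as follows (a generic textbook sketch, not the paper's code; the paper's modification replaces the raw particle positions with pseudo-particles that carry the tetrahedral-cell mass distribution up to quadrupolar order).

        import numpy as np

        def cic_deposit(positions, masses, n_cells, box_size):
            """Deposit particle masses onto a periodic 1D density mesh."""
            density = np.zeros(n_cells)
            dx = box_size / n_cells
            x = positions / dx - 0.5               # position in cell units
            i_left = np.floor(x).astype(int)
            frac_right = x - i_left                # weight shared with right neighbour
            np.add.at(density, i_left % n_cells, masses * (1.0 - frac_right))
            np.add.at(density, (i_left + 1) % n_cells, masses * frac_right)
            return density / dx

        rng = np.random.default_rng(1)
        rho = cic_deposit(rng.uniform(0.0, 1.0, 1000), np.full(1000, 1e-3), 64, 1.0)
        print(rho.sum() * (1.0 / 64))              # ~1.0: total mass is conserved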

  1. Phase space effects on fast ion distribution function modeling in tokamaks

    DOE PAGES

    Podesta, M.; Gorelenkova, M.; Fredrickson, E. D.; ...

    2016-04-14

    Here, integrated simulations of tokamak discharges typically rely on classical physics to model energetic particle (EP) dynamics. However, there are numerous cases in which energetic particles can suffer additional transport that is not classical in nature. Examples include transport by applied 3D magnetic perturbations and, more notably, by plasma instabilities. Focusing on the effects of instabilities, ad-hoc models can empirically reproduce increased transport, but the choice of transport coefficients is usually somewhat arbitrary. New approaches based on physics-based reduced models are being developed to address those issues in a simplified way, while retaining a more correct treatment of resonant wave-particle interactions. The kick model implemented in the tokamak transport code TRANSP is an example of such reduced models. It includes modifications of the EP distribution by instabilities in real and velocity space, retaining correlations between transport in energy and space typical of resonant EP transport. The relevance of EP phase space modifications by instabilities is first discussed in terms of the predicted fast ion distribution. Results are compared with those from a simple, ad-hoc diffusive model. It is then shown that the phase-space resolved model can also provide additional insight into important issues such as internal consistency of the simulations and mode stability through the analysis of the power exchanged between energetic particles and the instabilities.

  2. Nonlinear Real-Time Optical Signal Processing.

    DTIC Science & Technology

    1988-07-01

    Principal Investigator: B. K. Jenkins, Signal and Image Processing Institute, University of Southern California, Los Angeles. [The remainder of this record is an unreadable OCR fragment of the DTIC report documentation page; the legible summary states only that the report covers the period 1 July 1987 - 30 June 1988.]

  3. Experimental study of non-binary LDPC coding for long-haul coherent optical QPSK transmissions.

    PubMed

    Zhang, Shaoliang; Arabaci, Murat; Yaman, Fatih; Djordjevic, Ivan B; Xu, Lei; Wang, Ting; Inada, Yoshihisa; Ogata, Takaaki; Aoki, Yasuhiro

    2011-09-26

    The performance of a rate-0.8 4-ary LDPC code has been studied in a 50 GHz-spaced 40 Gb/s DWDM system with PDM-QPSK modulation. A net effective coding gain of 10 dB is obtained at a BER of 10^-6. With the aid of time-interleaving polarization multiplexing and MAP detection, 10,560 km transmission over legacy dispersion managed fiber is achieved without any countable errors. The proposed nonbinary quasi-cyclic LDPC code achieves an uncoded BER threshold at 4×10^-2. Potential issues like phase ambiguity and coding length are also discussed when implementing LDPC in current coherent optical systems. © 2011 Optical Society of America

  4. Neutron resonance parameters of 68Zn+n and statistical distributions of level spacings and widths

    NASA Astrophysics Data System (ADS)

    Garg, J. B.; Tikku, V. K.; Harvey, J. A.; Halperin, J.; Macklin, R. L.

    1982-04-01

    Discrete values of the parameters (E0, gΓn, Jπ, Γγ, etc.) of the resonances in the reaction 68Zn + n have been determined from total cross section measurements from a few keV to 380 keV, with a nominal resolution of 0.07 ns/m at the highest energy, and from capture cross section measurements up to 130 keV, using the pulsed neutron time-of-flight technique with a neutron burst width of 5 ns. The cross section data were analyzed to determine the parameters of the resonances using R-matrix multilevel codes. These results have provided the following average quantities: S0 = (2.01+/-0.34), S1 = (0.56+/-0.05), and S2 = (0.2+/-0.1), in units of 10^-4; D0 = (5.56+/-0.43) keV and D1 = (1.63+/-0.14) keV. From these measurements we have also determined the average radiation widths (Γγ)_{l=0} = (302+/-60) meV and (Γγ)_{l=1} = (157+/-7) meV. The investigation of the statistical properties of neutron reduced widths and level spacings showed excellent agreement of the data with the Porter-Thomas distribution for s- and p-wave neutron widths, and with the Dyson-Mehta Δ3 statistic and the Wigner distribution for the s-wave level spacing distribution. In addition, a correlation coefficient of ρ = 0.50+/-0.10 between Γn^0 and Γγ has been observed for s-wave resonances. The value of <σ(n,γ)> at (30+/-10) keV is 19.2 mb. NUCLEAR REACTIONS: 68Zn(n,n), 68Zn(n,γ), E = few keV to 380 keV and up to 130 keV, respectively. Measured total and capture cross sections versus neutron energy; deduced resonance parameters E0, Jπ, gΓn, Γγ, S0, S1, S2, D0, D1.

  5. The RMI Space Weather and Navigation Systems (SWANS) Project

    NASA Astrophysics Data System (ADS)

    Warnant, Rene; Lejeune, Sandrine; Wautelet, Gilles; Spits, Justine; Stegen, Koen; Stankov, Stan

    The SWANS (Space Weather and Navigation Systems) research and development project (http://swans.meteo.be) is an initiative of the Royal Meteorological Institute (RMI) under the auspices of the Belgian Solar-Terrestrial Centre of Excellence (STCE). The RMI SWANS objectives are: research on space weather and its effects on GNSS applications; permanent monitoring of the local/regional geomagnetic and ionospheric activity; and development/operation of relevant nowcast, forecast, and alert services to help professional GNSS/GALILEO users in mitigating space weather effects. Several SWANS developments have already been implemented and are available for use. The K-LOGIC (Local Operational Geomagnetic Index K Calculation) system is a nowcast system based on a fully automated computer procedure for real-time digital magnetogram data acquisition, data screening, and calculating the local geomagnetic K index. Simultaneously, the planetary Kp index is estimated from solar wind measurements, thus adding to the service reliability and providing forecast capabilities as well. A novel hybrid empirical model, based on these ground- and space-based observations, has been implemented for nowcasting and forecasting the geomagnetic index, issuing also alerts whenever storm-level activity is indicated. A very important feature of the nowcast/forecast system is the strict control on the data input and processing, allowing for an immediate assessment of the output quality. The purpose of the LIEDR (Local Ionospheric Electron Density Reconstruction) system is to acquire and process data from simultaneous ground-based GNSS TEC and digital ionosonde measurements, and subsequently to deduce the vertical electron density distribution. A key module is the real-time estimation of the ionospheric slab thickness, offering additional information on the local ionospheric dynamics. The RTK (Real Time Kinematic) status mapping provides a quick look at the small-scale ionospheric effects on the RTK precision for several GPS stations in Belgium. The service assesses the effect of small-scale ionospheric irregularities by monitoring the high-frequency TEC rate of change at any given station. This assessment results in a (colour) code assigned to each station, ranging from "quiet" (green) to "extreme" (red) and referring to the local ionospheric conditions. Alerts via e-mail are sent to subscribed users when disturbed conditions are observed. SoDIPE (Software for Determining the Ionospheric Positioning Error) estimates the positioning error due to the ionospheric conditions only (called "ionospheric error") in high-precision positioning applications (RTK in particular). For each of the Belgian Active Geodetic Network (AGN) baselines, SoDIPE computes the ionospheric error and its median value (every 15 minutes). Again, a (colour) code is assigned to each baseline, ranging from "nominal" (green) to "extreme" (red) error level. Finally, all available baselines (drawn in colour corresponding to error level) are displayed on a map of Belgium. The future SWANS work will focus on regional ionospheric monitoring and developing various other nowcast and forecast services.

  6. Research on lossless compression of true color RGB image with low time and space complexity

    NASA Astrophysics Data System (ADS)

    Pan, ShuLin; Xie, ChengJun; Xu, Lin

    2008-12-01

    By eliminating correlated redundancy in space and energy with a DWT lifting scheme and reducing image complexity with an algebraic transform among the RGB components, this paper proposes an improved Rice coding algorithm, including an enumerating DWT lifting scheme that fits images of any size through image renormalization. The algorithm codes and decodes the pixels of an image without backtracking. It supports LOCO-I and can also be applied to a coder/decoder. Simulation analysis indicates that the proposed method achieves high image compression: compared with Lossless-JPG, PNG(Microsoft), PNG(Rene), PNG(Photoshop), PNG(Anix PicViewer), PNG(ACDSee), PNG(Ulead photo Explorer), JPEG2000, PNG(KoDa Inc), SPIHT and JPEG-LS, the lossless compression ratio improves by 45%, 29%, 25%, 21%, 19%, 17%, 16%, 15%, 11%, 10.5% and 10%, respectively, on 24 RGB images provided by KoDa Inc. On a Pentium IV (2.20 GHz CPU, 256 MB RAM), the proposed coder is about 21 times faster than SPIHT with an efficiency gain of about 166%, and the decoder is about 17 times faster than SPIHT with an efficiency gain of about 128%.
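
    To illustrate the kind of transform involved (a generic integer Haar-style lifting pair, not the paper's enumerating scheme), one predict/update lifting step and its exact inverse can be written as follows; exact invertibility with integer arithmetic is what makes lifting suitable for lossless coding.

        import numpy as np

        def lifting_forward(x):
            """Split an even-length integer signal into approximation and detail bands."""
            even, odd = x[0::2].astype(int), x[1::2].astype(int)
            detail = odd - even                  # predict step: odd from even
            approx = even + detail // 2          # update step: preserve the mean
            return approx, detail

        def lifting_inverse(approx, detail):
            even = approx - detail // 2
            odd = detail + even
            out = np.empty(even.size + odd.size, dtype=int)
            out[0::2], out[1::2] = even, odd
            return out

        x = np.array([10, 12, 14, 13, 9, 8, 7, 11])
        a, d = lifting_forward(x)
        assert np.array_equal(lifting_inverse(a, d), x)   # exactly invertible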

  7. Hybrid Vlasov simulations for alpha particles heating in the solar wind

    NASA Astrophysics Data System (ADS)

    Perrone, Denise; Valentini, Francesco; Veltri, Pierluigi

    2011-06-01

    Heating and acceleration of heavy ions in the solar wind and corona represent a long-standing theoretical problem in space physics and are distinct experimental signatures of kinetic processes occurring in collisionless plasmas. To address this problem, we propose the use of a low-noise hybrid-Vlasov code in a four-dimensional phase space (1D in physical space and 3D in velocity space) configuration. We trigger a turbulent cascade by injecting energy at large wavelengths and analyze the role of kinetic effects during the development of the energy spectra. Following the evolution of both the proton and alpha-particle distribution functions shows that both ion species depart significantly from Maxwellian equilibrium, with the appearance of beams of particles accelerated in the direction parallel to the background magnetic field.

  8. Verification and Validation: High Charge and Energy (HZE) Transport Codes and Future Development

    NASA Technical Reports Server (NTRS)

    Wilson, John W.; Tripathi, Ram K.; Mertens, Christopher J.; Blattnig, Steve R.; Clowdsley, Martha S.; Cucinotta, Francis A.; Tweed, John; Heinbockel, John H.; Walker, Steven A.; Nealy, John E.

    2005-01-01

    In the present paper, we give the formalism for further developing a fully three-dimensional HZETRN code using marching procedures; the development of a new Green's function code is also discussed. The final Green's function code is capable of validation not only in the space environment but also in ground-based laboratories with directed beams of ions of specific energy, characterized with detailed diagnostic particle spectrometer devices. Special emphasis is given to verification of the computational procedures and validation of the resultant computational model using laboratory and spaceflight measurements. Due to historical requirements, two parallel development paths for computational model implementation using marching procedures and Green's function techniques are followed. A new version of the HZETRN code capable of simulating HZE ions with either laboratory or space boundary conditions is under development. Validation of computational models at this time is particularly important for President Bush's Initiative to develop infrastructure for human exploration, with a first target demonstration of the Crew Exploration Vehicle (CEV) in low Earth orbit in 2008.

  9. Cardiac Monitor

    NASA Technical Reports Server (NTRS)

    1996-01-01

    Under contract to Johnson Space Center, the University of Minnesota developed the concept of impedance cardiography as an alternative to thermodilution to assess astronaut heart function in flight. NASA then contracted Space Labs, Inc. to construct miniature space units based on this technology. Several companies then launched their own impedance cardiography products, including Renaissance Technologies, which manufactures the IQ System. The IQ System is 5 to 17 times cheaper than thermodilution, and features the signal processing technology called TFD (Time Frequency Distribution). TFD provides a three-dimensional distribution of the blood circulation force signals, allowing visualization of changes in power, frequency, and time.

  10. Operationalizing the Space Weather Modeling Framework: Challenges and Resolutions

    NASA Astrophysics Data System (ADS)

    Welling, D. T.; Gombosi, T. I.; Toth, G.; Singer, H. J.; Millward, G. H.; Balch, C. C.; Cash, M. D.

    2016-12-01

    Predicting ground-based magnetic perturbations is a critical step towards specifying and predicting geomagnetically induced currents (GICs) in high voltage transmission lines. Currently, the Space Weather Modeling Framework (SWMF), a flexible modeling framework for simulating the multi-scale space environment, is being transitioned from research to operational use (R2O) by NOAA's Space Weather Prediction Center. Upon completion of this transition, the SWMF will provide localized time-varying magnetic field (dB/dt) predictions using real-time solar wind observations from L1 and the F10.7 proxy for EUV as model input. This presentation chronicles the challenges encountered during the R2O transition of the SWMF. Because operations relies on frequent calculations of global surface dB/dt, new optimizations were required to keep the model running faster than real time. Additionally, several singular situations arose during the 30-day robustness test that required immediate attention. Solutions and strategies for overcoming these issues will be presented. This includes new failsafe options for code execution, new physics and coupling parameters, and the development of an automated validation suite that allows us to monitor performance with code evolution. Finally, the operations-to-research (O2R) impact on SWMF-related research is presented. The lessons learned from this work are valuable and instructive for the space weather community as further R2O progress is made.

  11. Learning dictionaries of sparse codes of 3D movements of body joints for real-time human activity understanding.

    PubMed

    Qi, Jin; Yang, Zhiyong

    2014-01-01

    Real-time human activity recognition is essential for human-robot interactions for assisted healthy independent living. Most previous work in this area is performed on traditional two-dimensional (2D) videos, and both global and local methods have been used. Since 2D videos are sensitive to changes in lighting condition, view angle, and scale, researchers have begun to explore applications of 3D information in human activity understanding in recent years. Unfortunately, features that work well on 2D videos usually do not perform well on 3D videos, and there is no consensus on which 3D features should be used. Here we propose a model of human activity recognition based on 3D movements of body joints. Our method has three steps: learning dictionaries of sparse codes of 3D movements of joints, sparse coding, and classification. In the first step, space-time volumes of 3D movements of body joints are obtained via dense sampling, and independent component analysis is then performed to construct a dictionary of sparse codes for each activity. In the second step, the space-time volumes are projected onto the dictionaries and a set of sparse histograms of the projection coefficients is constructed as feature representations of the activities. Finally, the sparse histograms are used as inputs to a support vector machine to recognize human activities. We tested this model on three databases of human activities and found that it outperforms the state-of-the-art algorithms. Thus, this model can be used for real-time human activity recognition in many applications.
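
    A minimal sketch of the dictionary-learning and projection steps follows. The data shapes, histogram bin count, and the random placeholder arrays standing in for densely sampled joint-movement volumes are all assumptions for illustration; the actual feature pipeline of the paper may differ in detail.

        import numpy as np
        from sklearn.decomposition import FastICA

        rng = np.random.default_rng(0)
        volumes = rng.standard_normal((500, 90))   # placeholder: 500 flattened
                                                   # space-time joint volumes

        ica = FastICA(n_components=20, random_state=0)
        ica.fit(volumes)
        dictionary = ica.components_               # 20 learned basis functions

        # Feature for one activity clip: histogram of projection coefficients.
        clip = rng.standard_normal((30, 90))       # 30 volumes from one clip
        coeffs = clip @ dictionary.T               # project onto the dictionary
        feature = np.histogram(coeffs, bins=16)[0] # would feed an SVM classifier
        print(feature.shape)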

  12. Monte Carlo simulation of X-ray imaging and spectroscopy experiments using quadric geometry and variance reduction techniques

    NASA Astrophysics Data System (ADS)

    Golosio, Bruno; Schoonjans, Tom; Brunetti, Antonio; Oliva, Piernicola; Masala, Giovanni Luca

    2014-03-01

    The simulation of X-ray imaging experiments is often performed using deterministic codes, which can be relatively fast and easy to use. However, such codes are generally not suitable for the simulation of even slightly more complex experimental conditions, involving, for instance, first-order or higher-order scattering, X-ray fluorescence emissions, or more complex geometries, particularly for experiments that combine spatial resolution with spectral information. In such cases, simulations are often performed using codes based on the Monte Carlo method. In a simple Monte Carlo approach, the interaction position of an X-ray photon and the state of the photon after an interaction are obtained simply according to the theoretical probability distributions. This approach may be quite inefficient because the final channels of interest may include only a limited region of space or photons produced by a rare interaction, e.g., fluorescent emission from elements with very low concentrations. In the field of X-ray fluorescence spectroscopy, this problem has been solved by combining the Monte Carlo method with variance reduction techniques, which can reduce the computation time by several orders of magnitude. In this work, we present a C++ code for the general simulation of X-ray imaging and spectroscopy experiments, based on the application of the Monte Carlo method in combination with variance reduction techniques, with a description of sample geometry based on quadric surfaces. We describe the benefits of the object-oriented approach in terms of code maintenance, the flexibility of the program for the simulation of different experimental conditions and the possibility of easily adding new modules. Sample applications in the fields of X-ray imaging and X-ray spectroscopy are discussed. Catalogue identifier: AERO_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AERO_v1_0.html Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland Licensing provisions: GNU General Public License version 3 No. of lines in distributed program, including test data, etc.: 83617 No. of bytes in distributed program, including test data, etc.: 1038160 Distribution format: tar.gz Programming language: C++. Computer: Tested on several PCs and on Mac. Operating system: Linux, Mac OS X, Windows (native and cygwin). RAM: It is dependent on the input data but usually between 1 and 10 MB. Classification: 2.5, 21.1. External routines: XrayLib (https://github.com/tschoonj/xraylib/wiki) Nature of problem: Simulation of a wide range of X-ray imaging and spectroscopy experiments using different types of sources and detectors. Solution method: XRMC is a versatile program that is useful for the simulation of a wide range of X-ray imaging and spectroscopy experiments. It enables the simulation of monochromatic and polychromatic X-ray sources, with unpolarised or partially/completely polarised radiation. Single-element detectors as well as two-dimensional pixel detectors can be used in the simulations, with several acquisition options. In the current version of the program, the sample is modelled by combining convex three-dimensional objects demarcated by quadric surfaces, such as planes, ellipsoids and cylinders. The Monte Carlo approach makes XRMC able to accurately simulate X-ray photon transport and interactions with matter up to any order of interaction. 
The differential cross-sections and all other quantities related to the interaction processes (photoelectric absorption, fluorescence emission, elastic and inelastic scattering) are computed using the xraylib software library, which is currently the most complete and up-to-date software library for X-ray parameters. The use of variance reduction techniques makes XRMC able to reduce the simulation time by several orders of magnitude compared to other general-purpose Monte Carlo simulation programs. Running time: It is dependent on the complexity of the simulation. For the examples distributed with the code, it ranges from less than 1 s to a few minutes.
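
    Why variance reduction pays off can be shown with a toy case of interaction forcing in a slab (illustrative parameters, not XRMC source code): analog sampling wastes most histories on photons that never interact, while the forced estimator samples the interaction depth from the truncated exponential and carries the interaction probability as a statistical weight, so every history contributes to the tally.

        import numpy as np

        rng = np.random.default_rng(0)
        mu, L, n = 0.5, 1.0, 100000     # attenuation coeff (1/cm), slab depth (cm)

        # Analog sampling: many photons leave the slab without interacting.
        analog_depths = rng.exponential(1.0 / mu, n)
        analog_hits = analog_depths < L            # only these score

        # Forced collision: truncated-exponential depth plus a weight.
        u = rng.uniform(0.0, 1.0, n)
        p_interact = 1.0 - np.exp(-mu * L)
        forced_depths = -np.log(1.0 - u * p_interact) / mu   # always inside slab
        weights = np.full(n, p_interact)                     # zero-variance here

        print(analog_hits.mean(), weights.mean())  # both estimate p_interact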

  13. Space-to-Space Communications System

    NASA Technical Reports Server (NTRS)

    Tu, Kwei; Gaylor, Kent; Vitalpur, Sharada; Sham, Cathy

    1999-01-01

    The Space-to-Space Communications System (SSCS) is an Ultra High Frequency (UHF) Time-Division-Multiple-Access (TDMA) system designed, developed, and deployed by the NASA Johnson Space Center (JSC) to provide voice, command, telemetry, and data services in close proximity among three space elements: the International Space Station (ISS), the Space Shuttle Orbiter, and Extravehicular Mobility Units (EMUs). The SSCS consists of a family of three radios: the Space-to-Space Station Radio (SSSR), the Space-to-Space Orbiter Radio (SSOR), and the Space-to-Space Extravehicular Mobility Radio (SSER). The SSCS can support up to five such radios at a time; each user has its own time slot within which to transmit voice and data. Continuous Phase Frequency Shift Keying (CPFSK) carrier modulation with a burst data rate of 695 kbps and a frequency deviation of 486.5 kHz is employed by the system. Reed-Solomon (R-S) coding is also adopted to ensure data quality. In this paper, the SSCS system requirements, operational scenario, detailed system architecture and parameters, link acquisition strategy, and link performance analysis are presented and discussed.

  14. Real-space grids and the Octopus code as tools for the development of new simulation approaches for electronic systems

    NASA Astrophysics Data System (ADS)

    Andrade, Xavier; Strubbe, David; De Giovannini, Umberto; Larsen, Ask Hjorth; Oliveira, Micael J. T.; Alberdi-Rodriguez, Joseba; Varas, Alejandro; Theophilou, Iris; Helbig, Nicole; Verstraete, Matthieu J.; Stella, Lorenzo; Nogueira, Fernando; Aspuru-Guzik, Alán; Castro, Alberto; Marques, Miguel A. L.; Rubio, Angel

    Real-space grids are a powerful alternative for the simulation of electronic systems. One of the main advantages of the approach is the flexibility and simplicity of working directly in real space where the different fields are discretized on a grid, combined with competitive numerical performance and great potential for parallelization. These properties constitute a great advantage at the time of implementing and testing new physical models. Based on our experience with the Octopus code, in this article we discuss how the real-space approach has allowed for the recent development of new ideas for the simulation of electronic systems. Among these applications are approaches to calculate response properties, modeling of photoemission, optimal control of quantum systems, simulation of plasmonic systems, and the exact solution of the Schrödinger equation for low-dimensionality systems.
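
    The appeal of real-space grids is easy to demonstrate: discretizing the kinetic operator with finite differences turns the time-independent Schrödinger equation into an ordinary matrix eigenvalue problem. Below is a minimal 1D harmonic-oscillator sketch in atomic units; the grid size and extent are arbitrary choices, and this is not Octopus code.

        import numpy as np

        n, L = 400, 20.0
        x = np.linspace(-L / 2, L / 2, n)
        h = x[1] - x[0]

        # Kinetic term: second-order finite-difference Laplacian on the grid.
        lap = (np.diag(np.full(n - 1, 1.0), -1)
               - 2.0 * np.eye(n)
               + np.diag(np.full(n - 1, 1.0), 1)) / h**2
        H = -0.5 * lap + np.diag(0.5 * x**2)   # H = -1/2 d2/dx2 + x^2/2

        energies = np.linalg.eigvalsh(H)
        print(energies[:3])                    # ~0.5, 1.5, 2.5 (exact: n + 1/2)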

  15. Live Ultra-High Definition from the International Space Station

    NASA Technical Reports Server (NTRS)

    Grubbs, Rodney; George, Sandy

    2017-01-01

    The first ever live downlink of Ultra-High Definition (UHD) video from the International Space Station (ISS) was the highlight of a 'Super Session' at the National Association of Broadcasters (NAB) in April 2017. The Ultra-High Definition video downlink from the ISS all the way to the Las Vegas Convention Center required considerable planning, pushed the limits of conventional video distribution from a spacecraft, and was the first use of High Efficiency Video Coding (HEVC) from a spacecraft. The live event at NAB will serve as a pathfinder for more routine downlinks of UHD as well as use of HEVC for conventional HD downlinks to save bandwidth. HEVC may also enable live Virtual Reality video downlinks from the ISS. This paper will describe the overall work flow and routing of the UHD video, how audio was synchronized even though the video and audio were received many seconds apart from each other, and how the demonstration paves the way for not only more efficient video distribution from the ISS, but also serves as a pathfinder for more complex video distribution from deep space. The paper will also describe how a 'live' event was staged when the UHD coming from the ISS had a latency of 10+ seconds. Finally, the paper will discuss how NASA is leveraging commercial technologies for use on-orbit vs. creating technology as was required during the Apollo Moon Program and early space age.

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jin, Ke; Zhang, Yanwen; Zhu, Zihua

    Accurate information on electronic stopping power is fundamental for broad advances in the electronics industry, space exploration, national security, and sustainable energy technologies. The Stopping and Range of Ions in Matter (SRIM) code has been widely applied to predict stopping powers and ion distributions for decades. Recent experimental results have, however, shown considerable errors in the SRIM predictions for stopping of heavy ions in compounds containing light elements, indicating an urgent need to improve current stopping power models. The electronic stopping powers of 35Cl, 80Br, 127I, and 197Au ions are experimentally determined in two important functional materials, SiC and SiO2, from tens to hundreds of keV/u, based on a single-ion technique. By combining with the reciprocity theory, new electronic stopping powers are suggested in a region from 0 to 15 MeV, where large deviations from SRIM predictions are observed. For independent experimental validation of the electronic stopping powers we determined, Rutherford backscattering spectrometry (RBS) and secondary ion mass spectrometry (SIMS) are utilized to measure the depth profiles of implanted Au ions in SiC with energies from 700 keV to 15 MeV. The measured ion distributions from both RBS and SIMS are considerably deeper (up to ~30%) than the predictions from the commercial SRIM code. In comparison, the new electronic stopping power values are utilized in a modified TRIM-85 (the original version of the SRIM) code, M-TRIM, to predict ion distributions, and the results are in good agreement with the experimentally measured ion distributions.

  17. Is QR code an optimal data container in optical encryption systems from an error-correction coding perspective?

    PubMed

    Jiao, Shuming; Jin, Zhi; Zhou, Changyuan; Zou, Wenbin; Li, Xia

    2018-01-01

    Quick response (QR) code has been employed as a data carrier for optical cryptosystems in many recent research works, and the error-correction coding mechanism allows the decrypted result to be noise free. However, in this paper, we point out for the first time that the Reed-Solomon coding algorithm in QR code is not a very suitable option for the nonlocally distributed speckle noise in optical cryptosystems from an information coding perspective. The average channel capacity is proposed to measure the data storage capacity and noise-resistant capability of different encoding schemes. We design an alternative 2D barcode scheme based on Bose-Chaudhuri-Hocquenghem (BCH) coding, which demonstrates substantially better average channel capacity than QR code in numerical simulated optical cryptosystems.
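
    The average-channel-capacity comparison can be illustrated with the simplest textbook case. The sketch below computes the capacity of a binary symmetric channel; this is a generic illustration, not the paper's exact metric for nonlocally distributed speckle noise.

        import math

        def bsc_capacity(p):
            """Shannon capacity in bits/cell of a BSC with crossover probability p."""
            if p in (0.0, 1.0):
                return 1.0
            h = -p * math.log2(p) - (1 - p) * math.log2(1 - p)   # binary entropy
            return 1.0 - h

        for p in (0.01, 0.05, 0.1, 0.2):
            print(f"p={p:.2f}  C={bsc_capacity(p):.3f} bits/cell")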

  18. Secret information reconciliation based on punctured low-density parity-check codes for continuous-variable quantum key distribution

    NASA Astrophysics Data System (ADS)

    Jiang, Xue-Qin; Huang, Peng; Huang, Duan; Lin, Dakai; Zeng, Guihua

    2017-02-01

    Achieving information theoretic security with practical complexity is of great interest to continuous-variable quantum key distribution in the postprocessing procedure. In this paper, we propose a reconciliation scheme based on punctured low-density parity-check (LDPC) codes. Compared to the well-known multidimensional reconciliation scheme, the present scheme has lower time complexity. Especially when the chosen punctured LDPC code achieves the Shannon capacity, the proposed reconciliation scheme can remove the information that has been leaked to an eavesdropper in the quantum transmission phase. Therefore, there is no information leaked to the eavesdropper after the reconciliation stage. This indicates that the privacy amplification algorithm of the postprocessing procedure is no longer needed after the reconciliation process. These features lead to a higher secret key rate, optimal performance, and availability for the involved quantum key distribution scheme.

  19. Quantifying Low Energy Proton Damage in Multijunction Solar Cells

    NASA Technical Reports Server (NTRS)

    Messenger, Scott R.; Burke, Edward A.; Walters, Robert J.; Warner, Jeffrey H.; Summers, Geoffrey P.; Lorentzen, Justin R.; Morton, Thomas L.; Taylor, Steven J.

    2007-01-01

    An analysis of the effects of low energy proton irradiation on the electrical performance of triple junction (3J) InGaP2/GaAs/Ge solar cells is presented. The Monte Carlo ion transport code (SRIM) is used to simulate the damage profile induced in a 3J solar cell under the conditions of typical ground testing and that of the space environment. The results are used to present a quantitative analysis of the defect, and hence damage, distribution induced in the cell active region by the different radiation conditions. The modelling results show that, in the space environment, the solar cell will experience a uniform damage distribution through the active region of the cell. Through an application of the displacement damage dose analysis methodology, the implications of this result on mission performance predictions are investigated.

  20. Supporting shared data structures on distributed memory architectures

    NASA Technical Reports Server (NTRS)

    Koelbel, Charles; Mehrotra, Piyush; Vanrosendale, John

    1990-01-01

    Programming nonshared memory systems is more difficult than programming shared memory systems, since there is no support for shared data structures. Current programming languages for distributed memory architectures force the user to decompose all data structures into separate pieces, with each piece owned by one of the processors in the machine, and with all communication explicitly specified by low-level message-passing primitives. A new programming environment is presented for distributed memory architectures, providing a global name space and allowing direct access to remote parts of data values. The analysis and program transformations required to implement this environment are described, and the efficiency of the resulting code on the NCUBE/7 and iPSC/2 hypercubes is described.

  1. ArtDeco: a beam-deconvolution code for absolute cosmic microwave background measurements

    NASA Astrophysics Data System (ADS)

    Keihänen, E.; Reinecke, M.

    2012-12-01

    We present a method for beam-deconvolving cosmic microwave background (CMB) anisotropy measurements. The code takes as input the time-ordered data along with the corresponding detector pointings and known beam shapes, and produces as output the harmonic a^T_lm, a^E_lm, and a^B_lm coefficients of the observed sky. From these one can derive temperature and Q and U polarisation maps. The method is applicable to absolute CMB measurements with wide sky coverage, and is independent of the scanning strategy. We tested the code with extensive simulations, mimicking the resolution and data volume of the Planck 30 GHz and 70 GHz channels, but with exaggerated beam asymmetry. We applied it to multipoles up to l = 1700 and examined the results in both pixel space and harmonic space. We also tested the method in the presence of white noise. The code is released under the terms of the GNU General Public License and can be obtained from http://sourceforge.net/projects/art-deco/

  2. A Detailed Comparison of Multidimensional Boltzmann Neutrino Transport Methods in Core-collapse Supernovae

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Richers, Sherwood; Nagakura, Hiroki; Ott, Christian D.

    The mechanism driving core-collapse supernovae is sensitive to the interplay between matter and neutrino radiation. However, neutrino radiation transport is very difficult to simulate, and several radiation transport methods of varying levels of approximation are available. We carefully compare for the first time in multiple spatial dimensions the discrete ordinates (DO) code of Nagakura, Yamada, and Sumiyoshi and the Monte Carlo (MC) code Sedonu, under the assumptions of a static fluid background, flat spacetime, elastic scattering, and full special relativity. We find remarkably good agreement in all spectral, angular, and fluid interaction quantities, lending confidence to both methods. The DO method excels in determining the heating and cooling rates in the optically thick region. The MC method predicts sharper angular features due to the effectively infinite angular resolution, but struggles to drive down noise in quantities where subtractive cancellation is prevalent, such as the net gain in the protoneutron star and off-diagonal components of the Eddington tensor. We also find that errors in the angular moments of the distribution functions induced by neglecting velocity dependence are subdominant to those from limited momentum-space resolution. We briefly compare directly computed second angular moments to those predicted by popular algebraic two-moment closures, and we find that the errors from the approximate closures are comparable to the difference between the DO and MC methods. Included in this work is an improved Sedonu code, which now implements a fully special relativistic, time-independent version of the grid-agnostic MC random walk approximation.

  6. Shared acoustic codes underlie emotional communication in music and speech-Evidence from deep transfer learning.

    PubMed

    Coutinho, Eduardo; Schuller, Björn

    2017-01-01

    Music and speech exhibit striking similarities in the communication of emotions in the acoustic domain, in such a way that the communication of specific emotions is achieved, at least to a certain extent, by means of shared acoustic patterns. From an Affective Sciences point of view, determining the degree of overlap between both domains is fundamental to understanding the shared mechanisms underlying this phenomenon. From a Machine Learning perspective, the overlap between acoustic codes for emotional expression in music and speech opens new possibilities for enlarging the amount of data available to develop music and speech emotion recognition systems. In this article, we investigate time-continuous predictions of emotion (Arousal and Valence) in music and speech, and Transfer Learning between these domains. We establish a comparative framework including intra-domain (i.e., models trained and tested on the same modality, either music or speech) and cross-domain experiments (i.e., models trained in one modality and tested on the other). In the cross-domain context, we evaluated two strategies: the direct transfer between domains, and the contribution of Transfer Learning techniques (feature-representation-transfer based on Denoising Auto Encoders) for reducing the gap in the feature space distributions. Our results demonstrate excellent cross-domain generalisation performance with and without feature representation transfer in both directions. In the case of music, cross-domain approaches outperformed intra-domain models for Valence estimation, whereas for speech intra-domain models achieved the best performance. This is the first demonstration of shared acoustic codes for emotional expression in music and speech in the time-continuous domain.

  8. Inversion Analysis of Postseismic Deformation in Poroelastic Material Using Finite Element Method

    NASA Astrophysics Data System (ADS)

    Kawamoto, S.; Ito, T.; Hirahara, K.

    2005-12-01

    Following a large earthquake, postseismic deformation in the focal source region has been observed by several geodetic measurements. To explain postseismic deformation, researchers have proposed physical mechanisms known as afterslip, viscoelastic relaxation, and poroelastic rebound. There are a number of studies of postseismic deformation, but few address poroelastic rebound. We therefore calculated the postseismic deformation caused by afterslip and poroelastic rebound using a modified version of the FEM code "CAMBIOT3D", originally developed by the Geotech. Lab., Gunma University, Japan (2003). The postseismic deformations caused by both afterslip and poroelastic rebound differ characteristically from those caused by afterslip alone, which suggests that the slip distributions on the fault estimated from geodetic measurements also change. For this reason, we developed an inversion method that accounts for both afterslip and poroelastic rebound using FEM, to estimate the difference in slip distributions on the fault quantitatively. The inversion analysis takes the following steps. First, we calculate the coseismic and postseismic response functions on each fault segment induced by a unit slip, where the postseismic response functions represent the poroelastic rebound. Next, we form the observation equations at each time step using the response functions and estimate the spatiotemporal distribution of slip on the fault. In solving this inverse problem, we assume the slip distributions on the fault are smooth in space and time except for rapid (coseismic) change. Because hyperparameters that control the smoothness of the spatial and temporal distributions of slip are needed, we determine the best hyperparameters using ABIC. In this presentation, we introduce an example of analysis results using this method.

  9. DATASPACE - A PROGRAM FOR THE LOGARITHMIC INTERPOLATION OF TEST DATA

    NASA Technical Reports Server (NTRS)

    Ledbetter, F. E.

    1994-01-01

    Scientists and engineers work with the reduction, analysis, and manipulation of data. In many instances, the recorded data must meet certain requirements before standard numerical techniques may be used to interpret it. For example, the analysis of a linear viscoelastic material requires knowledge of one of two time-dependent properties, the stress relaxation modulus E(t) or the creep compliance D(t), one of which may be derived from the other by a numerical method if the recorded data points are evenly spaced or increasingly spaced with respect to the time coordinate. The problem is that most laboratory data are variably spaced, making the use of numerical techniques difficult. To ease this difficulty in the case of stress relaxation data analysis, NASA scientists developed DATASPACE (A Program for the Logarithmic Interpolation of Test Data), to establish a logarithmically increasing time interval in the relaxation data. The program is generally applicable to any situation in which a data set needs increasingly spaced abscissa values. DATASPACE first takes the logarithm of the abscissa values, then uses a cubic spline interpolation routine (which minimizes interpolation error) to create an evenly spaced array from the log values. This array is returned from the log abscissa domain to the abscissa domain and written to an output file for further manipulation. As a result of the interpolation in the log abscissa domain, the data are increasingly spaced. In the case of stress relaxation data, the array is closely spaced at short times and widely spaced at long times, thus avoiding the distortion inherent in evenly spaced time coordinates. The interpolation routine gives results which compare favorably with the recorded data. The experimental data curve is retained and the interpolated points reflect the desired spacing. DATASPACE is written in FORTRAN 77 for IBM PC compatibles with a math co-processor running MS-DOS and Apple Macintosh computers running MacOS. With minor modifications the source code is portable to any platform that supports an ANSI FORTRAN 77 compiler. Microsoft FORTRAN v2.1 is required for the Macintosh version. An executable is included with the PC version. DATASPACE is available on a 5.25 inch 360K MS-DOS format diskette (standard distribution) or on a 3.5 inch 800K Macintosh format diskette. This program was developed in 1991. IBM PC is a trademark of International Business Machines Corporation. MS-DOS is a registered trademark of Microsoft Corporation. Macintosh and MacOS are trademarks of Apple Computer, Inc.
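
    The interpolation strategy is simple to reproduce; a sketch in Python with SciPy (not the original FORTRAN 77 implementation):

        import numpy as np
        from scipy.interpolate import CubicSpline

        def log_space_resample(t, y, n_points):
            # Interpolate in the log-abscissa domain, then map back, so the
            # returned abscissas are closely spaced at short times and widely
            # spaced at long times (a sketch, not the original FORTRAN).
            log_t = np.log10(t)
            spline = CubicSpline(log_t, y)
            log_t_even = np.linspace(log_t[0], log_t[-1], n_points)
            return 10.0 ** log_t_even, spline(log_t_even)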

  10. Space-Time Dependent Transport, Activation, and Dose Rates for Radioactivated Fluids.

    NASA Astrophysics Data System (ADS)

    Gavazza, Sergio

    Two methods are developed to calculate the space- and time-dependent mass transport of radionuclides, their production and decay, and the associated dose rates generated by radioactivated fluids flowing through pipes. The work couples space- and time-dependent phenomena that are treated as only space- or time-dependent in the open literature. The transport and activation methodology (TAM) is used to numerically calculate the space- and time-dependent transport and activation of radionuclides in fluids flowing through pipes exposed to radiation fields, and the volumetric radioactive sources created by radionuclide motion. The computer program Radionuclide Activation and Transport in Pipe (RNATPA1) performs the numerical calculations required in TAM. The gamma ray dose methodology (GAM) is used to numerically calculate space- and time-dependent gamma ray dose equivalent rates from the volumetric radioactive sources determined by TAM. The computer program Gamma Ray Dose Equivalent Rate (GRDOSER) performs the numerical calculations required in GAM. The scope of conditions considered by TAM and GAM herein includes (a) laminar flow in straight pipe, (b) recirculating flow schemes, (c) time-independent fluid velocity distributions, (d) space-dependent monoenergetic neutron flux distributions, (e) space- and time-dependent activation of a single parent nuclide and transport and decay of a single daughter radionuclide, and (f) assessment of space- and time-dependent gamma ray dose rates, outside the pipe, generated by the space- and time-dependent source term distributions inside of it. The methodologies, however, can be easily extended to include all situations of interest for the phenomena addressed in this dissertation. Results obtained by the described calculational procedures are compared with analytical expressions. The physics of the problems addressed by the new technique, and its increased accuracy relative to methods that are not space- and time-dependent, are presented. The value of the methods is also discussed. It has been demonstrated that TAM and GAM can be used to enhance the understanding of the space- and time-dependent mass transport of radionuclides, their production and decay, and the associated dose rates related to radioactivated fluids flowing through pipes.
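
    As a toy version of the coupled transport-activation problem, the sketch below solves dN/dt + v dN/dx = R(x) - lambda*N with first-order upwinding for plug flow in a pipe; TAM itself treats laminar velocity profiles, recirculating schemes, and the parent/daughter chain, none of which are modeled here.

        import numpy as np

        def transport_activate(v, R, lam, length, nx, t_end):
            # First-order upwind solution of dN/dt + v dN/dx = R(x) - lam*N.
            dx = length / nx
            dt = 0.5 * dx / v                           # CFL-limited time step
            x = (np.arange(nx) + 0.5) * dx
            N = np.zeros(nx)
            for _ in range(int(t_end / dt)):
                adv = -v * np.diff(N, prepend=0.0) / dx # clean fluid enters at x=0
                N += dt * (adv + R(x) - lam * N)
            return x, N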

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, K. S.; Nakae, L. F.; Prasad, M. K.

    Here, we solve a simple theoretical model of time-evolving fission chains due to Feynman that generalizes and asymptotically approaches the point model theory. The point model theory has been used to analyze thermal neutron counting data. This extension of the theory underlies the analysis of fast counting data for both neutrons and gamma rays from metal systems. Fast neutron and gamma-ray counting is now possible using liquid scintillator arrays with nanosecond time resolution. For individual fission chains, the differential equations describing three correlated probability distributions are solved: the time-dependent internal neutron population, the accumulation of fissions in time, and the accumulation of leaked neutrons in time. Explicit analytic formulas are given for correlated moments of the time-evolving chain populations. The equations for random time gate fast neutron and gamma-ray counting distributions, due to randomly initiated chains, are presented. Correlated moment equations are given for both random time gate and triggered time gate counting. Explicit formulas for all correlated moments are given up to triple order, for all combinations of correlated fast neutrons and gamma rays. The nonlinear differential equations for the probabilities of time-dependent fission chain populations have a remarkably simple Monte Carlo realization. A Monte Carlo code was developed for this theory and is shown to statistically realize the solutions to the fission chain theory probability distributions. Combined with random initiation of chains and detection of external quanta, the Monte Carlo code generates time-tagged data for neutron and gamma-ray counting and, from these data, the counting distributions.
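
    The "remarkably simple Monte Carlo realization" can be sketched as a branching process; the fission probability, multiplicity distribution, and generation time below are illustrative placeholders, not values from the paper.

        import numpy as np

        rng = np.random.default_rng(1)

        def fission_chain(p_fission, nu_dist, t_gen, leak_times):
            # One Feynman chain: each neutron either leaks or induces a
            # fission emitting nu new neutrons after an exponentially
            # distributed generation time.
            active = [0.0]                          # birth times of live neutrons
            while active:
                t = active.pop() + rng.exponential(t_gen)
                if rng.random() < p_fission:
                    nu = rng.choice(len(nu_dist), p=nu_dist)
                    active.extend([t] * nu)         # chain branches
                else:
                    leak_times.append(t)            # leaked neutron, may be counted

        leaks = []
        for _ in range(1000):                       # randomly initiated chains
            fission_chain(0.3, [0.2, 0.3, 0.3, 0.2], 1e-8, leaks)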

  12. Automated Detection and Analysis of Interplanetary Shocks Running Real-Time on the Web

    NASA Astrophysics Data System (ADS)

    Vorotnikov, V.; Smith, C. W.; Hu, Q.; Szabo, A.; Skoug, R. M.; Cohen, C. M.; Davis, A. J.

    2008-05-01

    The ACE real-time data stream provides web-based nowcasting capabilities for solar wind conditions upstream of Earth. We have built a fully automated code that finds and analyzes interplanetary shocks as they occur and posts their solutions on the Web for possible real-time application to space weather nowcasting. Shock analysis algorithms based on the Rankine-Hugoniot jump conditions exist and are in widespread use today for the interactive analysis of interplanetary shocks, yielding parameters such as shock speed, propagation direction, and shock strength in the form of compression ratios. At a previous meeting we reported on efforts to develop a fully automated code that used ACE Level-2 (science quality) data to prove the applicability and correctness of the code and the associated shock-finder. We have since adapted the code to run on the ACE RTSW data provided by NOAA. These data lack the full three-dimensional velocity vector for the solar wind and contain only a single-component wind speed. We show that, by assuming the wind velocity to be radial, strong-shock solutions remain essentially unchanged and the analysis performs as well as it would if 3-D velocity components were available. This is due, at least in part, to the fact that strong shocks tend to have nearly radial shock normals, and it is the strong shocks that are most effective in space weather applications; they are the only shocks that concern us here. The code is now running on the Web and the results are available to all.
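
    As one example of the jump conditions involved, mass conservation alone fixes the shock speed from the upstream and downstream states; the full solver fits all Rankine-Hugoniot relations simultaneously. A sketch:

        def shock_speed_mass_flux(rho1, u1, rho2, u2):
            # Shock speed V from mass conservation rho1*(u1 - V) = rho2*(u2 - V),
            # one of the Rankine-Hugoniot relations the full solver fits.
            # With only a single-component (radial) wind speed, u1 and u2 are
            # radial speeds and the shock normal is assumed radial.
            return (rho2 * u2 - rho1 * u1) / (rho2 - rho1)

        def compression_ratio(rho1, rho2):
            # Shock strength as reported by the analysis.
            return rho2 / rho1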

  13. Higher-order accurate space-time schemes for computational astrophysics—Part I: finite volume methods

    NASA Astrophysics Data System (ADS)

    Balsara, Dinshaw S.

    2017-12-01

    As computational astrophysics comes under pressure to become a precision science, there is an increasing need to move to high accuracy schemes for computational astrophysics. The algorithmic needs of computational astrophysics are indeed very special. The methods need to be robust and preserve the positivity of density and pressure. Relativistic flows should remain sub-luminal. These requirements place additional pressures on a computational astrophysics code, which are usually not felt by a traditional fluid dynamics code. Hence the need for a specialized review. The focus here is on weighted essentially non-oscillatory (WENO) schemes, discontinuous Galerkin (DG) schemes and PNPM schemes. WENO schemes are higher order extensions of traditional second order finite volume schemes. At third order, they are most similar to piecewise parabolic method schemes, which are also included. DG schemes evolve all the moments of the solution, with the result that they are more accurate than WENO schemes. PNPM schemes occupy a compromise position between WENO and DG schemes. They evolve an Nth order spatial polynomial, while reconstructing higher order terms up to Mth order. As a result, the timestep can be larger. Time-dependent astrophysical codes need to be accurate in space and time with the result that the spatial and temporal accuracies must be matched. This is realized with the help of strong stability preserving Runge-Kutta schemes and ADER (Arbitrary DERivative in space and time) schemes, both of which are also described. The emphasis of this review is on computer-implementable ideas, not necessarily on the underlying theory.
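
    To make the WENO idea concrete, here is a sketch of the simplest (third-order) reconstruction at a cell interface; production astrophysics codes use fifth order and higher, as the review describes.

        import numpy as np

        def weno3_interface(u, i, eps=1e-6):
            # Third-order WENO reconstruction of u at interface i+1/2
            # from cell averages.
            p0 = -0.5 * u[i - 1] + 1.5 * u[i]        # left-biased stencil
            p1 = 0.5 * u[i] + 0.5 * u[i + 1]         # centered stencil
            d0, d1 = 1.0 / 3.0, 2.0 / 3.0            # linear (optimal) weights
            # Smoothness indicators suppress stencils crossing a discontinuity.
            b0 = (u[i] - u[i - 1]) ** 2
            b1 = (u[i + 1] - u[i]) ** 2
            a0, a1 = d0 / (eps + b0) ** 2, d1 / (eps + b1) ** 2
            w0, w1 = a0 / (a0 + a1), a1 / (a0 + a1)  # nonlinear weights
            return w0 * p0 + w1 * p1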

  14. Phonological coding during reading.

    PubMed

    Leinenger, Mallorie

    2014-11-01

    The exact role that phonological coding (the recoding of written, orthographic information into a sound based code) plays during silent reading has been extensively studied for more than a century. Despite the large body of research surrounding the topic, varying theories as to the time course and function of this recoding still exist. The present review synthesizes this body of research, addressing the topics of time course and function in tandem. The varying theories surrounding the function of phonological coding (e.g., that phonological codes aid lexical access, that phonological codes aid comprehension and bolster short-term memory, or that phonological codes are largely epiphenomenal in skilled readers) are first outlined, and the time courses that each maps onto (e.g., that phonological codes come online early [prelexical] or that phonological codes come online late [postlexical]) are discussed. Next the research relevant to each of these proposed functions is reviewed, discussing the varying methodologies that have been used to investigate phonological coding (e.g., response time methods, reading while eye-tracking or recording EEG and MEG, concurrent articulation) and highlighting the advantages and limitations of each with respect to the study of phonological coding. In response to the view that phonological coding is largely epiphenomenal in skilled readers, research on the use of phonological codes in prelingually, profoundly deaf readers is reviewed. Finally, implications for current models of word identification (activation-verification model, Van Orden, 1987; dual-route model, e.g., M. Coltheart, Rastle, Perry, Langdon, & Ziegler, 2001; parallel distributed processing model, Seidenberg & McClelland, 1989) are discussed. (PsycINFO Database Record (c) 2014 APA, all rights reserved).

  15. Phonological coding during reading

    PubMed Central

    Leinenger, Mallorie

    2014-01-01

    The exact role that phonological coding (the recoding of written, orthographic information into a sound based code) plays during silent reading has been extensively studied for more than a century. Despite the large body of research surrounding the topic, varying theories as to the time course and function of this recoding still exist. The present review synthesizes this body of research, addressing the topics of time course and function in tandem. The varying theories surrounding the function of phonological coding (e.g., that phonological codes aid lexical access, that phonological codes aid comprehension and bolster short-term memory, or that phonological codes are largely epiphenomenal in skilled readers) are first outlined, and the time courses that each maps onto (e.g., that phonological codes come online early (pre-lexical) or that phonological codes come online late (post-lexical)) are discussed. Next the research relevant to each of these proposed functions is reviewed, discussing the varying methodologies that have been used to investigate phonological coding (e.g., response time methods, reading while eye-tracking or recording EEG and MEG, concurrent articulation) and highlighting the advantages and limitations of each with respect to the study of phonological coding. In response to the view that phonological coding is largely epiphenomenal in skilled readers, research on the use of phonological codes in prelingually, profoundly deaf readers is reviewed. Finally, implications for current models of word identification (activation-verification model (Van Orden, 1987), dual-route model (e.g., Coltheart, Rastle, Perry, Langdon, & Ziegler, 2001), parallel distributed processing model (Seidenberg & McClelland, 1989)) are discussed. PMID:25150679

  16. Microgravity Materials Research and Code U ISRU

    NASA Technical Reports Server (NTRS)

    Curreri, Peter A.; Sibille, Laurent

    2004-01-01

    The NASA microgravity research program, simply put, has the goal of doing science (which is essentially finding out something previously unknown about nature) utilizing the unique long-term microgravity environment of Earth orbit. Since 1997, Code U has additionally funded basic scientific research that enables safe and economical capabilities for humans to live, work, and do science beyond Earth orbit. This research has been integrated with the larger NASA missions (Code M and S). These new exploration research focus areas include Radiation Shielding Materials, Macromolecular Research on Bone and Muscle Loss, In Space Fabrication and Repair, and Low Gravity ISRU. The latter two focus on enabling materials processing in space for use in space. The goal of this program is to provide scientific and technical research resulting in proof-of-concept experiments feeding into the larger NASA program, to provide humans in space with an energy-rich, resource-rich, self-sustaining infrastructure at the earliest possible time and with minimum risk, launch mass, and program cost. President Bush's Exploration Vision (1/14/04) lends new urgency to the development of ISRU concepts within the exploration architecture. This will require an accelerated One NASA approach utilizing NASA's partners in academia and industry.

  17. The Magnetic Reconnection Code: an AMR-based fully implicit simulation suite

    NASA Astrophysics Data System (ADS)

    Germaschewski, K.; Bhattacharjee, A.; Ng, C.-S.

    2006-12-01

    Extended MHD models, which incorporate two-fluid effects, are promising candidates to enhance understanding of collisionless reconnection phenomena in laboratory, space, and astrophysical plasma physics. In this paper, we introduce two simulation codes in the Magnetic Reconnection Code suite which integrate reduced and full extended MHD models. Numerical integration of these models comes with two challenges. First, small-scale spatial structures, e.g., thin current sheets, develop and must be well resolved by the code; adaptive mesh refinement (AMR) is employed to provide high resolution where needed while maintaining good performance. Second, the two-fluid effects in extended MHD give rise to dispersive waves, which lead to a very stringent CFL condition for explicit codes, while reconnection happens on a much slower time scale. We use a fully implicit Crank-Nicolson time-stepping algorithm. Since no efficient preconditioners are available for our system of equations, we instead use a direct solver to handle the inner linear solves. This requires us to actually compute the Jacobian matrix, which is handled by a code generator that calculates the derivative symbolically and then outputs code to calculate it.
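
    The code-generation step can be illustrated with SymPy on a toy two-equation residual (not the extended MHD system): differentiate symbolically, then emit a numerical kernel or C source.

        import sympy as sp

        u, v, dt = sp.symbols('u v dt')
        residual = sp.Matrix([u - dt * u * v,      # toy implicit residual,
                              v + dt * (u - v)])   # not the extended MHD system
        J = residual.jacobian([u, v])              # exact symbolic Jacobian
        jac_func = sp.lambdify((u, v, dt), J)      # emitted numerical kernel
        print(sp.ccode(J[0, 0]))                   # or emit C source directly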

  18. Pseudorandom Noise Code-Based Technique for Cloud and Aerosol Discrimination Applications

    NASA Technical Reports Server (NTRS)

    Campbell, Joel F.; Prasad, Narasimha S.; Flood, Michael A.; Harrison, Fenton Wallace

    2011-01-01

    NASA Langley Research Center is working on a continuous wave (CW) laser based remote sensing scheme for the detection of CO2 and O2 from space-based platforms suitable for the Active Sensing of CO2 Emissions over Nights, Days, and Seasons (ASCENDS) mission. ASCENDS is a future space-based mission to determine the global distribution of sources and sinks of atmospheric carbon dioxide (CO2). A unique, multi-frequency, intensity-modulated CW (IMCW) laser absorption spectrometer (LAS) operating at 1.57 micron for CO2 sensing has been developed. Effective aerosol and cloud discrimination techniques are being investigated in order to determine concentration values with accuracies better than 0.3%. In this paper, we discuss the demonstration of a PN code based technique for cloud and aerosol discrimination applications. The possibility of using maximum-length (ML) sequences for range and absorption measurements is investigated, and a simple model for accomplishing this objective is formulated. Proof-of-concept experiments, carried out using a SONAR-based LIDAR simulator built from simple audio hardware, provided promising results for extension into optical wavelengths. Keywords: ASCENDS, CO2 sensing, O2 sensing, PN codes, CW lidar
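
    A sketch of the ML-sequence ranging idea: generate an m-sequence with a linear feedback shift register and recover the delay of an attenuated echo by circular correlation. The 7-bit register and the delay of 37 chips are illustrative choices, not parameters from the experiment.

        import numpy as np

        def mls(nbits=7, taps=(7, 6), seed=1):
            # Maximum-length sequence from a Fibonacci LFSR; x^7 + x^6 + 1
            # is primitive, giving period 2^7 - 1 = 127.
            state, out = seed, []
            for _ in range(2**nbits - 1):
                out.append(state & 1)
                fb = 0
                for t in taps:
                    fb ^= (state >> (t - 1)) & 1
                state = (state >> 1) | (fb << (nbits - 1))
            return np.array(out) * 2 - 1           # map {0,1} -> {-1,+1}

        code = mls()
        echo = np.roll(code, 37) * 0.1             # attenuated return, 37-chip delay
        corr = np.array([np.dot(echo, np.roll(code, k)) for k in range(len(code))])
        print(corr.argmax())                       # recovered range bin: 37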

  19. Proton Lateral Broadening Distribution Comparisons Between GRNTRN, MCNPX, and Laboratory Beam Measurements

    NASA Technical Reports Server (NTRS)

    Mertens, Christopher J.; Moyers, Michael F.; Walker, Steven A.; Tweed, John

    2010-01-01

    Recent developments in NASA's deterministic High charge (Z) and Energy TRaNsport (HZETRN) code have included lateral broadening of primary ion beams due to small-angle multiple Coulomb scattering, and coupling of the ion-nuclear scattering interactions with energy loss and straggling. This new version of HZETRN is based on Green function methods, called GRNTRN, and is suitable for modeling transport with both space environment and laboratory boundary conditions. Multiple scattering processes are a necessary extension to GRNTRN in order to accurately model ion beam experiments, to simulate the physical and biological-effective radiation dose, and to develop new methods and strategies for light ion radiation therapy. In this paper we compare GRNTRN simulations of proton lateral broadening distributions with beam measurements taken at the Loma Linda University Proton Therapy Facility. The simulated and measured lateral broadening distributions are compared for a 250 MeV proton beam on aluminum, polyethylene, polystyrene, bone substitute, iron, and lead target materials. The GRNTRN results are also compared to simulations from the Monte Carlo MCNPX code for the same projectile-target combinations described above.
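
    For orientation, the small-angle multiple Coulomb scattering that drives the lateral broadening is often estimated with the Highland formula; this is a textbook approximation, not the Green-function treatment used in GRNTRN.

        import numpy as np

        def highland_theta0(pv_mev, z, x_over_X0):
            # Highland estimate of the RMS multiple-scattering angle (rad) for a
            # particle of charge z with momentum*velocity pv (MeV) after
            # traversing x/X0 radiation lengths; the lateral spread downstream
            # is roughly theta0 times the drift distance.
            return (14.1 / pv_mev) * z * np.sqrt(x_over_X0) * \
                   (1.0 + np.log10(x_over_X0) / 9.0)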

  20. Analysis of thrips distribution: application of spatial statistics and Kriging

    Treesearch

    John Aleong; Bruce L. Parker; Margaret Skinner; Diantha Howard

    1991-01-01

    Kriging is a statistical technique that provides predictions for spatially and temporally correlated data. Observations of thrips distribution and density in Vermont soils are made in both space and time. Traditional statistical analysis of such data assumes that the counts taken over space and time are independent, which is not necessarily true. Therefore, to analyze...

  1. Testability, Test Automation and Test Driven Development for the Trick Simulation Toolkit

    NASA Technical Reports Server (NTRS)

    Penn, John

    2014-01-01

    This paper describes the adoption of a Test Driven Development approach and a Continuous Integration System in the development of the Trick Simulation Toolkit, a generic simulation development environment for creating high fidelity training and engineering simulations at the NASA Johnson Space Center and many other NASA facilities. It describes the approach, and the significant benefits seen, such as fast, thorough and clear test feedback every time code is checked into the code repository. It also describes an approach that encourages development of code that is testable and adaptable.

  2. CFD Simulation on the J-2X Engine Exhaust in the Center-Body Diffuser and Spray Chamber at the B-2 Facility

    NASA Technical Reports Server (NTRS)

    Wang, Xiao-Yen; Wey, Thomas; Buehrle, Robert

    2009-01-01

    A computational fluid dynamics (CFD) code is used to simulate the J-2X engine exhaust in the center-body diffuser and spray chamber at the Spacecraft Propulsion Facility (B-2). The code is the space-time conservation element and solution element (CESE) Euler solver, which is very robust at shock capturing. The CESE results are compared with independent analysis results obtained using the National Combustion Code (NCC) and show excellent agreement.

  3. Verbal-spatial and visuospatial coding of power-space interactions.

    PubMed

    Dai, Qiang; Zhu, Lei

    2018-05-10

    A power-space interaction, which denotes the phenomenon that people respond faster to powerful words when they are placed higher in a visual field and faster to powerless words when they are placed lower in a visual field, has been repeatedly found. The dominant explanation of this power-space interaction is that it results from a tight correspondence between the representation of power and visual space (i.e., a visuospatial coding account). In the present study, we demonstrated that the interaction between power and space could also be based on verbal-spatial coding in the absence of any vertical spatial information. Additionally, the verbal-spatial coding was dominant in driving the power-space interaction when verbal space was contrasted with visual space. Copyright © 2018 Elsevier Inc. All rights reserved.

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Churchill, R. Michael

    Apache Spark is explored as a tool for analyzing large data sets from the magnetic fusion simulation code XGC1. Implementation details of Apache Spark on the NERSC Edison supercomputer are discussed, including binary file reading and parameter setup. Here, an unsupervised machine learning algorithm, k-means clustering, is applied to XGC1 particle distribution function data, showing that highly turbulent spatial regions do not have common coherent structures, but rather broad, ring-like structures in velocity space.
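
    A sketch of that workflow in PySpark; the file name and feature columns are illustrative, not the actual XGC1 output schema.

        from pyspark.sql import SparkSession
        from pyspark.ml.feature import VectorAssembler
        from pyspark.ml.clustering import KMeans

        # k-means on particle distribution-function samples with Spark.
        spark = SparkSession.builder.appName("xgc1-kmeans").getOrCreate()
        df = spark.read.parquet("xgc1_particles.parquet")   # hypothetical file
        features = VectorAssembler(inputCols=["vpar", "vperp", "weight"],
                                   outputCol="features").transform(df)
        model = KMeans(k=8, seed=42, featuresCol="features").fit(features)
        clustered = model.transform(features)               # adds 'prediction'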

  5. Software Design Description for the Tidal Open-boundary Prediction System (TOPS)

    DTIC Science & Technology

    2010-05-04

    Naval Research Laboratory, Stennis Space Center, MS 39529-5004. Report NRL/MR/7320--10-9209. Approved for public release; distribution is unlimited.

  6. Drinking Water Residence Time in Distribution Networks and Emergency Department Visits for Gastrointestinal Illness in Metro Atlanta, Georgia

    PubMed Central

    Moe, Christine L.; Klein, Mitchel; Flanders, W. Dana; Uber, Jim; Amirtharajah, Appiah; Singer, Philip; Tolbert, Paige E.

    2013-01-01

    We examined whether the average water residence time, the time it takes water to travel from the treatment plant to the user, for a zip code was related to the proportion of emergency department (ED) visits for gastrointestinal (GI) illness among residents of that zip code. Individual-level ED data were collected from all hospitals located in the five-county metro Atlanta area from 1993 to 2004. Two of the largest water utilities in the area, together serving 1.7 million people, were considered. People served by these utilities had almost three million total ED visits, 164,937 of them for GI illness. The relationship between water residence time and risk for GI illness was assessed using logistic regression, controlling for potential confounding factors, including patient age and markers of socioeconomic status (SES). We observed a modestly increased risk for GI illness for residents of zip codes with the longest water residence times compared to intermediate residence times (odds ratio (OR) for Utility 1 = 1.07, 95% confidence interval (CI) = 1.03, 1.10; OR for Utility 2 = 1.05, 95% CI = 1.02, 1.08). The results suggest that drinking water contamination in the distribution system may contribute to the burden of endemic GI illness. PMID:19240359

  7. Drinking water residence time in distribution networks and emergency department visits for gastrointestinal illness in Metro Atlanta, Georgia.

    PubMed

    Tinker, Sarah C; Moe, Christine L; Klein, Mitchel; Flanders, W Dana; Uber, Jim; Amirtharajah, Appiah; Singer, Philip; Tolbert, Paige E

    2009-06-01

    We examined whether the average water residence time, the time it takes water to travel from the treatment plant to the user, for a zip code was related to the proportion of emergency department (ED) visits for gastrointestinal (GI) illness among residents of that zip code. Individual-level ED data were collected from all hospitals located in the five-county metro Atlanta area from 1993 to 2004. Two of the largest water utilities in the area, together serving 1.7 million people, were considered. People served by these utilities had almost 3 million total ED visits, 164,937 of them for GI illness. The relationship between water residence time and risk for GI illness was assessed using logistic regression, controlling for potential confounding factors, including patient age and markers of socioeconomic status (SES). We observed a modestly increased risk for GI illness for residents of zip codes with the longest water residence times compared with intermediate residence times (odds ratio (OR) for Utility 1 = 1.07, 95% confidence interval (CI) = 1.03, 1.10; OR for Utility 2 = 1.05, 95% CI = 1.02, 1.08). The results suggest that drinking water contamination in the distribution system may contribute to the burden of endemic GI illness.

  8. Design the RS(255,239) encoder and interleaving in the space laser communication system

    NASA Astrophysics Data System (ADS)

    Lang, Yue; Tong, Shou-feng

    2013-08-01

    Space laser communication is being researched by more and more countries, and it merits this attention: laser links between satellites, and between satellites and the ground, offer higher transmission speed and better transmission quality. In a space laser communication system, however, reliability is degraded by many factors, including the atmosphere, detector noise, and optical platform jitter. Because the signal is attenuated over the long transmission distance, a higher signal intensity is required to achieve a low bit error rate (BER). Channel coding can enhance the anti-interference ability of the system: redundancy is added to the information codes so that a checkout polynomial can correct errors at the sink port, providing a low BER and coding gain. The Reed-Solomon (RS) code is one such channel code; it is a class of multi-ary BCH code that corrects both burst errors and random errors and is widely used in error-control schemes. A new FPGA-based design of the RS encoder and interleaver is proposed, aiming to satisfy the error-control needs of the space laser communication field. An improved finite Galois field multiplier for encoding, suitable for FPGA implementation, is presented; compared with the original design, the optimized multiplier requires more than 40% fewer XOR gates. A new FPGA-based interleaving structure is also given. By controlling the encoder's input and output data streams, asynchronous processing of the whole frame is accomplished, while multi-level pipelining achieves real-time data transfer. By controlling the read and write addresses of the block RAM, interleaving of arbitrary depth is implemented synchronously. Compared with the conventional method, this design effectively reduces the complexity of the channel encoder and the hardware requirements.
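
    At the heart of any RS(255,239) encoder is multiplication in GF(2^8); a bitwise sketch is below, using 0x11D as the field polynomial (a common choice for 8-bit RS codes; the paper does not state which polynomial it uses). The paper's contribution is optimizing such a multiplier at the gate level.

        def gf256_mul(a, b, poly=0x11D):
            # Multiply two GF(2^8) elements by shift-and-add with reduction
            # modulo the field polynomial (0x11D = x^8 + x^4 + x^3 + x^2 + 1).
            r = 0
            while b:
                if b & 1:
                    r ^= a
                a <<= 1
                if a & 0x100:        # degree-8 term appeared: reduce
                    a ^= poly
                b >>= 1
            return r

        assert gf256_mul(0x02, 0x80) == 0x1D   # x * x^7 = x^8, reduced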

  9. Strategy Guideline: Compact Air Distribution Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burdick, A.

    2013-06-01

    This Strategy Guideline discusses the benefits and challenges of using a compact air distribution system to handle the reduced loads and reduced air volume needed to condition the space within an energy efficient home. Traditional systems sized by 'rule of thumb' (i.e., 1 ton of cooling per 400 ft2 of floor space) that 'wash' the exterior walls with conditioned air from floor registers cannot provide appropriate air mixing and moisture removal in low-load homes. A compact air distribution system locates the HVAC equipment centrally with shorter ducts run to interior walls, and ceiling supply outlets throw the air toward the exterior walls along the ceiling plane; alternatively, high sidewall supply outlets throw the air toward the exterior walls. Potential drawbacks include resistance from installing contractors or code officials who are unfamiliar with compact air distribution systems, as well as a lack of availability of low-cost high sidewall or ceiling supply outlets to meet the low air volumes with good throw characteristics. The decision criteria for a compact air distribution system must be determined early in the whole-house design process, considering both supply and return air design. However, careful installation of a compact air distribution system can result in lower material costs from smaller equipment, shorter duct runs, and fewer outlets; increased installation efficiencies, including ease of fitting the system into conditioned space; lower loads on a better balanced HVAC system, and overall improved energy efficiency of the home.

  10. Research in Parallel Algorithms and Software for Computational Aerosciences

    NASA Technical Reports Server (NTRS)

    Domel, Neal D.

    1996-01-01

    Phase I is complete for the development of a Computational Fluid Dynamics parallel code with automatic grid generation and adaptation for the Euler analysis of flow over complex geometries. SPLITFLOW, an unstructured Cartesian grid code developed at Lockheed Martin Tactical Aircraft Systems, has been modified for a distributed memory/massively parallel computing environment. The parallel code is operational on an SGI network, Cray J90 and C90 vector machines, SGI Power Challenge, and Cray T3D and IBM SP2 massively parallel machines. Parallel Virtual Machine (PVM) is the message passing protocol for portability to various architectures. A domain decomposition technique was developed which enforces dynamic load balancing to improve solution speed and memory requirements. A host/node algorithm distributes the tasks. The solver parallelizes very well, and scales with the number of processors. Partially parallelized and non-parallelized tasks consume most of the wall clock time in a very fine grain environment. Timing comparisons on a Cray C90 demonstrate that Parallel SPLITFLOW runs 2.4 times faster on 8 processors than its non-parallel counterpart autotasked over 8 processors.

  11. Research in Parallel Algorithms and Software for Computational Aerosciences

    NASA Technical Reports Server (NTRS)

    Domel, Neal D.

    1996-01-01

    Phase 1 is complete for the development of a computational fluid dynamics (CFD) parallel code with automatic grid generation and adaptation for the Euler analysis of flow over complex geometries. SPLITFLOW, an unstructured Cartesian grid code developed at Lockheed Martin Tactical Aircraft Systems, has been modified for a distributed memory/massively parallel computing environment. The parallel code is operational on an SGI network, Cray J90 and C90 vector machines, SGI Power Challenge, and Cray T3D and IBM SP2 massively parallel machines. Parallel Virtual Machine (PVM) is the message passing protocol for portability to various architectures. A domain decomposition technique was developed which enforces dynamic load balancing to improve solution speed and memory requirements. A host/node algorithm distributes the tasks. The solver parallelizes very well, and scales with the number of processors. Partially parallelized and non-parallelized tasks consume most of the wall clock time in a very fine grain environment. Timing comparisons on a Cray C90 demonstrate that Parallel SPLITFLOW runs 2.4 times faster on 8 processors than its non-parallel counterpart autotasked over 8 processors.

  12. A comparison of KABCO and AIS injury severity metrics using CODES linked data.

    PubMed

    Burch, Cynthia; Cook, Lawrence; Dischinger, Patricia

    2014-01-01

    The research objective is to compare the consistency of distributions between crash-assigned (KABCO) and hospital-assigned (Abbreviated Injury Scale, AIS) injury severity scoring systems for 2 states. The hypothesis is that AIS scores will be more consistent between the 2 studied states (Maryland and Utah) than KABCO. The analysis involved Crash Outcome Data Evaluation System (CODES) data from 2 states, Maryland and Utah, for years 2006-2008. Crash report and hospital inpatient data were linked probabilistically, and International Classification of Diseases (CMS 2013) codes from hospital records were translated into AIS codes. KABCO scores from police crash reports were compared to the AIS scores within and between the 2 study states. Maryland appears to have the more severe crash report KABCO scoring for injured crash participants, with close to 50 percent of all injured persons coded as level B or worse, whereas Utah observes approximately 40 percent in this group. When analyzing AIS scores, some fluctuation was seen within states over time, but the distribution of MAIS is much more comparable between states. Maryland had approximately 85 percent of hospitalized injured cases coded as MAIS = 1, or minor. In Utah this percentage was close to 80 percent for all 3 years. This is quite different from the KABCO distributions, where Maryland had a smaller percentage of cases in the lowest injury severity category as compared to Utah. This analysis examined the distribution of 2 injury severity metrics that differ in both design and collection, and found that both classifications are consistent within each state from 2006 to 2008. However, the distributions of both KABCO and the Maximum Abbreviated Injury Scale (MAIS) vary between the states. MAIS was found to be more consistent between states than KABCO.

  13. Design of the Experimental Exposure Conditions to Simulate Ionizing Radiation Effects on Candidate Replacement Materials for the Hubble Space Telescope

    NASA Technical Reports Server (NTRS)

    Smith, L. Montgomery

    1998-01-01

    In this effort, experimental exposure times for monoenergetic electrons and protons were determined to simulate the space radiation environment effects on Teflon components of the Hubble Space Telescope. Although the energy range of the available laboratory particle accelerators was limited, optimal exposure times for 50 keV, 220 keV, 350 keV, and 500 keV electrons were calculated that produced a dose-versus-depth profile approximating the full-spectrum profile and were realizable with existing equipment. For the case of proton exposure, the limited energy range of the laboratory accelerator restricted simulation of the dose to a depth of 0.5 mil. Also, while optimal exposure times were found for 200 keV, 500 keV, and 700 keV protons that simulated the full-spectrum dose-versus-depth profile to this depth, they were of such short duration that the existing laboratory equipment could not be controlled to within the required accuracy. In addition to the obvious experimental issues, other areas exist in which the analytical work could be advanced. Improved computer codes for dose prediction, along with improved methodology for data input and output, would accelerate the calculational aspects and make them more accurate. This is particularly true in the case of proton fluxes, where a paucity of available predictive software appears to exist. The dated nature of many of the existing Monte Carlo particle/radiation transport codes raises the issue of whether existing codes are sufficient for this type of analysis. Other improvements that would yield greater fidelity of laboratory exposure effects to the space environment are the use of a larger number of monoenergetic particle fluxes and improved optimization algorithms to determine the weighting values.
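
    The exposure-time selection can be framed as a nonnegative least-squares fit of the monoenergetic dose profiles to the full-spectrum target; a sketch under that assumption (the original optimization procedure may differ):

        import numpy as np
        from scipy.optimize import nnls

        def exposure_times(depth_dose, target):
            # Weight monoenergetic dose-versus-depth profiles (columns of
            # depth_dose, one per beam energy) so their sum approximates the
            # full-spectrum target profile; nonnegativity keeps the weights
            # physical as exposure times.
            times, residual = nnls(depth_dose, target)
            return times, residual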

  14. Engineering High Assurance Distributed Cyber Physical Systems

    DTIC Science & Technology

    2015-01-15

    decisions: number of interacting agents and co-dependent decisions made in real-time without causing interference. To engineer a high assurance DART...environment specification, architecture definition, domain-specific languages, design patterns, code-generation, analysis, test-generation, and simulation...include synchronization between the models and source code, debugging at the model level, expression of the design intent, and quality of service

  15. An Experiment in Scientific Code Semantic Analysis

    NASA Technical Reports Server (NTRS)

    Stewart, Mark E. M.

    1998-01-01

    This paper concerns a procedure that analyzes aspects of the meaning, or semantics, of scientific and engineering code. This procedure involves taking a user's existing code, adding semantic declarations for some primitive variables, and parsing this annotated code using multiple, distributed expert parsers. These semantic parsers are designed to recognize formulae in different disciplines, including physical and mathematical formulae and geometrical position in a numerical scheme. The parsers will automatically recognize and document some static, semantic concepts and locate some program semantic errors. Results are shown for a subroutine test case and a collection of combustion code routines. This ability to locate some semantic errors and document semantic concepts in scientific and engineering code should reduce the time, risk, and effort of developing and using these codes.

  16. mGrid: A load-balanced distributed computing environment for the remote execution of the user-defined Matlab code

    PubMed Central

    Karpievitch, Yuliya V; Almeida, Jonas S

    2006-01-01

    Background Matlab, a powerful and productive language that allows for rapid prototyping, modeling and simulation, is widely used in computational biology. Modeling and simulation of large biological systems often require more computational resources than are available on a single computer. Existing distributed computing environments like the Distributed Computing Toolbox, MatlabMPI, Matlab*G and others allow for the remote (and possibly parallel) execution of Matlab commands with varying support for features like an easy-to-use application programming interface, load-balanced utilization of resources, extensibility over the wide area network, and minimal system administration skill requirements. However, all of these environments require some level of access to participating machines to manually distribute the user-defined libraries that the remote call may invoke. Results mGrid augments the usual process distribution seen in other similar distributed systems by adding facilities for user code distribution. mGrid's client-side interface is an easy-to-use native Matlab toolbox that transparently executes user-defined code on remote machines (i.e. the user is unaware that the code is executing somewhere else). Run-time variables are automatically packed and distributed with the user-defined code and automated load-balancing of remote resources enables smooth concurrent execution. mGrid is an open source environment. Apart from the programming language itself, all other components are also open source, freely available tools: light-weight PHP scripts and the Apache web server. Conclusion Transparent, load-balanced distribution of user-defined Matlab toolboxes and rapid prototyping of many simple parallel applications can now be done with a single easy-to-use Matlab command. Because mGrid utilizes only Matlab, light-weight PHP scripts and the Apache web server, installation and configuration are very simple. Moreover, the web-based infrastructure of mGrid allows for it to be easily extensible over the Internet. PMID:16539707

  17. mGrid: a load-balanced distributed computing environment for the remote execution of the user-defined Matlab code.

    PubMed

    Karpievitch, Yuliya V; Almeida, Jonas S

    2006-03-15

    Matlab, a powerful and productive language that allows for rapid prototyping, modeling and simulation, is widely used in computational biology. Modeling and simulation of large biological systems often require more computational resources than are available on a single computer. Existing distributed computing environments like the Distributed Computing Toolbox, MatlabMPI, Matlab*G and others allow for the remote (and possibly parallel) execution of Matlab commands with varying support for features like an easy-to-use application programming interface, load-balanced utilization of resources, extensibility over the wide area network, and minimal system administration skill requirements. However, all of these environments require some level of access to participating machines to manually distribute the user-defined libraries that the remote call may invoke. mGrid augments the usual process distribution seen in other similar distributed systems by adding facilities for user code distribution. mGrid's client-side interface is an easy-to-use native Matlab toolbox that transparently executes user-defined code on remote machines (i.e. the user is unaware that the code is executing somewhere else). Run-time variables are automatically packed and distributed with the user-defined code and automated load-balancing of remote resources enables smooth concurrent execution. mGrid is an open source environment. Apart from the programming language itself, all other components are also open source, freely available tools: light-weight PHP scripts and the Apache web server. Transparent, load-balanced distribution of user-defined Matlab toolboxes and rapid prototyping of many simple parallel applications can now be done with a single easy-to-use Matlab command. Because mGrid utilizes only Matlab, light-weight PHP scripts and the Apache web server, installation and configuration are very simple. Moreover, the web-based infrastructure of mGrid allows for it to be easily extensible over the Internet.

  18. Neoclassical Simulation of Tokamak Plasmas using Continuum Gyrokinetic Code TEMPEST

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, X Q

    We present gyrokinetic neoclassical simulations of tokamak plasmas with a self-consistent electric field for the first time using a fully nonlinear (full-f) continuum code TEMPEST in circular geometry. A set of gyrokinetic equations are discretized on a five-dimensional computational grid in phase space. The present implementation is a Method of Lines approach where the phase-space derivatives are discretized with finite differences and implicit backwards differencing formulas are used to advance the system in time. The fully nonlinear Boltzmann model is used for electrons. The neoclassical electric field is obtained by solving the gyrokinetic Poisson equation with self-consistent poloidal variation. With our 4D (psi, theta, epsilon, mu) version of the TEMPEST code we compute radial particle and heat flux, the Geodesic-Acoustic Mode (GAM), and the development of the neoclassical electric field, which we compare with neoclassical theory using a Lorentz collision model. The present work provides a numerical scheme and a new capability for self-consistently studying important aspects of neoclassical transport and rotation in toroidal magnetic fusion devices.
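
    The Method of Lines structure can be illustrated on a 1-D advection toy problem (not the gyrokinetic system): finite-difference the spatial derivative, then hand the resulting ODE system to an implicit BDF integrator.

        import numpy as np
        from scipy.integrate import solve_ivp

        nx, v, dx = 128, 1.0, 1.0 / 128

        def rhs(t, f):
            # Upwind finite difference of the spatial derivative (periodic).
            dfdx = (f - np.roll(f, 1)) / dx
            return -v * dfdx

        f0 = np.exp(-((np.arange(nx) * dx - 0.5) / 0.1) ** 2)
        # Implicit backwards differencing formulas advance the stiff system.
        sol = solve_ivp(rhs, (0.0, 0.25), f0, method="BDF")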

  19. Quantum computing with Majorana fermion codes

    NASA Astrophysics Data System (ADS)

    Litinski, Daniel; von Oppen, Felix

    2018-05-01

    We establish a unified framework for Majorana-based fault-tolerant quantum computation with Majorana surface codes and Majorana color codes. All logical Clifford gates are implemented with zero-time overhead. This is done by introducing a protocol for Pauli product measurements with tetrons and hexons which only requires local 4-Majorana parity measurements. An analogous protocol is used in the fault-tolerant setting, where tetrons and hexons are replaced by Majorana surface code patches, and parity measurements are replaced by lattice surgery, still only requiring local few-Majorana parity measurements. To this end, we discuss twist defects in Majorana fermion surface codes and adapt the technique of twist-based lattice surgery to fermionic codes. Moreover, we propose a family of codes that we refer to as Majorana color codes, which are obtained by concatenating Majorana surface codes with small Majorana fermion codes. Majorana surface and color codes can be used to decrease the space overhead and stabilizer weight compared to their bosonic counterparts.

  20. On the Finite Element Implementation of the Generalized Method of Cells Micromechanics Constitutive Model

    NASA Technical Reports Server (NTRS)

    Wilt, T. E.

    1995-01-01

    The Generalized Method of Cells (GMC), a micromechanics based constitutive model, is implemented into the finite element code MARC using the user subroutine HYPELA. Comparisons in terms of transverse deformation response, micro stress and strain distributions, and required CPU time are presented for GMC and finite element models of fiber/matrix unit cell. GMC is shown to provide comparable predictions of the composite behavior and requires significantly less CPU time as compared to a finite element analysis of the unit cell. Details as to the organization of the HYPELA code are provided with the actual HYPELA code included in the appendix.

  1. A Deterministic Computational Procedure for Space Environment Electron Transport

    NASA Technical Reports Server (NTRS)

    Nealy, John E.; Chang, C. K.; Norman, Ryan B.; Blattnig, Steve R.; Badavi, Francis F.; Adamcyk, Anne M.

    2010-01-01

    A deterministic computational procedure for describing the transport of electrons in condensed media is formulated to simulate the effects and exposures from spectral distributions typical of electrons trapped in planetary magnetic fields. The primary purpose for developing the procedure is to provide a means of rapidly performing numerous repetitive transport calculations essential for electron radiation exposure assessments for complex space structures. The present code utilizes well-established theoretical representations to describe the relevant interactions and transport processes. A combined mean free path and average trajectory approach is used in the transport formalism. For typical space environment spectra, several favorable comparisons with Monte Carlo calculations are made, indicating that the computational speed is not gained at the expense of accuracy.

  2. Mapping visual stimuli to perceptual decisions via sparse decoding of mesoscopic neural activity.

    PubMed

    Sajda, Paul

    2010-01-01

    In this talk I will describe our work investigating sparse decoding of neural activity, given a realistic mapping of the visual scene to neuronal spike trains generated by a model of primary visual cortex (V1). We use a linear decoder which imposes sparsity via an L1 norm. The decoder can be viewed as a decoding neuron (linear summation followed by a sigmoidal nonlinearity) in which there are relatively few non-zero synaptic weights. We find: (1) the best decoding performance is for a representation that is sparse in both space and time, (2) decoding of a temporal code results in better performance than a rate code and is also a better fit to the psychophysical data, (3) the number of neurons required for decoding increases monotonically as signal-to-noise in the stimulus decreases, with as little as 1% of the neurons required for decoding at the highest signal-to-noise levels, and (4) sparse decoding results in a more accurate decoding of the stimulus and is a better fit to psychophysical performance than a distributed decoding, for example one imposed by an L2 norm. We conclude that sparse coding is well-justified from a decoding perspective in that it results in a minimum number of neurons and maximum accuracy when sparse representations can be decoded from the neural dynamics.
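
    A sketch of such an L1-penalized decoding neuron using scikit-learn on placeholder spike counts; the talk's decoder operates on responses from a model of V1, which are not reproduced here.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        # Sparse linear decoder: linear summation of spike counts followed by
        # a sigmoid, with an L1 penalty zeroing most synaptic weights.
        rng = np.random.default_rng(0)
        X = rng.poisson(3.0, size=(200, 500)).astype(float)  # trials x neurons
        y = rng.integers(0, 2, size=200)                     # stimulus class

        decoder = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
        decoder.fit(X, y)
        n_used = np.count_nonzero(decoder.coef_)  # few non-zero weights survive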

  3. High-Performance Design Patterns for Modern Fortran

    DOE PAGES

    Haveraaen, Magne; Morris, Karla; Rouson, Damian; ...

    2015-01-01

    This paper presents ideas for using coordinate-free numerics in modern Fortran to achieve code flexibility in the partial differential equation (PDE) domain. We also show how Fortran, over the last few decades, has changed to become a language well-suited for state-of-the-art software development. Fortran's new coarray distributed data structure, the language's class mechanism, and its side-effect-free, pure procedure capability provide the scaffolding on which we implement HPC software. These features empower compilers to organize parallel computations with efficient communication. We present some programming patterns that support asynchronous evaluation of expressions comprised of parallel operations on distributed data. We implemented these patterns using coarrays and the message passing interface (MPI). We compared the codes' complexity and performance. The MPI code is much more complex and depends on external libraries. The MPI code on Cray hardware using the Cray compiler is 1.5–2 times faster than the coarray code on the same hardware. The Intel compiler implements coarrays atop Intel's MPI library, with the result apparently being 2–2.5 times slower than manually coded MPI despite exhibiting nearly linear scaling efficiency. As compilers mature and further improvements to coarrays come in Fortran 2015, we expect this performance gap to narrow.

  4. Final Technical Report for SBIR entitled Four-Dimensional Finite-Orbit-Width Fokker-Planck Code with Sources, for Neoclassical/Anomalous Transport Simulation of Ion and Electron Distributions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harvey, R. W.; Petrov, Yu. V.

    2013-12-03

    Within the US Department of Energy/Office of Fusion Energy magnetic fusion research program, there is an important whole-plasma-modeling need for a radio-frequency/neutral-beam-injection (RF/NBI) transport-oriented finite-difference Fokker-Planck (FP) code with combined capabilities for 4D (2R2V) geometry near the fusion plasma periphery, and computationally less demanding 3D (1R2V) bounce-averaged capabilities for plasma in the core of fusion devices. Proof-of-principle achievement of this goal was demonstrated in research carried out under Phase I of the SBIR award. Two DOE-sponsored codes were coupled: the CQL3D bounce-average Fokker-Planck code in which CompX has specialized, and the COGENT 4D, plasma edge-oriented Fokker-Planck code which has been constructed by Lawrence Livermore National Laboratory and Lawrence Berkeley Laboratory scientists. Coupling was achieved by using CQL3D-calculated velocity distributions, including an energetic tail resulting from NBI, as boundary conditions for the COGENT code over the two-dimensional velocity space on a spatial interface (flux) surface at a given radius near the plasma periphery. The finite-orbit-width fast ions from the CQL3D distributions penetrated into the peripheral plasma modeled by the COGENT code. This combined code demonstrates the feasibility of the proposed 3D/4D code. By combining these codes, the greatest computational efficiency is achieved subject to present modeling needs in toroidally symmetric magnetic fusion devices. The more efficient 3D code can be used in its regions of applicability, coupled to the more computationally demanding 4D code in higher-collisionality edge plasma regions where that extended capability is necessary for accurate representation of the plasma. More efficient code leads to greater use and utility of the model. An ancillary aim of the project is to make the combined 3D/4D code user friendly. Achievement of full coupling of these two Fokker-Planck codes will advance computational modeling of plasma devices important to the US DOE magnetic fusion energy program, in particular the DIII-D tokamak at General Atomics, San Diego, the NSTX spherical tokamak at Princeton, New Jersey, and the MST reversed-field pinch in Madison, Wisconsin. Validation studies of the code against experiments will improve understanding of physics important for magnetic fusion, and will increase our design capabilities for achieving the goals of the International Tokamak Experimental Reactor (ITER) project, in which the US is a participant and which seeks to demonstrate at least a factor of five in fusion power production divided by input power.

  5. Spatiotemporal Analysis of the Ebola Hemorrhagic Fever in West Africa in 2014

    NASA Astrophysics Data System (ADS)

    Xu, M.; Cao, C. X.; Guo, H. F.

    2017-09-01

    Ebola hemorrhagic fever (EHF) is an acute hemorrhagic disease caused by the Ebola virus, which is highly contagious. This paper aimed to explore the possible gathering areas of EHF cases in West Africa in 2014, and to identify endemic areas and their trends by means of time-space analysis. We mapped the distribution of EHF incidence and explored statistically significant space, time, and space-time disease clusters. We utilized hotspot analysis to find the spatial clustering pattern on the basis of the actual outbreak cases. Spatial-temporal cluster analysis was used to analyze the spatial and temporal distribution of disease clusters and to examine whether their distribution is statistically significant. Local clusters were investigated using Kulldorff's scan statistic approach. The results reveal that the epidemic clustered mainly in the western part of the region near the North Atlantic, with a clear regional distribution. For the current epidemic, spatial cluster analysis identified areas with a high incidence of EVD.

  6. VERSE - Virtual Equivalent Real-time Simulation

    NASA Technical Reports Server (NTRS)

    Zheng, Yang; Martin, Bryan J.; Villaume, Nathaniel

    2005-01-01

    Distributed real-time simulations provide important timing validation and hardware-in-the-loop results for the spacecraft flight software development cycle. Occasionally, the need for higher-fidelity modeling and more comprehensive debugging capabilities, combined with a limited amount of computational resources, calls for a non-real-time simulation environment that mimics the real-time environment. By creating a non-real-time environment that accommodates simulations and flight software designed for a multi-CPU real-time system, we can save development time, cut mission costs, and reduce the likelihood of errors. This paper presents such a solution: the Virtual Equivalent Real-time Simulation Environment (VERSE). VERSE turns the real-time operating system RTAI (Real-time Application Interface) into an event-driven simulator that runs in virtual real time. Designed to keep the original RTAI architecture as intact as possible, and therefore inheriting many of RTAI's capabilities, VERSE was implemented with remarkably little change to the RTAI source code. This small footprint, together with use of the same API, allows users to easily run the same application in both real-time and virtual-time environments. VERSE has been used to build a workstation testbed for NASA's Space Interferometry Mission (SIM PlanetQuest) instrument flight software. With its flexible simulation controls and inexpensive setup and replication costs, VERSE will become an invaluable tool in future mission development.

  7. SU-E-T-121: Analyzing the Broadening Effect On the Bragg Peak Due to Heterogeneous Geometries and Implementing User-Routines in the Monte-Carlo Code FLUKA in Order to Reduce Computation Time

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baumann, K; Weber, U; Simeonov, Y

    2015-06-15

    Purpose: The aim of this study was to analyze the modulating, broadening effect on the Bragg peak due to heterogeneous geometries such as multi-wire chambers in the beam path of a particle therapy beam line. The effect was described by a mathematical model which was implemented in the Monte-Carlo code FLUKA via user-routines, in order to reduce the computation time for the simulations. Methods: The depth dose curve of 80 MeV/u C12 ions in a water phantom was calculated using the Monte-Carlo code FLUKA (reference curve). The modulating effect on this dose distribution behind eleven mesh-like foils (periodicity ∼80 microns) occurring in a typical set of multi-wire and dose chambers was described mathematically by optimizing a normal distribution so that the reference curve convolved with this distribution equals the modulated dose curve. This distribution describes a displacement in water and was transformed into a probability distribution of the thickness of the eleven foils using the water-equivalent thickness of the foil's material. From this distribution the thickness distribution of a single foil was determined inversely. In FLUKA the heterogeneous foils were replaced by homogeneous foils and a user-routine was programmed that varies the thickness of the homogeneous foils for each simulated particle using this distribution. Results: Using the mathematical model and user-routine in FLUKA, the broadening effect could be reproduced exactly when replacing the heterogeneous foils by homogeneous ones. The computation time was reduced by 90 percent. Conclusion: In this study the broadening effect on the Bragg peak due to heterogeneous structures was analyzed, described by a mathematical model and implemented in FLUKA via user-routines. Applying these routines, the computing time was reduced by 90 percent. The developed tool can be used for any heterogeneous structure with dimensions of microns to millimeters, in principle even for organic materials like lung tissue.
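
    As a rough sketch of the two ingredients described above (all numbers illustrative, not taken from the study), one can broaden a toy reference depth-dose curve with a fitted Gaussian displacement kernel and then sample per-particle foil thicknesses from the implied distribution:

```python
# Toy sketch of the convolution model and the per-particle thickness
# sampling; all numerical values are illustrative assumptions.
import numpy as np

z = np.linspace(0.0, 60.0, 1200)                             # depth in water [mm]
dose_ref = np.exp(-((z - 50.0) / 1.5)**2) + 0.02 * z / 60.0  # toy Bragg curve

# 1) Broaden the reference curve with a Gaussian displacement kernel.
sigma = 0.3                                                  # fitted width [mm]
dz = z[1] - z[0]
kz = np.arange(-5.0 * sigma, 5.0 * sigma + dz, dz)
kernel = np.exp(-0.5 * (kz / sigma)**2)
kernel /= kernel.sum()
dose_mod = np.convolve(dose_ref, kernel, mode="same")        # modulated curve

# 2) Sample a per-particle foil thickness: the displacement is shared by
# 11 foils, so the per-foil width is sigma/sqrt(11), converted with an
# assumed water-equivalent ratio of the foil material.
rng = np.random.default_rng(1)
water_equivalent_ratio = 1.8                                 # hypothetical value
per_foil_sigma = sigma / np.sqrt(11.0) / water_equivalent_ratio
thickness = np.clip(rng.normal(0.08, per_foil_sigma, size=10_000), 0.0, None)
print("mean sampled foil thickness [mm]:", thickness.mean())
```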

  8. Coding for reliable satellite communications

    NASA Technical Reports Server (NTRS)

    Gaarder, N. T.; Lin, S.

    1986-01-01

    This research project was set up to study various kinds of coding techniques for error control in satellite and space communications for NASA Goddard Space Flight Center. During the project period, researchers investigated the following areas: (1) decoding of Reed-Solomon codes in terms of dual basis; (2) concatenated and cascaded error control coding schemes for satellite and space communications; (3) use of hybrid coding schemes (error correction and detection incorporated with retransmission) to improve system reliability and throughput in satellite communications; (4) good codes for simultaneous error correction and error detection; and (5) error control techniques for ring and star networks.

  9. GPS-Like Phasing Control of the Space Solar Power System Transmission Array

    NASA Technical Reports Server (NTRS)

    Psiaki, Mark L.

    2003-01-01

    The problem of phasing of the Space Solar Power System's transmission array has been addressed by developing a GPS-like radio navigation system. The goal of this system is to provide power transmission phasing control for each node of the array that causes the power signals to add constructively at the ground reception station. The phasing control system operates in a distributed manner, which makes it practical to implement. A leader node and two radio navigation beacons are used to control the power transmission phasing of multiple follower nodes. The necessary one-way communications to the follower nodes are implemented using the RF beacon signals. The phasing control system uses differential carrier phase relative navigation/timing techniques. A special feature of the system is an integer ambiguity resolution procedure that periodically resolves carrier phase cycle count ambiguities via encoding of pseudo-random number codes on the power transmission signals. The system is capable of achieving phasing accuracies on the order of 3 mm down to 0.4 mm depending on whether the radio navigation beacons operate in the L or C bands.

  10. Clustering of neural code words revealed by a first-order phase transition

    NASA Astrophysics Data System (ADS)

    Huang, Haiping; Toyoizumi, Taro

    2016-06-01

    A network of neurons in the central nervous system collectively represents information by its spiking activity states. Typically observed states, i.e., code words, occupy only a limited portion of the state space due to constraints imposed by network interactions. Geometrical organization of code words in the state space, critical for neural information processing, is poorly understood due to its high dimensionality. Here, we explore the organization of neural code words using retinal data by computing the entropy of code words as a function of Hamming distance from a particular reference code word. Specifically, we report that the retinal code words in the state space are divided into multiple distinct clusters separated by entropy gaps, and that this structure is shared with well-known associative memory networks in a recallable phase. Our analysis also elucidates a special nature of the all-silent state. The all-silent state is surrounded by the densest cluster of code words and located within a reachable distance from most code words. This code-word space structure quantitatively predicts typical deviation of a state-trajectory from its initial state. Altogether, our findings reveal a non-trivial heterogeneous structure of the code-word space that shapes information representation in a biological network.
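
    A minimal sketch of the entropy-versus-Hamming-distance computation, run on synthetic spiking data rather than the retinal recordings used in the study:

```python
# Synthetic stand-in for the analysis described above: entropy of observed
# code words as a function of Hamming distance from the all-silent state.
import numpy as np
from collections import Counter

rng = np.random.default_rng(0)
n_neurons, n_samples = 20, 50_000
words = (rng.random((n_samples, n_neurons)) < 0.08).astype(np.uint8)  # sparse spiking

dist = words.sum(axis=1)                  # Hamming distance from all-silent state
for d in range(n_neurons + 1):
    shell = words[dist == d]
    if len(shell) == 0:
        continue
    counts = Counter(map(bytes, shell))   # empirical frequency of each code word
    p = np.array(list(counts.values()), dtype=float)
    p /= p.sum()
    entropy = -(p * np.log2(p)).sum()     # plug-in entropy estimate [bits]
    print(f"d={d:2d}  words={len(shell):6d}  entropy={entropy:5.2f} bits")
```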

  11. Three-Dimensional Simulations of Electron Beams Focused by Periodic Permanent Magnets

    NASA Technical Reports Server (NTRS)

    Kory, Carol L.

    1999-01-01

    A fully three-dimensional (3D) model of an electron beam focused by a periodic permanent magnet (PPM) stack has been developed. First, the simulation code MAFIA was used to model a PPM stack using the magnetostatic solver. The exact geometry of the magnetic focusing structure was modeled; thus, no approximations were made regarding the off-axis fields. The fields from the static solver were loaded into the 3D particle-in-cell (PIC) solver of MAFIA where fully 3D behavior of the beam was simulated in the magnetic focusing field. The PIC solver computes the time integration of electromagnetic fields simultaneously with the time integration of the equations of motion of charged particles that move under the influence of those fields. Fields caused by those moving charges are also taken into account; thus, effects like space charge and magnetic forces between particles are fully simulated. The electron beam is simulated by a number of macro-particles. These macro-particles represent a given charge Q amounting to that of several million electrons in order to conserve computational time and memory. Particle motion is unrestricted, so particle trajectories can cross paths and move in three dimensions under the influence of 3D electric and magnetic fields. Correspondingly, there is no limit on the initial current density distribution of the electron beam, nor its density distribution at any time during the simulation. Simulation results including beam current density, percent ripple and percent transmission will be presented, and the effects that current, magnetic focusing strength, and thermal velocities have on beam behavior will be demonstrated using 3D movies showing the evolution of beam characteristics in time and space. Unlike typical beam optics models, this 3D model allows simulation of asymmetric designs such as non-circularly symmetric electrostatic or magnetic focusing as well as the inclusion of input/output couplers.
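
    A standard technique for this kind of macro-particle update is the Boris push; the self-contained sketch below is a generic stand-in for the MAFIA PIC mover, with illustrative beam and field values:

```python
# Generic Boris particle pusher (standard PIC technique; not MAFIA's
# actual solver). Beam and field parameters are illustrative.
import numpy as np

def boris_push(x, v, E, B, q_over_m, dt):
    """Advance positions x and velocities v (N x 3 arrays) by one step."""
    v_minus = v + 0.5 * q_over_m * E * dt                     # half electric kick
    t = 0.5 * q_over_m * B * dt                               # rotation vector
    s = 2.0 * t / (1.0 + np.sum(t * t, axis=1, keepdims=True))
    v_prime = v_minus + np.cross(v_minus, t)
    v_plus = v_minus + np.cross(v_prime, s)                   # magnetic rotation
    v_new = v_plus + 0.5 * q_over_m * E * dt                  # second half kick
    return x + v_new * dt, v_new

rng = np.random.default_rng(2)
n = 1000
x = rng.normal(scale=1e-3, size=(n, 3))                       # ~1 mm beam spot
v = np.column_stack([rng.normal(scale=1e5, size=n),           # thermal transverse
                     rng.normal(scale=1e5, size=n),
                     np.full(n, 1e7)])                        # axial velocity [m/s]
E = np.zeros((n, 3))
B = np.tile([0.0, 0.0, 0.1], (n, 1))                          # 0.1 T axial field
for _ in range(1000):
    x, v = boris_push(x, v, E, B, q_over_m=-1.76e11, dt=1e-12)
print("rms beam radius [m]:", np.sqrt((x[:, 0]**2 + x[:, 1]**2).mean()))
```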

  12. Space-time adaptive solution of inverse problems with the discrete adjoint method

    NASA Astrophysics Data System (ADS)

    Alexe, Mihai; Sandu, Adrian

    2014-08-01

    This paper develops a framework for the construction and analysis of discrete adjoint sensitivities in the context of time dependent, adaptive grid, adaptive step models. Discrete adjoints are attractive in practice since they can be generated with low effort using automatic differentiation. However, this approach brings several important challenges. The space-time adjoint of the forward numerical scheme may be inconsistent with the continuous adjoint equations. A reduction in accuracy of the discrete adjoint sensitivities may appear due to the inter-grid transfer operators. Moreover, the optimization algorithm may need to accommodate state and gradient vectors whose dimensions change between iterations. This work shows that several of these potential issues can be avoided through a multi-level optimization strategy using discontinuous Galerkin (DG) hp-adaptive discretizations paired with Runge-Kutta (RK) time integration. We extend the concept of dual (adjoint) consistency to space-time RK-DG discretizations, which are then shown to be well suited for the adaptive solution of time-dependent inverse problems. Furthermore, we prove that DG mesh transfer operators on general meshes are also dual consistent. This allows the simultaneous derivation of the discrete adjoint for both the numerical solver and the mesh transfer logic with an automatic code generation mechanism such as algorithmic differentiation (AD), potentially speeding up development of large-scale simulation codes. The theoretical analysis is supported by numerical results reported for a two-dimensional non-stationary inverse problem.

  13. 106 17 Telemetry Standards Chapter 2

    DTIC Science & Technology

    2017-07-31

    high frequency STC space-time code SOQPSK shaped offset quadrature phase shift keying UHF ultra-high frequency US&P United States...and Possessions VCO voltage-controlled oscillator VHF very-high frequency WCS Wireless Communication Service Telemetry Standards, RCC Standard...get interference. a. Telemetry Bands Air and space-to-ground telemetering is allocated in the ultra-high frequency (UHF) bands 1435 to 1535, 2200

  14. Adaptive and reliably acknowledged FSO communications

    NASA Astrophysics Data System (ADS)

    Fitz, Michael P.; Halford, Thomas R.; Kose, Cenk; Cromwell, Jonathan; Gordon, Steven

    2015-05-01

    Atmospheric turbulence causes the received signal intensity on free space optical (FSO) communication links to vary over time. Scintillation fades can stymie connectivity for milliseconds at a time. To approach the information-theoretic limits of communication in such time-varying channels, it is necessary either to code across extremely long blocks of data - thereby inducing unacceptable delays - or to vary the code rate according to the instantaneous channel conditions. We describe the design, laboratory testing, and over-the-air testing of an FSO modem that employs a protocol with adaptive coded modulation (ACM) and hybrid automatic repeat request. For links with fixed throughput, this protocol provides a 10 dB reduction in the required received signal-to-noise ratio (SNR); for links with fixed range, this protocol provides a greater than 3x increase in throughput. Independent U.S. Government tests demonstrate that our protocol effectively adapts the code rate to match the instantaneous channel conditions. The modem is able to provide throughputs in excess of 850 Mbps on links with ranges greater than 15 kilometers.
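
    A hedged sketch of the rate-adaptation logic such a protocol might use; the thresholds, rates, and throughputs below are illustrative placeholders, not the modem's actual operating points:

```python
# Illustrative adaptive-coded-modulation rate selection: pick the highest
# code rate whose SNR threshold the measured channel currently clears.
RATE_TABLE = [  # (minimum SNR [dB], code rate, throughput [Mbps]) - assumed values
    (18.0, 0.90, 850.0),
    (12.0, 0.75, 700.0),
    (6.0, 0.50, 470.0),
    (0.0, 0.25, 230.0),
]

def select_rate(snr_db: float):
    """Return (code_rate, throughput) for the instantaneous SNR estimate."""
    for threshold, rate, mbps in RATE_TABLE:
        if snr_db >= threshold:
            return rate, mbps
    return None  # below decoding threshold: fall back on HARQ retransmission

for snr in (20.0, 9.5, -3.0):
    print(snr, "->", select_rate(snr))
```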

  15. A method to compute SEU fault probabilities in memory arrays with error correction

    NASA Technical Reports Server (NTRS)

    Gercek, Gokhan

    1994-01-01

    With the increasing packing densities in VLSI technology, Single Event Upsets (SEU) due to cosmic radiation are becoming more of a critical issue in the design of space avionics systems. In this paper, a method is introduced to compute the fault (mishap) probability for a computer memory of size M words. It is assumed that a Hamming code is used for each word to provide single error correction. It is also assumed that every time a memory location is read, single errors are corrected. Memory locations are read randomly, with a read-time distribution that is assumed to be known. In such a scenario, a mishap is defined as two SEUs corrupting the same memory location prior to a read. The paper introduces a method to compute the overall mishap probability for the entire memory for a mission duration of T hours.
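
    Under simplifying assumptions (periodic scrubbing reads rather than the paper's random read model, and Poisson-distributed upsets), the mishap probability can be sketched as follows:

```python
# Simplified sketch of the mishap-probability idea, assuming periodic reads:
# a mishap is two or more upsets in one word between consecutive reads.
import math

def mishap_probability(m_words: int, lam_word: float,
                       tau_hr: float, mission_hr: float) -> float:
    """lam_word: SEU rate per word [1/hr]; tau_hr: read (scrub) interval [hr]."""
    x = lam_word * tau_hr
    p2 = 1.0 - math.exp(-x) * (1.0 + x)     # P(>= 2 upsets in one word/interval)
    intervals = mission_hr / tau_hr
    expected = m_words * intervals * p2     # expected number of mishaps
    return 1.0 - math.exp(-expected)        # P(at least one mishap)

# Example: 1 Mword memory, 1e-6 upsets/word/hr, hourly scrub, 1000 hr mission
print(mishap_probability(2**20, 1e-6, 1.0, 1000.0))
```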

  16. Simulations of Neon Pellets for Plasma Disruption Mitigation in Tokamaks

    NASA Astrophysics Data System (ADS)

    Bosviel, Nicolas; Samulyak, Roman; Parks, Paul

    2017-10-01

    Numerical studies of the ablation of neon pellets in tokamaks in the plasma disruption mitigation parameter space have been performed using a time-dependent pellet ablation model based on the front tracking code FronTier-MHD. The main features of the model include the explicit tracking of the solid pellet/ablated gas interface, a self-consistent evolving potential distribution in the ablation cloud, JxB forces, atomic processes, and an improved electrical conductivity model. The equation of state model accounts for atomic processes in the ablation cloud as well as deviations from the ideal gas law in the dense, cold layers of neon gas near the pellet surface. Simulations predict processes in the ablation cloud and pellet ablation rates and address the sensitivity of pellet ablation processes to details of physics models, in particular the equation of state.

  17. Quantum dense key distribution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Degiovanni, I.P.; Ruo Berchera, I.; Castelletto, S.

    2004-03-01

    This paper proposes a protocol for quantum dense key distribution. This protocol embeds the benefits of quantum dense coding and quantum key distribution and is able to generate shared secret keys four times more efficiently than the Bennett-Brassard 1984 protocol. We hereinafter prove the security of this scheme against individual eavesdropping attacks, and we present preliminary experimental results, showing its feasibility.

  18. Tunable Bandwidth Quantum Well Infrared Photo Detector (TB-QWIP)

    DTIC Science & Technology

    2003-12-01

    NAVAL POSTGRADUATE SCHOOL, MONTEREY, CALIFORNIA. Thesis. Approved for public release; distribution is unlimited. ABSTRACT: In this thesis a

  19. Two-Dimensional Optical CDMA System Parameters Limitations for Wavelength Hopping/Time-Spreading Scheme based on Simulation Experiment

    NASA Astrophysics Data System (ADS)

    Kandouci, Chahinaz; Djebbari, Ali

    2018-04-01

    A new family of two-dimensional optical hybrid codes, employing zero cross-correlation (ZCC) codes constructed by the balanced incomplete block design (BIBD) as both time-spreading and wavelength-hopping patterns, is used in this paper. The obtained codes have off-peak autocorrelation and cross-correlation values equal to zero and unity, respectively. The work in this paper is a computer experiment, performed using the Optisystem 9.0 software as a simulator, to determine the performance limitations of a wavelength hopping/time spreading (WH/TS) OCDMA system. Four system parameters were considered in this work: the optical fiber length (transmission distance), the bit rate, the chip spacing, and the transmitted power. This paper determines the parameter ranges for which the system maintains sufficient performance (BER ≤ 10^-9, Q ≥ 6).
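
    The correlation properties claimed for such codes are easy to check numerically; the toy single-pulse-per-wavelength code words below are illustrative stand-ins for the BIBD construction:

```python
# Illustrative check of 2D WH/TS correlation properties (toy codes, not the
# paper's BIBD design): off-peak autocorrelation 0, cross-correlation <= 1.
import numpy as np

def correlation_2d(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Periodic time-shift correlation of two (wavelength x time) codes."""
    n_chips = a.shape[1]
    return np.array([np.sum(a * np.roll(b, s, axis=1)) for s in range(n_chips)])

# Rows = wavelengths, columns = time chips; one pulse per wavelength row.
code_a = np.eye(3, dtype=int)                   # pulses at times 0, 1, 2
code_b = np.array([[1, 0, 0],
                   [0, 0, 1],
                   [0, 1, 0]])                  # pulses at times 0, 2, 1

print("auto :", correlation_2d(code_a, code_a))   # [3, 0, 0]
print("cross:", correlation_2d(code_a, code_b))   # [1, 1, 1]
```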

  20. CPIC: a curvilinear Particle-In-Cell code for plasma-material interaction studies

    NASA Astrophysics Data System (ADS)

    Delzanno, G.; Camporeale, E.; Moulton, J. D.; Borovsky, J. E.; MacDonald, E.; Thomsen, M. F.

    2012-12-01

    We present a recently developed Particle-In-Cell (PIC) code in curvilinear geometry called CPIC (Curvilinear PIC) [1], where the standard PIC algorithm is coupled with a grid generation/adaptation strategy. Through the grid generator, which maps the physical domain to a logical domain where the grid is uniform and Cartesian, the code can simulate domains of arbitrary complexity, including the interaction of complex objects with a plasma. At present the code is electrostatic. Poisson's equation (in logical space) can be solved either with an iterative method, based on the Conjugate Gradient (CG) or the Generalized Minimal Residual (GMRES) coupled with a multigrid solver used as a preconditioner, or directly with multigrid. The multigrid strategy is critical for the solver to perform optimally or nearly optimally as the dimension of the problem increases. CPIC also features a hybrid particle mover, where the computational particles are characterized by position in logical space and velocity in physical space. The advantage of a hybrid mover, as opposed to more conventional movers that move particles directly in the physical space, is that the interpolation of the particles in logical space is straightforward and computationally inexpensive, since one does not have to track the position of the particle. We will present our latest progress on the development of the code and document the code performance on standard plasma-physics tests. Then we will present the (preliminary) application of the code to a basic dynamic-charging problem, namely the charging and shielding of a spherical spacecraft in a magnetized plasma for various levels of magnetization and including the pulsed emission of an electron beam from the spacecraft. The dynamical evolution of the sheath and the time-dependent current collection will be described. This study is in support of the ConnEx mission concept to use an electron beam from a magnetospheric spacecraft to trace magnetic field lines from the magnetosphere to the ionosphere [2]. [1] G.L. Delzanno, E. Camporeale, "CPIC: a new Particle-in-Cell code for plasma-material interaction studies", in preparation (2012). [2] J.E. Borovsky, D.J. McComas, M.F. Thomsen, J.L. Burch, J. Cravens, C.J. Pollock, T.E. Moore, and S.B. Mende, "Magnetosphere-Ionosphere Observatory (MIO): A multisatellite mission designed to solve the problem of what generates auroral arcs," Eos. Trans. Amer. Geophys. Union 79 (45), F744 (2000).

  1. National Space Science Data Center (NSSDC) Data Listing

    NASA Technical Reports Server (NTRS)

    1980-01-01

    Satellite and nonsatellite data available from the National Space Science Data Center are listed. The Satellite Data listing includes the spacecraft name, launch date, and an alphabetical list of experiments. The Non-Satellite Data listing contains ground-based data, models, computer routines, and composite spacecraft data. The data set name, data form code, quantity of data, and the time span covered are included in the data sets of both listings where appropriate. Geodetic tracking data sets are also included.

  2. Modelling of Space-Time Soil Moisture in Savannas and its Relation to Vegetation Patterns

    NASA Astrophysics Data System (ADS)

    Rodriguez-Iturbe, I.; Mohanty, B.; Chen, Z.

    2017-12-01

    A physically derived space-time representation of the soil moisture field is presented. It includes the incorporation of a "jitter" process acting over the space-time soil moisture field and accounting for the short distance heterogeneities in topography, soil, and vegetation characteristics. The modelling scheme allows for the representation of spatial random fluctuations of soil moisture at small spatial scales and reproduces quite well the space-time correlation structure of soil moisture from a field study in Oklahoma. It is shown that the islands of soil moisture above different thresholds have sizes which follow power-law distributions over an extended range of scales. A discussion is provided about the possible links of this feature with the observed power-law distributions of the clusters of trees in savannas.

  3. Space-time thermodynamics of the glass transition

    NASA Astrophysics Data System (ADS)

    Merolle, Mauro; Garrahan, Juan P.; Chandler, David

    2005-08-01

    We consider the probability distribution for fluctuations in dynamical action and similar quantities related to dynamic heterogeneity. We argue that the so-called “glass transition” is a manifestation of low action tails in these distributions where the entropy of trajectory space is subextensive in time. These low action tails are a consequence of dynamic heterogeneity and an indication of phase coexistence in trajectory space. The glass transition, where the system falls out of equilibrium, is then an order-disorder phenomenon in space-time occurring at a temperature Tg, which is a weak function of measurement time. We illustrate our perspective with facilitated lattice models and note how these ideas apply more generally.

  4. 500  Gb/s free-space optical transmission over strong atmospheric turbulence channels.

    PubMed

    Qu, Zhen; Djordjevic, Ivan B

    2016-07-15

    We experimentally demonstrate a high-spectral-efficiency, large-capacity free-space optical (FSO) transmission system using low-density parity-check (LDPC)-coded quadrature phase shift keying (QPSK) combined with orbital angular momentum (OAM) multiplexing. The strong atmospheric turbulence channel is emulated by two spatial light modulators on which four randomly generated azimuthal phase patterns yielding the Andrews spectrum are recorded. The validity of such an approach is verified by reproducing the intensity distribution and irradiance correlation function (ICF) from the full-scale simulator. Excellent agreement of experimental, numerical, and analytical results is found. To reduce the phase distortion induced by the turbulence emulator, inexpensive wavefront-sensorless adaptive optics (AO) is used. To deal with the remaining channel impairments, a large-girth LDPC code is used. To further improve the aggregate data rate, the OAM multiplexing is combined with WDM, and 500 Gb/s optical transmission over the strong atmospheric turbulence channels is demonstrated.

  5. The integrated design and archive of space-borne signal processing and compression coding

    NASA Astrophysics Data System (ADS)

    He, Qiang-min; Su, Hao-hang; Wu, Wen-bo

    2017-10-01

    With users' increasing demand for the extraction of remote sensing image information, there is an urgent need to significantly enhance the whole system's imaging quality and imaging capability through an integrated design that achieves a compact structure, low mass, and higher attitude maneuverability. At the present stage, the remote sensing camera's video signal processing unit and its image compression and coding unit are distributed across different devices. The volume, weight, and power consumption of these two units are relatively large, which fails to meet the requirements of a high-mobility remote sensing camera. Based on the technical requirements of the high-mobility remote sensing camera, this paper designs a space-borne integrated signal processing and compression circuit by drawing on several technologies, such as high-speed, high-density analog-digital mixed PCB design, embedded DSP technology, and image compression based on special-purpose chips. This circuit lays a solid foundation for research on the high-mobility remote sensing camera.

  6. Trajectory-based heating analysis for the European Space Agency/Rosetta Earth Return Vehicle

    NASA Technical Reports Server (NTRS)

    Henline, William D.; Tauber, Michael E.

    1994-01-01

    A coupled, trajectory-based flowfield and material thermal-response analysis is presented for the European Space Agency proposed Rosetta comet nucleus sample return vehicle. The probe returns to earth along a hyperbolic trajectory with an entry velocity of 16.5 km/s and requires an ablative heat shield on the forebody. Combined radiative and convective ablating flowfield analyses were performed for the significant heating portion of the shallow ballistic entry trajectory. Both quasisteady ablation and fully transient analyses were performed for a heat shield composed of carbon-phenolic ablative material. Quasisteady analysis was performed using the two-dimensional axisymmetric codes RASLE and BLIMPK. Transient computational results were obtained from the one-dimensional ablation/conduction code CMA. Results are presented for heating, temperature, and ablation rate distributions over the probe forebody for various trajectory points. Comparison of transient and quasisteady results indicates that, for the heating pulse encountered by this probe, the quasisteady approach is conservative from the standpoint of predicted surface recession.

  7. Investigation of HZETRN 2010 as a Tool for Single Event Effect Qualification of Avionics Systems

    NASA Technical Reports Server (NTRS)

    Rojdev, Kristina; Atwell, William; Boeder, Paul; Koontz, Steve

    2014-01-01

    NASA's future missions are focused on deep space for human exploration that do not provide a simple emergency return to Earth. In addition, the deep space environment contains a constant background Galactic Cosmic Ray (GCR) radiation exposure, as well as periodic Solar Particle Events (SPEs) that can produce intense amounts of radiation in a short amount of time. Given these conditions, it is important that the avionics systems for deep space human missions are not susceptible to Single Event Effects (SEE) that can occur from radiation interactions with electronic components. The typical approach to minimizing SEE is to use heritage hardware and extensive testing programs, which are very costly. Previous work by Koontz et al. [1] utilized an analysis-based method for investigating electronic component susceptibility. In their paper, FLUKA, a Monte Carlo transport code, was used to calculate SEE and single event upset (SEU) rates. This code was then validated against in-flight data. In addition, CREME-96, a deterministic code, was also compared with FLUKA and in-flight data. However, FLUKA has a long run-time (on the order of days), and CREME-96 has not been updated in several years. This paper will investigate the use of HZETRN 2010, a deterministic transport code developed at NASA Langley Research Center, as another tool that can be used to analyze SEE and SEU rates. The benefits of using HZETRN over FLUKA and CREME-96 are that it has a very fast run time (on the order of minutes) and has been shown to be of similar accuracy to other deterministic and Monte Carlo codes when considering dose [2, 3, 4]. The 2010 version of HZETRN has updated its treatment of secondary neutrons and thus has improved its accuracy over previous versions. In this paper, the Linear Energy Transfer (LET) spectra are of interest rather than the total ionizing dose. Therefore, the LET spectra output from HZETRN 2010 will be compared with the FLUKA and in-flight data to validate HZETRN 2010 as a computational tool for SEE qualification by analysis. Furthermore, extrapolation of these data to interplanetary environments at 1 AU will be investigated to determine whether HZETRN 2010 can be used successfully and confidently for deep space mission analyses.

  8. Extending DIRAC File Management with Erasure-Coding for efficient storage.

    NASA Astrophysics Data System (ADS)

    Cadellin Skipsey, Samuel; Todev, Paulin; Britton, David; Crooks, David; Roy, Gareth

    2015-12-01

    The state of the art in Grid style data management is to achieve increased resilience of data via multiple complete replicas of data files across multiple storage endpoints. While this is effective, it is not the most space-efficient approach to resilience, especially when the reliability of individual storage endpoints is sufficiently high that only a few will be inactive at any point in time. We report on work performed as part of GridPP [1], extending the DIRAC File Catalogue and file management interface to allow the placement of erasure-coded files: each file distributed as N identically-sized chunks of data striped across a vector of storage endpoints, encoded such that any M chunks can be lost and the original file can be reconstructed. The tools developed are transparent to the user, and, as well as allowing uploading and downloading of data to Grid storage, also provide the possibility of parallelising access across all of the distributed chunks at once, improving data transfer and IO performance. We expect this approach to be of most interest to smaller VOs, who have tighter bounds on the storage available to them, but larger (WLCG) VOs may be interested as their total data increases during Run 2. We provide an analysis of the costs and benefits of the approach, along with future development and implementation plans in this area. In general, overheads for multiple file transfers provide the largest issue for competitiveness of this approach at present.
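
    As a minimal illustration of the chunking-plus-redundancy idea (single XOR parity, i.e. M = 1; a production system such as the one described would use a stronger erasure code, e.g. Reed-Solomon, to survive M > 1 losses):

```python
# Minimal single-parity erasure coding: split data into n_data chunks plus
# one XOR parity chunk, and rebuild any one lost chunk from the survivors.
import functools
import operator

def encode(data: bytes, n_data: int) -> list[bytes]:
    """Split data into n_data equal chunks and append one XOR parity chunk."""
    size = -(-len(data) // n_data)                       # ceiling division
    padded = data.ljust(n_data * size, b"\0")
    chunks = [padded[i * size:(i + 1) * size] for i in range(n_data)]
    parity = bytes(functools.reduce(operator.xor, col) for col in zip(*chunks))
    return chunks + [parity]

def reconstruct(chunks: list[bytes], missing: int) -> bytes:
    """Rebuild any single lost chunk as the XOR of all surviving chunks."""
    survivors = [c for i, c in enumerate(chunks) if i != missing]
    return bytes(functools.reduce(operator.xor, col) for col in zip(*survivors))

stripes = encode(b"example payload destined for grid storage", n_data=4)
assert reconstruct(stripes, 2) == stripes[2]             # recover a data chunk
assert reconstruct(stripes, 4) == stripes[4]             # recover the parity
```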

  9. GRID-BASED EXPLORATION OF COSMOLOGICAL PARAMETER SPACE WITH SNAKE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mikkelsen, K.; Næss, S. K.; Eriksen, H. K., E-mail: kristin.mikkelsen@astro.uio.no

    2013-11-10

    We present a fully parallelized grid-based parameter estimation algorithm for investigating multidimensional likelihoods called Snake, and apply it to cosmological parameter estimation. The basic idea is to map out the likelihood grid-cell by grid-cell according to decreasing likelihood, and stop when a certain threshold has been reached. This approach improves vastly on the 'curse of dimensionality' problem plaguing standard grid-based parameter estimation simply by disregarding grid cells with negligible likelihood. The main advantages of this method compared to standard Metropolis-Hastings Markov Chain Monte Carlo methods include (1) trivial extraction of arbitrary conditional distributions; (2) direct access to Bayesian evidences; (3) better sampling of the tails of the distribution; and (4) nearly perfect parallelization scaling. The main disadvantage is, as in the case of brute-force grid-based evaluation, a dependency on the number of parameters, N_par. One of the main goals of the present paper is to determine how large N_par can be, while still maintaining reasonable computational efficiency; we find that N_par = 12 is well within the capabilities of the method. The performance of the code is tested by comparing cosmological parameters estimated using Snake and the WMAP-7 data with those obtained using CosmoMC, the current standard code in the field. We find fully consistent results, with similar computational expenses, but shorter wall time due to the perfect parallelization scheme.
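
    The cell-by-cell expansion in order of decreasing likelihood can be sketched with a priority queue; the 2-D Gaussian likelihood below is a stand-in for a real posterior, and all names and thresholds are hypothetical:

```python
# Best-first grid exploration in the spirit of the algorithm described
# above: expand cells in decreasing likelihood, stop below a threshold.
import heapq
import numpy as np

def log_like(idx, center=(50, 50), scale=8.0):
    """Toy 2-D Gaussian log-likelihood over integer grid indices."""
    return -0.5 * ((idx[0] - center[0])**2 + (idx[1] - center[1])**2) / scale**2

def explore(start, threshold):
    """Return {cell: log-likelihood} for all cells above the threshold."""
    visited = {}
    heap = [(-log_like(start), start)]        # max-heap via negated values
    seen = {start}
    while heap:
        neg_ll, cell = heapq.heappop(heap)
        if -neg_ll < threshold:
            continue                          # negligible likelihood: skip
        visited[cell] = -neg_ll
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nb = (cell[0] + di, cell[1] + dj)
            if nb not in seen:
                seen.add(nb)
                heapq.heappush(heap, (-log_like(nb), nb))
    return visited

cells = explore(start=(50, 50), threshold=-12.5)   # roughly a 5-sigma contour
print("evaluated cells:", len(cells))
```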

  10. Hypersonic Viscous Flow Over Large Roughness Elements

    NASA Technical Reports Server (NTRS)

    Chang, Chau-Lyan; Choudhari, Meelan M.

    2009-01-01

    Viscous flow over discrete or distributed surface roughness has great implications for hypersonic flight due to aerothermodynamic considerations related to laminar-turbulent transition. Current prediction capability is greatly hampered by the limited knowledge base for such flows. To help fill that gap, numerical computations are used to investigate the intricate flow physics involved. An unstructured-mesh, compressible Navier-Stokes code based on the space-time conservation element, solution element (CESE) method is used to perform time-accurate Navier-Stokes calculations for two roughness shapes investigated in wind tunnel experiments at NASA Langley Research Center. It was found through a 2D parametric study that at subcritical Reynolds numbers of the boundary layers, absolute instability, resulting in vortex shedding downstream, is likely to weaken at supersonic free-stream conditions. On the other hand, convective instability may be the dominant mechanism for supersonic boundary layers. Three-dimensional calculations for a rectangular or cylindrical roughness element at post-shock Mach numbers of 4.1 and 6.5 also confirm that no self-sustained vortex generation is present.

  11. Test Analysis Tools to Ensure Higher Quality of On-Board Real Time Software for Space Applications

    NASA Astrophysics Data System (ADS)

    Boudillet, O.; Mescam, J.-C.; Dalemagne, D.

    2008-08-01

    EADS Astrium Space Transportation, in its Les Mureaux premises, is responsible for the onboard SW of the French M51 nuclear deterrent missile. Over 1 million lines of code, mostly in ADA, were also developed there for the Automated Transfer Vehicle (ATV) onboard SW and the flight control SW of the ARIANE5 launcher, which put the ATV into orbit. As part of the ATV SW, ASTRIUM ST has developed the first Category A SW ever qualified for a European space application. To ensure that all these embedded SW have been developed with the highest quality and reliability level, specific development tools have been designed to cover the steps of source code verification, automated validation testing, and complete target instruction coverage verification. Three such dedicated tools are presented here.

  12. Construction and Utilization of a Beowulf Computing Cluster: A User's Perspective

    NASA Technical Reports Server (NTRS)

    Woods, Judy L.; West, Jeff S.; Sulyma, Peter R.

    2000-01-01

    Lockheed Martin Space Operations - Stennis Programs (LMSO) at the John C. Stennis Space Center (NASA/SSC) has designed and built a Beowulf computer cluster which is owned by NASA/SSC and operated by LMSO. The design and construction of the cluster are detailed in this paper. The cluster is currently used for Computational Fluid Dynamics (CFD) simulations. The CFD codes in use and their applications are discussed. Examples of some of the work are also presented. Performance benchmark studies have been conducted for the CFD codes being run on the cluster. The results of two of the studies are presented and discussed. The cluster is not currently being utilized to its full potential; therefore, plans are underway to add more capabilities. These include the addition of structural, thermal, fluid, and acoustic Finite Element Analysis codes as well as real-time data acquisition and processing during test operations at NASA/SSC. These plans are discussed as well.

  13. The Role of Architectural and Learning Constraints in Neural Network Models: A Case Study on Visual Space Coding.

    PubMed

    Testolin, Alberto; De Filippo De Grazia, Michele; Zorzi, Marco

    2017-01-01

    The recent "deep learning revolution" in artificial neural networks had strong impact and widespread deployment for engineering applications, but the use of deep learning for neurocomputational modeling has been so far limited. In this article we argue that unsupervised deep learning represents an important step forward for improving neurocomputational models of perception and cognition, because it emphasizes the role of generative learning as opposed to discriminative (supervised) learning. As a case study, we present a series of simulations investigating the emergence of neural coding of visual space for sensorimotor transformations. We compare different network architectures commonly used as building blocks for unsupervised deep learning by systematically testing the type of receptive fields and gain modulation developed by the hidden neurons. In particular, we compare Restricted Boltzmann Machines (RBMs), which are stochastic, generative networks with bidirectional connections trained using contrastive divergence, with autoencoders, which are deterministic networks trained using error backpropagation. For both learning architectures we also explore the role of sparse coding, which has been identified as a fundamental principle of neural computation. The unsupervised models are then compared with supervised, feed-forward networks that learn an explicit mapping between different spatial reference frames. Our simulations show that both architectural and learning constraints strongly influenced the emergent coding of visual space in terms of distribution of tuning functions at the level of single neurons. Unsupervised models, and particularly RBMs, were found to more closely adhere to neurophysiological data from single-cell recordings in the primate parietal cortex. These results provide new insights into how basic properties of artificial neural networks might be relevant for modeling neural information processing in biological systems.

  14. The Role of Architectural and Learning Constraints in Neural Network Models: A Case Study on Visual Space Coding

    PubMed Central

    Testolin, Alberto; De Filippo De Grazia, Michele; Zorzi, Marco

    2017-01-01

    The recent “deep learning revolution” in artificial neural networks had strong impact and widespread deployment for engineering applications, but the use of deep learning for neurocomputational modeling has been so far limited. In this article we argue that unsupervised deep learning represents an important step forward for improving neurocomputational models of perception and cognition, because it emphasizes the role of generative learning as opposed to discriminative (supervised) learning. As a case study, we present a series of simulations investigating the emergence of neural coding of visual space for sensorimotor transformations. We compare different network architectures commonly used as building blocks for unsupervised deep learning by systematically testing the type of receptive fields and gain modulation developed by the hidden neurons. In particular, we compare Restricted Boltzmann Machines (RBMs), which are stochastic, generative networks with bidirectional connections trained using contrastive divergence, with autoencoders, which are deterministic networks trained using error backpropagation. For both learning architectures we also explore the role of sparse coding, which has been identified as a fundamental principle of neural computation. The unsupervised models are then compared with supervised, feed-forward networks that learn an explicit mapping between different spatial reference frames. Our simulations show that both architectural and learning constraints strongly influenced the emergent coding of visual space in terms of distribution of tuning functions at the level of single neurons. Unsupervised models, and particularly RBMs, were found to more closely adhere to neurophysiological data from single-cell recordings in the primate parietal cortex. These results provide new insights into how basic properties of artificial neural networks might be relevant for modeling neural information processing in biological systems. PMID:28377709

  15. Design and implementation of a scene-dependent dynamically selfadaptable wavefront coding imaging system

    NASA Astrophysics Data System (ADS)

    Carles, Guillem; Ferran, Carme; Carnicer, Artur; Bosch, Salvador

    2012-01-01

    A computational imaging system based on wavefront coding is presented. Wavefront coding provides an extension of the depth-of-field at the expense of a slight reduction of image quality. This trade-off results from the amount of coding used. By using spatial light modulators, a flexible coding is achieved which permits it to be increased or decreased as needed. In this paper a computational method is proposed for evaluating the output of a wavefront coding imaging system equipped with a spatial light modulator, with the aim of thus making it possible to implement the most suitable coding strength for a given scene. This is achieved in an unsupervised manner, thus the whole system acts as a dynamically selfadaptable imaging system. The program presented here controls the spatial light modulator and the camera, and also processes the images in a synchronised way in order to implement the dynamic system in real time. A prototype of the system was implemented in the laboratory and illustrative examples of the performance are reported in this paper. Program summary: Program title: DynWFC (Dynamic WaveFront Coding) Catalogue identifier: AEKC_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEKC_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 10 483 No. of bytes in distributed program, including test data, etc.: 2 437 713 Distribution format: tar.gz Programming language: Labview 8.5 and NI Vision and MinGW C Compiler Computer: Tested on PC Intel ® Pentium ® Operating system: Tested on Windows XP Classification: 18. Nature of problem: The program implements an enhanced wavefront coding imaging system able to adapt the degree of coding to the requirements of a specific scene. The program controls the acquisition by a camera, the display of a spatial light modulator and the image processing operations synchronously. The spatial light modulator is used to implement the phase mask with flexibility given the trade-off between depth-of-field extension and image quality achieved. The action of the program is to evaluate the depth-of-field requirements of the specific scene and subsequently control the coding established by the spatial light modulator, in real time.

  16. Direct-Y: Fast Acquisition of the GPS PPS Signal

    NASA Technical Reports Server (NTRS)

    Namoos, Omar M.; DiEsposti, Raymond S.

    1996-01-01

    The NAVSTAR Global Positioning System (GPS) provides positioning and time information to military users via the Precise Positioning Service (PPS), which typically allows users a significant margin of precision over the commercially available Standard Positioning Service (SPS). Military sets that rely on first acquiring the SPS Coarse Acquisition (C/A) code read from the data message the handover word (HOW) that provides the time-of-signal transmission needed to acquire and lock onto the PPS Y-code. Under extreme battlefield conditions, the use of GPS would be denied to the warfighter who cannot pick up the un-encrypted C/A code. Studies are underway at the GPS Joint Program Office (JPO) at the Space and Missile Center, Los Angeles Air Force Base, aimed at developing the capability to directly acquire Y-code without first acquiring C/A code. This paper briefly outlines efforts to develop 'direct-Y' acquisition and various approaches to solving this problem. The potential ramifications of direct-Y for military users are also discussed.

  17. Performance of a parallel code for the Euler equations on hypercube computers

    NASA Technical Reports Server (NTRS)

    Barszcz, Eric; Chan, Tony F.; Jesperson, Dennis C.; Tuminaro, Raymond S.

    1990-01-01

    The performance of hypercubes was evaluated on a computational fluid dynamics problem, and the parallel environment issues that must be addressed, such as algorithm changes, implementation choices, programming effort, and programming environment, were considered. The evaluation focuses on a widely used fluid dynamics code, FLO52, which solves the two-dimensional steady Euler equations describing flow around an airfoil. The code development experience is described, including interacting with the operating system, utilizing the message-passing communication system, and code modifications necessary to increase parallel efficiency. Results from two hypercube parallel computers (a 16-node iPSC/2 and a 512-node NCUBE/ten) are discussed and compared. In addition, a mathematical model of the execution time was developed as a function of several machine and algorithm parameters. This model accurately predicts the actual run times obtained and is used to explore the performance of the code in interesting yet physically realizable regions of the parameter space. Based on this model, predictions about future hypercubes are made.

  18. Review of Hybrid (Deterministic/Monte Carlo) Radiation Transport Methods, Codes, and Applications at Oak Ridge National Laboratory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wagner, John C; Peplow, Douglas E.; Mosher, Scott W

    2011-01-01

    This paper provides a review of the hybrid (Monte Carlo/deterministic) radiation transport methods and codes used at the Oak Ridge National Laboratory and examples of their application for increasing the efficiency of real-world, fixed-source Monte Carlo analyses. The two principal hybrid methods are (1) Consistent Adjoint Driven Importance Sampling (CADIS) for optimization of a localized detector (tally) region (e.g., flux, dose, or reaction rate at a particular location) and (2) Forward Weighted CADIS (FW-CADIS) for optimizing distributions (e.g., mesh tallies over all or part of the problem space) or multiple localized detector regions (e.g., simultaneous optimization of two or more localized tally regions). The two methods have been implemented and automated in both the MAVRIC sequence of SCALE 6 and ADVANTG, a code that works with the MCNP code. As implemented, the methods utilize the results of approximate, fast-running 3-D discrete ordinates transport calculations (with the Denovo code) to generate consistent space- and energy-dependent source and transport (weight windows) biasing parameters. These methods and codes have been applied to many relevant and challenging problems, including calculations of PWR ex-core thermal detector response, dose rates throughout an entire PWR facility, site boundary dose from arrays of commercial spent fuel storage casks, radiation fields for criticality accident alarm system placement, and detector response for special nuclear material detection scenarios and nuclear well-logging tools. Substantial computational speed-ups, generally O(10^2-10^4), have been realized for all applications to date. This paper provides a brief review of the methods, their implementation, results of their application, and current development activities, as well as a considerable list of references for readers seeking more information about the methods and/or their applications.

  19. A novel neutron energy spectrum unfolding code using particle swarm optimization

    NASA Astrophysics Data System (ADS)

    Shahabinejad, H.; Sohrabpour, M.

    2017-07-01

    A novel neutron Spectrum Deconvolution using Particle Swarm Optimization (SDPSO) code has been developed to unfold the neutron spectrum from a pulse-height distribution and a response matrix. Particle Swarm Optimization (PSO) imitates the social behavior of bird flocks to solve complex optimization problems. The results of the SDPSO code have been compared with those of standard spectra and of the recently published Two-steps Genetic Algorithm Spectrum Unfolding (TGASU) code. The TGASU code had previously been compared with other codes such as MAXED, GRAVEL, FERDOR and GAMCD and shown to be more accurate than those codes. The results of the SDPSO code match well with those of the TGASU code for both under-determined and over-determined problems. In addition, the SDPSO code has been shown to be nearly two times faster than the TGASU code.
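
    A generic PSO unfolding loop for a toy response matrix (illustrative only, not the SDPSO implementation) might look like the following, minimizing the residual ||R x - y|| over non-negative spectra x:

```python
# Toy PSO-based unfolding: find a non-negative spectrum x minimizing
# ||R x - y|| for an assumed random response matrix R and data y.
import numpy as np

rng = np.random.default_rng(3)
n_bins = 8
R = np.abs(rng.normal(size=(12, n_bins)))      # toy response matrix
x_true = np.abs(rng.normal(size=n_bins))
y = R @ x_true                                 # simulated pulse-height data

n_particles, iters = 40, 300
pos = np.abs(rng.normal(size=(n_particles, n_bins)))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_cost = np.linalg.norm(R @ pos.T - y[:, None], axis=0)
gbest = pbest[pbest_cost.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, n_bins))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0.0, None)        # keep spectra non-negative
    cost = np.linalg.norm(R @ pos.T - y[:, None], axis=0)
    improved = cost < pbest_cost
    pbest[improved], pbest_cost[improved] = pos[improved], cost[improved]
    gbest = pbest[pbest_cost.argmin()].copy()

print("relative error:", np.linalg.norm(gbest - x_true) / np.linalg.norm(x_true))
```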

  20. Limits of applicability of the quasilinear approximation to the electrostatic wave-plasma interaction

    NASA Astrophysics Data System (ADS)

    Zacharegkas, Georgios; Isliker, Heinz; Vlahos, Loukas

    2016-11-01

    The limitation of the Quasilinear Theory (QLT) to describe the diffusion of electrons and ions in velocity space when interacting with a spectrum of large amplitude electrostatic Langmuir, Upper and Lower hybrid waves, is analyzed. We analytically and numerically estimate the threshold for the amplitude of the waves above which the QLT breaks down, using a test particle code. The evolution of the velocity distribution, the velocity-space diffusion coefficients, the driven current, and the heating of the particles are investigated, for the interaction with small and large amplitude electrostatic waves, that is, in both regimes, where QLT is valid and where it clearly breaks down.

  1. Viriato: a Fourier-Hermite spectral code for strongly magnetised fluid-kinetic plasma dynamics

    NASA Astrophysics Data System (ADS)

    Loureiro, Nuno; Dorland, William; Fazendeiro, Luis; Kanekar, Anjor; Mallet, Alfred; Zocco, Alessandro

    2015-11-01

    We report on the algorithms and numerical methods used in Viriato, a novel fluid-kinetic code that solves two distinct sets of equations: (i) the Kinetic Reduced Electron Heating Model equations [Zocco & Schekochihin, 2011] and (ii) the kinetic reduced MHD (KRMHD) equations [Schekochihin et al., 2009]. Two main applications of these equations are magnetised (Alfvénic) plasma turbulence and magnetic reconnection. Viriato uses operator splitting to separate the dynamics parallel and perpendicular to the ambient magnetic field (assumed strong). Along the magnetic field, Viriato allows for either a second-order accurate MacCormack method or, for higher accuracy, a spectral-like scheme. Perpendicular to the field Viriato is pseudo-spectral, and the time integration is performed by means of an iterative predictor-corrector scheme. In addition, a distinctive feature of Viriato is its spectral representation of the parallel velocity-space dependence, achieved by means of a Hermite representation of the perturbed distribution function. A series of linear and nonlinear benchmarks and tests are presented, with focus on 3D decaying kinetic turbulence. Work partially supported by Fundação para a Ciência e Tecnologia via Grants UID/FIS/50010/2013 and IF/00530/2013.

  2. On the temperature prediction in a fire escape passage

    NASA Astrophysics Data System (ADS)

    Casano, G.; Piva, S.

    2017-11-01

    Fire safety engineering requires a detailed understanding of fire behaviour and of its effects on structures and people. Many factors may condition the fire scenario, for example heat transfer between the flame and the boundary structures. Advanced numerical codes for the prediction of fire behaviour are currently available; however, these solutions often require heavy calculations and long times. In this context, analytical solutions can be useful for a fast analysis of simplified schematizations, after which the advanced fire codes can be applied more effectively. In this contribution, the temperature in a fire escape passage, separated from a fire room by a thermally resistant wall, is analysed. The escape space is included in a building where the neighbouring rooms are at a constant undisturbed temperature. The presence of the neighbouring rooms is accounted for with an equivalent heat transfer coefficient, in a boundary condition of the third type. An analytical solution is used to predict the temperature distribution during the fire. It allows useful information to be obtained on the temperature reached in the escape area in contact with a burning room, and it is also useful for a fast choice of the thermal characteristics of a firewall.

  3. Representation of Serendipitous Scientific Data

    NASA Technical Reports Server (NTRS)

    James, Mark

    2006-01-01

    A computer program defines and implements an innovative kind of data structure that can be used for representing information derived from serendipitous discoveries made via collection of scientific data on long exploratory spacecraft missions. Data structures capable of collecting any kind of data can easily be implemented in advance, but the task of designing a fixed and efficient data structure suitable for processing raw data into useful information and taking advantage of serendipitous scientific discovery is becoming increasingly difficult as missions go deeper into space. The present software eases the task by enabling definition of arbitrarily complex data structures that can adapt at run time as raw data are transformed into other types of information. This software runs on a variety of computers, and can be distributed in either source code or binary code form. It must be run in conjunction with any one of a number of Lisp compilers that are available commercially or as shareware. It has no specific memory requirements and depends upon the other software with which it is used. This program is implemented as a library that is called by, and becomes folded into, the other software with which it is used.

  4. Visco-elastic controlled-source full waveform inversion without surface waves

    NASA Astrophysics Data System (ADS)

    Paschke, Marco; Krause, Martin; Bleibinhaus, Florian

    2016-04-01

    We developed a frequency-domain visco-elastic full waveform inversion for onshore seismic experiments with topography. The forward modeling is based on a finite-difference time-domain algorithm by Robertsson that uses the image-method to ensure a stress-free condition at the surface. The time-domain data is Fourier-transformed at every point in the model space during the forward modeling for a given set of frequencies. The motivation for this approach is the reduced amount of memory when computing kernels, and the straightforward implementation of the multiscale approach. For the inversion, we calculate the Frechet derivative matrix explicitly, and we implement a Levenberg-Marquardt scheme that allows for computing the resolution matrix. To reduce the size of the Frechet derivative matrix, and to stabilize the inversion, an adapted inverse mesh is used. The node spacing is controlled by the velocity distribution and the chosen frequencies. To focus the inversion on body waves (P, P-coda, and S) we mute the surface waves from the data. Consistent spatiotemporal weighting factors are applied to the wavefields during the Fourier transform to obtain the corresponding kernels. We test our code with a synthetic study using the Marmousi model with arbitrary topography. This study also demonstrates the importance of topography and muting surface waves in controlled-source full waveform inversion.
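
    The on-the-fly Fourier transform that avoids storing full time-domain wavefields can be sketched as a running DFT accumulation; the toy 1-D field and all parameter values below are hypothetical stand-ins for the finite-difference wavefield:

```python
# Sketch of accumulating frequency-domain wavefields during time stepping:
# one DFT term per chosen frequency is added each step, so no time history
# of the wavefield needs to be stored.
import numpy as np

nt, nx, dt = 2000, 300, 1e-3
freqs = np.array([5.0, 10.0, 20.0])                  # inversion frequencies [Hz]
spectra = np.zeros((len(freqs), nx), dtype=complex)  # running DFT accumulators

x = np.arange(nx)
for it in range(nt):
    t = it * dt
    # Stand-in for one finite-difference time step of the wavefield:
    field = np.sin(2 * np.pi * 10.0 * t) * np.exp(-((x - 150) / 40.0)**2)
    # Accumulate the DFT contribution of this time step at each frequency.
    spectra += np.exp(-2j * np.pi * freqs[:, None] * t) * field[None, :] * dt

print("peak |amplitude| per frequency:", np.abs(spectra).max(axis=1))
```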

  5. Effects of Refresher Training on Job-Task Typewriting Performance

    DTIC Science & Technology

    1979-10-01

    preceded by some blank space. How much space? The scanner requires not less than a sixth nor more than a half inch of space. The postal department stresses ... that the code be not less than two nor more than six spaces away from the state. INSERT NEW PAPER AND BEGIN TYPING WITH THE...

  6. Differentiating induced and natural seismicity using space-time-magnitude statistics applied to the Coso Geothermal field

    USGS Publications Warehouse

    Schoenball, Martin; Davatzes, Nicholas C.; Glen, Jonathan M. G.

    2015-01-01

    A remarkable characteristic of earthquakes is their clustering in time and space, displaying their self-similarity. It remains to be tested if natural and induced earthquakes share the same behavior. We study natural and induced earthquakes comparatively in the same tectonic setting at the Coso Geothermal Field. Covering the preproduction and coproduction periods from 1981 to 2013, we analyze interevent times, spatial dimension, and frequency-size distributions for natural and induced earthquakes. Individually, these distributions are statistically indistinguishable. Determining the distribution of nearest neighbor distances in a combined space-time-magnitude metric, lets us identify clear differences between both kinds of seismicity. Compared to natural earthquakes, induced earthquakes feature a larger population of background seismicity and nearest neighbors at large magnitude rescaled times and small magnitude rescaled distances. Local stress perturbations induced by field operations appear to be strong enough to drive local faults through several seismic cycles and reactivate them after time periods on the order of a year.

  7. A Radiation Chemistry Code Based on the Greens Functions of the Diffusion Equation

    NASA Technical Reports Server (NTRS)

    Plante, Ianik; Wu, Honglu

    2014-01-01

    Ionizing radiation produces several radiolytic species, such as ·OH, e⁻(aq), and H·, when interacting with biological matter. Following their creation, radiolytic species diffuse and chemically react with biological molecules such as DNA. Despite years of research, many questions on DNA damage by ionizing radiation remain, notably on the indirect effect, i.e., the damage resulting from the reactions of the radiolytic species with DNA. To simulate DNA damage by ionizing radiation, we are developing a step-by-step radiation chemistry code based on the Green's functions of the diffusion equation (GFDE), which is able to follow the trajectories of all particles and their reactions over time. In recent years, simulations based on the GFDE have been used extensively in biochemistry, notably to simulate biochemical networks in time and space, and are often used as the "gold standard" to validate diffusion-reaction theories. The exact GFDE for partially diffusion-controlled reactions is difficult to use because of its complex form. Therefore, the radial Green's function, which is much simpler, is often used, and much effort has been devoted to sampling it; we have developed a sampling algorithm for this purpose. This algorithm only yields the inter-particle distance after a time step; the sampling of the deviation angle of the inter-particle vector is not taken into consideration. In this work, we show that the radial distribution is predicted by the exact radial Green's function. We also use a technique developed by Clifford et al. to generate the inter-particle vector deviation angles, knowing the inter-particle vector length before and after a time step. The results are compared with those predicted by the exact GFDE and by the analytical angular functions for free diffusion. This first step in the creation of the radiation chemistry code should help in understanding the contribution of the indirect effect to the formation of DNA damage and double-strand breaks.
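
    For the free-diffusion limit, the radial Green's function can be checked directly against Monte Carlo sampling. The sketch below (illustrative parameter values, free diffusion only, not the partially diffusion-controlled case treated in the paper) compares sampled inter-particle distances after one time step with the analytic radial distribution:

        import numpy as np

        # Free-diffusion check only: sample the inter-particle vector after one
        # step and compare the distance histogram with the analytic radial
        # Green's function. D plays the role of the mutual diffusion coefficient
        # of the pair; all numbers are illustrative.
        def radial_green(r, r0, D, t):
            s = 4.0 * D * t
            return (r / (r0 * np.sqrt(np.pi * s))) * (
                np.exp(-(r - r0) ** 2 / s) - np.exp(-(r + r0) ** 2 / s))

        rng = np.random.default_rng(0)
        D, dt, r0, n = 1e-9, 1e-12, 1e-9, 200_000
        steps = rng.normal(0.0, np.sqrt(2 * D * dt), size=(n, 3))
        r_new = np.linalg.norm(np.array([r0, 0.0, 0.0]) + steps, axis=1)

        hist, edges = np.histogram(r_new, bins=100, density=True)
        centers = 0.5 * (edges[1:] + edges[:-1])
        dens = radial_green(centers, r0, D, dt)
        mask = dens > 0.05 * dens.max()
        print(np.median(np.abs(hist[mask] / dens[mask] - 1.0)))  # ~ few percent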

  8. A flexibly shaped space-time scan statistic for disease outbreak detection and monitoring.

    PubMed

    Takahashi, Kunihiko; Kulldorff, Martin; Tango, Toshiro; Yih, Katherine

    2008-04-11

    Early detection of disease outbreaks enables public health officials to implement disease control and prevention measures at the earliest possible time. A time periodic geographical disease surveillance system based on a cylindrical space-time scan statistic has been used extensively for disease surveillance along with the SaTScan software. In the purely spatial setting, many different methods have been proposed to detect spatial disease clusters. In particular, some spatial scan statistics are aimed at detecting irregularly shaped clusters which may not be detected by the circular spatial scan statistic. Based on the flexible purely spatial scan statistic, we propose a flexibly shaped space-time scan statistic for early detection of disease outbreaks. The performance of the proposed space-time scan statistic is compared with that of the cylindrical scan statistic using benchmark data. In order to compare their performances, we have developed a space-time power distribution by extending the purely spatial bivariate power distribution. Daily syndromic surveillance data in Massachusetts, USA, are used to illustrate the proposed test statistic. The flexible space-time scan statistic is well suited for detecting and monitoring disease outbreaks in irregularly shaped areas.
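
    The scanning step common to both the cylindrical and the flexible statistic evaluates a Poisson log-likelihood ratio over candidate space-time windows. A minimal sketch of that score (Kulldorff's Poisson form; the flexible enumeration of irregularly shaped zones is omitted) is:

        import numpy as np

        # c: observed cases inside the scanning window, E: expected cases inside,
        # C: total observed cases in the study region. Only elevated-risk windows
        # are of interest, so windows with c <= E score zero.
        def poisson_llr(c, E, C):
            if c <= E:
                return 0.0
            return c * np.log(c / E) + (C - c) * np.log((C - c) / (C - E))

        # The window maximizing the LLR is the most likely cluster; significance
        # is assessed by Monte Carlo replication under the null hypothesis.
        print(poisson_llr(c=30, E=18.0, C=500))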

  9. Physics and biophysics experiments needed for improved risk assessment in space

    NASA Astrophysics Data System (ADS)

    Sihver, L.

    To improve the risk assessment of radiation carcinogenesis, late degenerative tissue effects, acute syndromes, synergistic effects of radiation and microgravity or other spacecraft factors, and hereditary effects on future LEO and interplanetary space missions, the radiobiological effects of cosmic radiation before and after shielding must be well understood. However, cosmic radiation is very complex and includes low- and high-LET components of many different neutral and charged particles. The radiobiology of heavy ions from GCRs and SPEs is still a subject of great concern, due to the complicated dependence of their biological effects on ion type and energy, and on their interactions with various targets both outside and within the spacecraft and the human body. In order to estimate the biological effects of cosmic radiation, accurate knowledge of the physics of the interactions of both charged and non-charged high-LET particles is necessary. Since it is practically impossible to measure all primary and secondary particles from all projectile-target-energy combinations needed for a correct risk assessment in space, accurate particle and heavy ion transport codes can be a helpful instrument for overcoming these difficulties. These codes have to be carefully validated to make sure they fulfill preset accuracy criteria, e.g., to predict particle fluence and energy distributions within a certain accuracy. Validating the accuracy of the transport codes requires both space-based and ground-based accelerator experiments. In this paper, current and future physics and biophysics experiments needed for improved risk assessment in space are discussed. The cyclotron HIRFL (heavy ion research facility in Lanzhou) and the new synchrotron CSR (cooling storage ring), which can be used to provide ion beams for space-related experiments at the Institute of Modern Physics, Chinese Academy of Sciences (IMP-CAS), are presented together with the physical and biomedical research performed at IMP-CAS.

  10. Reeds computer code

    NASA Technical Reports Server (NTRS)

    Bjork, C.

    1981-01-01

    The REEDS (rocket exhaust effluent diffusion single layer) computer code is used for the estimation of certain rocket exhaust effluent concentrations and dosages and their distributions near the Earth's surface following a rocket launch event. Output from REEDS is used in producing near real time air quality and environmental assessments of the effects of certain potentially harmful effluents, namely HCl, Al2O3, CO, and NO.

  11. Investigating the Relationship between Customer Wait Time and Operational Availability through Simulation Modeling

    DTIC Science & Technology

    2012-12-01

    Customer Wait Time (CWT ...inventory level, thereby increasing the material readiness of the operating forces. Intuitively, decreasing CWT increases operational availability (Ao...and CWT has led to arbitrary stock policies that do not account for the cost and benefit they provide. This project centers on monetizing the

  12. Application of statistical distribution theory to launch-on-time for space construction logistic support

    NASA Technical Reports Server (NTRS)

    Morgenthaler, George W.

    1989-01-01

    The ability to launch on time and to send payloads into space has progressed dramatically since the days of the earliest missile and space programs. Causes for delay during launch, i.e., unplanned 'holds', are attributable to several sources: weather, range activities, vehicle conditions, human performance, etc. Recent developments in space programs, particularly the need for highly reliable logistic support of space construction, the subsequently planned operation of space stations, large unmanned space structures, and lunar and Mars bases, and the necessity of providing 'guaranteed' commercial launches, have placed increased emphasis on understanding and mastering every aspect of launch vehicle operations. The Center for Space Construction has acquired historical launch vehicle data and is applying these data to the analysis of launch vehicle logistic support of space construction. This analysis includes developing a better understanding of launch-on-time capability and simulating the support systems for vehicle assembly and launch that are necessary to meet national space program construction schedules. In this paper, the author presents actual launch data on unscheduled 'hold' distributions of various launch vehicles; the data have been supplied by industrial associate companies of the Center for Space Construction. The paper seeks probability models that describe these historical data and that can serve several purposes, such as providing inputs to broader simulations of launch vehicle logistic space construction support processes and determining which launch operations sources cause the majority of the unscheduled 'holds', thereby suggesting changes that might improve launch-on-time performance. In particular, the paper investigates the ability of a compound distribution probability model to fit the actual data, compared with alternative models, and recommends the most productive avenues for future statistical work.
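
    The kind of model comparison described here can be sketched as follows, with synthetic stand-in hold times and standard candidate distributions ranked by AIC (the paper's compound model and the actual launch data are not reproduced):

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)
        holds = rng.lognormal(mean=3.0, sigma=0.8, size=300)  # fake hold times (min)

        # Fit several candidate models by maximum likelihood and compare by AIC.
        candidates = {
            "exponential": stats.expon,
            "gamma": stats.gamma,
            "lognormal": stats.lognorm,
            "weibull": stats.weibull_min,
        }
        for name, dist in candidates.items():
            params = dist.fit(holds)
            ll = np.sum(dist.logpdf(holds, *params))
            aic = 2 * len(params) - 2 * ll
            print(f"{name:12s} AIC = {aic:.1f}")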

  13. Panchromatic spectral energy distributions of Herschel sources

    NASA Astrophysics Data System (ADS)

    Berta, S.; Lutz, D.; Santini, P.; Wuyts, S.; Rosario, D.; Brisbin, D.; Cooray, A.; Franceschini, A.; Gruppioni, C.; Hatziminaoglou, E.; Hwang, H. S.; Le Floc'h, E.; Magnelli, B.; Nordon, R.; Oliver, S.; Page, M. J.; Popesso, P.; Pozzetti, L.; Pozzi, F.; Riguccini, L.; Rodighiero, G.; Roseboom, I.; Scott, D.; Symeonidis, M.; Valtchanov, I.; Viero, M.; Wang, L.

    2013-03-01

    Combining far-infrared Herschel photometry from the PACS Evolutionary Probe (PEP) and Herschel Multi-tiered Extragalactic Survey (HerMES) guaranteed time programs with ancillary datasets in the GOODS-N, GOODS-S, and COSMOS fields, it is possible to sample the 8-500 μm spectral energy distributions (SEDs) of galaxies with at least 7-10 bands. Extending to the UV, optical, and near-infrared, the number of bands increases up to 43. We reproduce the distribution of galaxies in a carefully selected rest-frame ten-color space, based on this rich dataset, using a superposition of multivariate Gaussian modes. We use this model to classify galaxies and build median SEDs of each class, which are then fitted with a modified version of the magphys code that combines stellar light, emission from dust heated by stars, and a possible warm dust contribution heated by an active galactic nucleus (AGN). The color distribution of galaxies in each of the considered fields can be well described with the combination of 6-9 classes, spanning a large range of far- to near-infrared luminosity ratios, as well as different strengths of the AGN contribution to bolometric luminosities. The defined Gaussian grouping is used to identify rare or odd sources. The zoology of outliers includes Herschel-detected ellipticals, very blue z ~ 1 Ly-break galaxies, quiescent spirals, and torus-dominated AGN with star formation. Out of these groups and outliers, a new template library is assembled, consisting of 32 SEDs describing the intrinsic scatter in the rest-frame UV-to-submm colors of infrared galaxies. This library is tested against L(IR) estimates with and without Herschel data included, and compared to eight other popular methods often adopted in the literature. When Herschel photometry is implemented, these approaches produce L(IR) values consistent with each other within a median absolute deviation of 10-20%, the scatter being dominated more by the fine-tuning of the codes than by the choice of SED templates. Finally, the library is used to classify 24 μm detected sources in the PEP GOODS fields on the basis of AGN content, L(60)/L(100) color, and L(160)/L(1.6) luminosity ratio. AGN appear to be distributed in the stellar mass (M∗) vs. star formation rate (SFR) space along with all other galaxies, regardless of the amount of infrared luminosity they are powering, with the tendency to lie on the high-SFR side of the "main sequence". The incidence of warmer star-forming sources grows for objects with higher specific star formation rates (sSFR), and they tend to populate the "off-sequence" region of the M∗-SFR-z space. Herschel is an ESA space observatory with science instruments provided by European-led Principal Investigator consortia and with important participation from NASA. Appendix A is available in electronic form at http://www.aanda.org. Galaxy SED templates are only available at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/551/A100
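
    The Gaussian-mode decomposition step can be sketched with a mixture model whose number of components is chosen by an information criterion; the snippet below uses BIC and a random placeholder in place of the real (n_galaxies, 10) color matrix, so it illustrates the machinery rather than the paper's exact procedure:

        import numpy as np
        from sklearn.mixture import GaussianMixture

        rng = np.random.default_rng(2)
        colors = rng.normal(size=(1000, 10))      # placeholder for real colors

        # Fit mixtures of multivariate Gaussians and keep the BIC-optimal one.
        models = [GaussianMixture(n_components=k, covariance_type="full",
                                  random_state=0).fit(colors)
                  for k in range(2, 12)]
        best = min(models, key=lambda g: g.bic(colors))
        labels = best.predict(colors)             # class of each galaxy
        print(best.n_components, np.bincount(labels))

    Median SEDs would then be built per label, and outliers flagged as objects with low likelihood under the fitted mixture (e.g., via score_samples).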

  14. UPC++ Programmer’s Guide (v1.0 2017.9)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bachan, J.; Baden, S.; Bonachea, D.

    UPC++ is a C++11 library that provides Asynchronous Partitioned Global Address Space (APGAS) programming. It is designed for writing parallel programs that run efficiently and scale well on distributed-memory parallel computers. The APGAS model is single program, multiple-data (SPMD), with each separate thread of execution (referred to as a rank, a term borrowed from MPI) having access to local memory as it would in C++. However, APGAS also provides access to a global address space, which is allocated in shared segments that are distributed over the ranks. UPC++ provides numerous methods for accessing and using global memory. In UPC++, all operations that access remote memory are explicit, which encourages programmers to be aware of the cost of communication and data movement. Moreover, all remote-memory access operations are by default asynchronous, to enable programmers to write code that scales well even on hundreds of thousands of cores.

  15. UPC++ Programmer’s Guide, v1.0-2018.3.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bachan, J.; Baden, S.; Bonachea, Dan

    UPC++ is a C++11 library that provides Partitioned Global Address Space (PGAS) programming. It is designed for writing parallel programs that run efficiently and scale well on distributed-memory parallel computers. The PGAS model is single program, multiple-data (SPMD), with each separate thread of execution (referred to as a rank, a term borrowed from MPI) having access to local memory as it would in C++. However, PGAS also provides access to a global address space, which is allocated in shared segments that are distributed over the ranks. UPC++ provides numerous methods for accessing and using global memory. In UPC++, all operations that access remote memory are explicit, which encourages programmers to be aware of the cost of communication and data movement. Moreover, all remote-memory access operations are by default asynchronous, to enable programmers to write code that scales well even on hundreds of thousands of cores.

  16. LEOPARD: A grid-based dispersion relation solver for arbitrary gyrotropic distributions

    NASA Astrophysics Data System (ADS)

    Astfalk, Patrick; Jenko, Frank

    2017-01-01

    Particle velocity distributions measured in collisionless space plasmas often show strong deviations from idealized model distributions. Despite this observational evidence, linear wave analysis in space plasma environments such as the solar wind or Earth's magnetosphere is still mainly carried out using dispersion relation solvers based on Maxwellians or other parametric models. To enable a more realistic analysis, we present the new grid-based kinetic dispersion relation solver LEOPARD (Linear Electromagnetic Oscillations in Plasmas with Arbitrary Rotationally-symmetric Distributions) which no longer requires prescribed model distributions but allows for arbitrary gyrotropic distribution functions. In this work, we discuss the underlying numerical scheme of the code and we show a few exemplary benchmarks. Furthermore, we demonstrate a first application of LEOPARD to ion distribution data obtained from hybrid simulations. In particular, we show that in the saturation stage of the parallel fire hose instability, the deformation of the initial bi-Maxwellian distribution invalidates the use of standard dispersion relation solvers. A linear solver based on bi-Maxwellians predicts further growth even after saturation, while LEOPARD correctly indicates vanishing growth rates. We also discuss how this complies with former studies on the validity of quasilinear theory for the resonant fire hose. In the end, we briefly comment on the role of LEOPARD in directly analyzing spacecraft data, and we refer to an upcoming paper which demonstrates a first application of that kind.
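
    For contrast with LEOPARD's grid-based treatment of arbitrary distributions, the Maxwellian baseline it generalizes can be sketched in a few lines: root-finding the electrostatic Langmuir dispersion relation with the plasma dispersion function. This is a textbook-normalization sketch, not LEOPARD's numerics:

        import numpy as np
        from scipy.optimize import root
        from scipy.special import wofz

        # Solve 1 + (1/(k*lD)^2) * (1 + zeta*Z(zeta)) = 0 for complex omega,
        # with omega in units of omega_pe and k in units of 1/lD.
        def Z(zeta):                              # plasma dispersion function
            return 1j * np.sqrt(np.pi) * wofz(zeta)

        def disp(w, k):
            zeta = w / (np.sqrt(2.0) * k)
            return 1.0 + (1.0 / k**2) * (1.0 + zeta * Z(zeta))

        def solve_omega(k, guess=1.4 - 0.15j):
            f = lambda v: [disp(v[0] + 1j * v[1], k).real,
                           disp(v[0] + 1j * v[1], k).imag]
            sol = root(f, [guess.real, guess.imag], tol=1e-12)
            return sol.x[0] + 1j * sol.x[1]

        print(solve_omega(0.5))   # a weakly Landau-damped Langmuir wave

    LEOPARD replaces the analytic Z-function integral, which is only valid for Maxwellian-type distributions, with a numerical integration over an arbitrary gyrotropic distribution given on a grid.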

  17. Recent progress in distributed optical fiber Raman photon sensors at China Jiliang University

    NASA Astrophysics Data System (ADS)

    Zhang, Zaixuan; Wang, Jianfeng; Li, Yi; Gong, Huaping; Yu, Xiangdong; Liu, Honglin; Jin, Yongxing; Kang, Juan; Li, Chenxia; Zhang, Wensheng; Zhang, Wenping; Niu, Xiaohui; Sun, Zhongzhou; Zhao, Chunliu; Dong, Xinyong; Jin, Shangzhong

    2012-06-01

    A brief review of recent progress in the research, production, and application of fully distributed fiber Raman photon sensors at China Jiliang University (CJLU) is presented. In order to improve the measurement distance, the accuracy, the spatial resolution, the ability of multi-parameter measurement, and the intelligence of fully distributed fiber sensor systems, a new-generation fiber sensor technology based on the optical fiber nonlinear scattering fusion principle is proposed. A series of new-generation fully distributed fiber sensors are investigated and designed, which consist of new-generation ultra-long-distance fully distributed fiber Raman and Rayleigh scattering photon sensors integrated with a fiber Raman amplifier, auto-correction fully distributed fiber Raman photon temperature sensors based on Raman correlation dual sources, fully distributed fiber Raman photon temperature sensors based on a pulse-coding source, fully distributed fiber Raman photon temperature sensors using a fiber Raman wavelength shifter, a new type of Brillouin optical time domain analyzer (BOTDA) integrated with a fiber Raman amplifier replacing a fiber Brillouin amplifier, fully distributed fiber Raman and Brillouin photon sensors integrated with a fiber Raman amplifier, and fully distributed fiber Brillouin photon sensors integrated with a fiber Brillouin frequency shifter. The Internet of Things is regarded as one candidate for the next technological revolution, one that has driven enormous markets. Sensor networks are important components of the Internet of Things. The fully distributed optical fiber sensor network (Rayleigh, Raman, and Brillouin scattering) is a 3S (smart materials, smart structure, and smart skill) system, which makes it easy to construct smart fiber sensor networks. The distributed optical fiber sensor can be embedded in power grids, railways, bridges, tunnels, roads, constructions, water supply systems, dams, oil and gas pipelines and other facilities, and can be integrated with wireless networks.

  18. Probabilistic structural analysis of aerospace components using NESSUS

    NASA Technical Reports Server (NTRS)

    Shiao, Michael C.; Nagpal, Vinod K.; Chamis, Christos C.

    1988-01-01

    Probabilistic structural analysis of a Space Shuttle main engine turbopump blade is conducted using the computer code NESSUS (numerical evaluation of stochastic structures under stress). The goal of the analysis is to derive probabilistic characteristics of blade response, given probabilistic descriptions of uncertainties in blade geometry, material properties, and temperature and pressure distributions. Probability densities are derived for critical blade responses, and risk assessment and failure life analysis are conducted assuming different failure models.
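
    A toy Monte Carlo analogue of such an analysis, with an invented response function and assumed input uncertainties (none of the numbers are from the paper, and NESSUS itself uses more sophisticated probabilistic methods), looks like this:

        import numpy as np

        rng = np.random.default_rng(3)
        n = 100_000
        thickness = rng.normal(2.0, 0.05, n)     # mm, assumed geometry scatter
        modulus = rng.normal(200e3, 8e3, n)      # MPa, assumed material scatter
        pressure = rng.normal(1.5, 0.15, n)      # MPa, assumed load scatter

        # Placeholder blade response function, standing in for the structural model.
        def blade_stress(t, E, p):
            return p * 120.0 / t + 1e-4 * E

        stress = blade_stress(thickness, modulus, pressure)
        allowable = 140.0                         # MPa, assumed failure limit
        print("P(failure) ~", np.mean(stress > allowable))

    The empirical histogram of stress plays the role of the derived probability density of the blade response, and the exceedance fraction the role of the risk estimate.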

  19. A Large-Particle Monte Carlo Code for Simulating Non-Linear High-Energy Processes Near Compact Objects

    NASA Technical Reports Server (NTRS)

    Stern, Boris E.; Svensson, Roland; Begelman, Mitchell C.; Sikora, Marek

    1995-01-01

    High-energy radiation processes in compact cosmic objects are often expected to have a strongly non-linear behavior. Such behavior is shown, for example, by electron-positron pair cascades and the time evolution of relativistic proton distributions in dense radiation fields. Three independent techniques have been developed to simulate these non-linear problems: the kinetic equation approach; the phase-space density (PSD) Monte Carlo method; and the large-particle (LP) Monte Carlo method. In this paper, we present the latest version of the LP method and compare it with the other methods. The efficiency of the method in treating geometrically complex problems is illustrated by showing results of simulations of 1D, 2D and 3D systems. The method is shown to be powerful enough to treat non-spherical geometries, including such effects as bulk motion of the background plasma, reflection of radiation from cold matter, and anisotropic distributions of radiating particles. It can therefore be applied to simulate high-energy processes in such astrophysical systems as accretion discs with coronae, relativistic jets, pulsar magnetospheres and gamma-ray bursts.

  20. A new approach to fluid-structure interaction within graphics hardware accelerated smooth particle hydrodynamics considering heterogeneous particle size distribution

    NASA Astrophysics Data System (ADS)

    Eghtesad, Adnan; Knezevic, Marko

    2018-07-01

    A corrective smooth particle method (CSPM) within smooth particle hydrodynamics (SPH) is used to study the deformation of an aircraft structure under high-velocity water-ditching impact load. The CSPM-SPH method features a new approach for the prediction of two-way fluid-structure interaction coupling. Results indicate that the implementation is well suited for modeling the deformation of structures under high-velocity impact into water as evident from the predicted stress and strain localizations in the aircraft structure as well as the integrity of the impacted interfaces, which show no artificial particle penetrations. To reduce the simulation time, a heterogeneous particle size distribution over a complex three-dimensional geometry is used. The variable particle size is achieved from a finite element mesh with variable element size and, as a result, variable nodal (i.e., SPH particle) spacing. To further accelerate the simulations, the SPH code is ported to a graphics processing unit using the OpenACC standard. The implementation and simulation results are described and discussed in this paper.

  1. A new approach to fluid-structure interaction within graphics hardware accelerated smooth particle hydrodynamics considering heterogeneous particle size distribution

    NASA Astrophysics Data System (ADS)

    Eghtesad, Adnan; Knezevic, Marko

    2017-12-01

    A corrective smooth particle method (CSPM) within smooth particle hydrodynamics (SPH) is used to study the deformation of an aircraft structure under high-velocity water-ditching impact load. The CSPM-SPH method features a new approach for the prediction of two-way fluid-structure interaction coupling. Results indicate that the implementation is well suited for modeling the deformation of structures under high-velocity impact into water as evident from the predicted stress and strain localizations in the aircraft structure as well as the integrity of the impacted interfaces, which show no artificial particle penetrations. To reduce the simulation time, a heterogeneous particle size distribution over a complex three-dimensional geometry is used. The variable particle size is achieved from a finite element mesh with variable element size and, as a result, variable nodal (i.e., SPH particle) spacing. To further accelerate the simulations, the SPH code is ported to a graphics processing unit using the OpenACC standard. The implementation and simulation results are described and discussed in this paper.

  2. Cloud-based design of high average power traveling wave linacs

    NASA Astrophysics Data System (ADS)

    Kutsaev, S. V.; Eidelman, Y.; Bruhwiler, D. L.; Moeller, P.; Nagler, R.; Barbe Welzel, J.

    2017-12-01

    The design of industrial high average power traveling wave linacs must accurately consider some specific effects. For example, acceleration of high current beam reduces power flow in the accelerating waveguide. Space charge may influence the stability of longitudinal or transverse beam dynamics. Accurate treatment of beam loading is central to the design of high-power TW accelerators, and it is especially difficult to model in the meter-scale region where the electrons are nonrelativistic. Currently, there are two types of available codes: tracking codes (e.g. PARMELA or ASTRA) that cannot solve self-consistent problems, and particle-in-cell codes (e.g. Magic 3D or CST Particle Studio) that can model the physics correctly but are very time-consuming and resource-demanding. Hellweg is a special tool for quick and accurate electron dynamics simulation in traveling wave accelerating structures. The underlying theory of this software is based on the differential equations of motion. The effects considered in this code include beam loading, space charge forces, and external magnetic fields. We present the current capabilities of the code, provide benchmarking results, and discuss future plans. We also describe the browser-based GUI for executing Hellweg in the cloud.
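
    The differential-equations-of-motion approach can be illustrated with a stripped-down longitudinal model. The sketch below (assumed field amplitude, frequency, and injection energy; no beam loading or space charge, both of which Hellweg itself treats) integrates the energy gain and phase slip of an electron in a traveling wave:

        import numpy as np
        from scipy.integrate import solve_ivp

        MC2 = 0.511e6                 # electron rest energy, eV
        C = 299792458.0               # speed of light, m/s

        # Longitudinal dynamics only: dgamma/dz from the accelerating field,
        # dphi/dz from the slip between the electron and the wave phase velocity.
        def rhs(z, y, E0=15e6, f=2.856e9, beta_ph=1.0):
            gamma, phi = y
            beta = np.sqrt(1.0 - 1.0 / gamma**2)
            dgamma = E0 * np.cos(phi) / MC2                        # per meter
            dphi = 2 * np.pi * f / C * (1 / beta - 1 / beta_ph)    # phase slip
            return [dgamma, dphi]

        # Inject at ~1 MeV (gamma = 3) on crest; integrate over a 1 m structure.
        sol = solve_ivp(rhs, (0.0, 1.0), [3.0, 0.0], max_step=1e-3)
        print("exit kinetic energy ~ %.1f MeV" % ((sol.y[0, -1] - 1) * 0.511))

    The meter-scale difficulty mentioned above is visible in this model: at low gamma, the 1/beta term makes the phase slip large, so capture depends sensitively on injection energy and phase.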

  3. Improvement of absolute positioning of precision stage based on cooperation the zero position pulse signal and incremental displacement signal

    NASA Astrophysics Data System (ADS)

    Wang, H. H.; Shi, Y. P.; Li, X. H.; Ni, K.; Zhou, Q.; Wang, X. H.

    2018-03-01

    In this paper, a scheme to measure the position of precision stages with high precision is presented. The encoder is composed of a scale grating and a compact two-probe reading head that reads the zero-position pulse signal and the continuous incremental displacement signal. The scale grating contains different codes: multiple reference codes with different spacings superimposed onto incremental grooves with an equal-spacing structure. The code of the reference mask in the reading head is the same as the reference codes on the scale grating, and it generates a pulse signal that coarsely locates the reference position as the reading head moves along the scale grating. After locating the reference position within a section by means of the pulse signal, the reference position can be located precisely using the amplitude of the incremental displacement signal. A set of reference codes and a scale grating were designed, and experimental results show that the primary precision achieved by the design is 1 μm. The period of the incremental signal is 1 μm, and 1000/N nm precision can be achieved by subdividing the incremental signal N times.
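
    The combination of a coarse reference location with electrical subdivision of the incremental period can be sketched as follows; the pitch value and signal names are illustrative, not taken from the paper:

        import numpy as np

        PITCH = 1.0  # um, period of the incremental signal

        # ref_index: index of the grating line located via the reference-code pulse;
        # (sin_sig, cos_sig): quadrature pair of the incremental displacement signal.
        def absolute_position(ref_index, sin_sig, cos_sig):
            phase = np.mod(np.arctan2(sin_sig, cos_sig), 2 * np.pi)
            return ref_index * PITCH + phase / (2 * np.pi) * PITCH

        # Subdividing the 1 um (1000 nm) period N times yields 1000/N nm steps,
        # e.g. N = 1000 electrical subdivisions -> 1 nm resolution.
        print(absolute_position(12345, 0.38, 0.92))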

  4. Projectile and Lab Frame Differential Cross Sections for Electromagnetic Dissociation

    NASA Technical Reports Server (NTRS)

    Norbury, John W.; Adamczyk, Anne; Dick, Frank

    2008-01-01

    Differential cross sections for electromagnetic dissociation in nuclear collisions are calculated for the first time. In order to be useful for three-dimensional transport codes, these cross sections have been calculated in both the projectile and lab frames. The formulas for these cross sections are such that they can be immediately used in space radiation transport codes. Only a limited amount of data exists, but the comparison between theory and experiment is good.

  5. The effect of structural design parameters on FPGA-based feed-forward space-time trellis coding-orthogonal frequency division multiplexing channel encoders

    NASA Astrophysics Data System (ADS)

    Passas, Georgios; Freear, Steven; Fawcett, Darren

    2010-08-01

    Orthogonal frequency division multiplexing (OFDM)-based feed-forward space-time trellis code (FFSTTC) encoders can be synthesised as very high speed integrated circuit hardware description language (VHDL) designs. Evaluation of their FPGA implementation can lead to conclusions that help a designer decide on the optimum implementation, given the encoder structural parameters. VLSI architectures based on 1-bit multipliers and look-up tables (LUTs) are compared in terms of FPGA slices and block RAMs (area), as well as in terms of minimum clock period (speed). Area and speed graphs versus encoder memory order are provided for quadrature phase shift keying (QPSK) and 8 phase shift keying (8-PSK) modulation and two transmit antennas, revealing the best implementation under these conditions. The effect of the number of modulation bits and transmit antennas on encoder implementation complexity is also investigated.

  6. Cross-Layer Design for Space-Time coded MIMO Systems over Rice Fading Channel

    NASA Astrophysics Data System (ADS)

    Yu, Xiangbin; Zhou, Tingting; Liu, Xiaoshuai; Yin, Xin

    A cross-layer design (CLD) scheme for space-time coded MIMO systems over the Rice fading channel is presented by combining adaptive modulation with automatic repeat request, and the corresponding system performance is investigated in detail. The fading-gain switching thresholds subject to a target packet error rate (PER) and a fixed power constraint are derived. Based on these results, and using the generalized Marcum Q-function, calculation formulae for the average spectrum efficiency (SE) and PER of the system with CLD are derived; as a result, closed-form expressions for the average SE and PER are obtained. These expressions include some existing expressions for the Rayleigh channel as special cases. With these expressions, the system performance in the Rice fading channel can be evaluated effectively. Numerical results verify the validity of the theoretical analysis and show that the system performance in the Rice channel improves as the Rice factor increases, outperforming that in the Rayleigh channel.

  7. Investigation of adaptive filtering and MDL mitigation based on space-time block-coding for spatial division multiplexed coherent receivers

    NASA Astrophysics Data System (ADS)

    Weng, Yi; He, Xuan; Yao, Wang; Pacheco, Michelle C.; Wang, Junyi; Pan, Zhongqi

    2017-07-01

    In this paper, we explore the performance of a space-time block-coding (STBC) assisted multiple-input multiple-output (MIMO) scheme for modal dispersion and mode-dependent loss (MDL) mitigation in spatial-division multiplexed optical communication systems, in which the weight matrices of the frequency-domain equalization (FDE) are updated heuristically using a decision-directed recursive least squares (RLS) algorithm for convergence and channel estimation. The proposed STBC-RLS algorithm achieves a 43.6% enhancement in convergence rate over conventional least mean squares (LMS) for quadrature phase-shift keying (QPSK) signals with merely a 16.2% increase in hardware complexity. The overall optical signal-to-noise ratio (OSNR) tolerance can be improved via STBC by approximately 3.1, 4.9, and 7.8 dB for QPSK, 16-quadrature amplitude modulation (QAM), and 64-QAM at the respective bit-error rates (BER) with minimum mean square error (MMSE).
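
    The space-time block code underlying such schemes is, in its simplest 2x1 form, the Alamouti code. A noise-free numerical sketch of its encoding and linear combining (illustrative only, not the paper's frequency-domain MIMO equalizer) is:

        import numpy as np

        rng = np.random.default_rng(4)
        s1, s2 = (rng.choice([1, -1]) + 1j * rng.choice([1, -1]) for _ in range(2))
        h1, h2 = (rng.normal() + 1j * rng.normal() for _ in range(2))  # flat channel

        # Time slot 1: antennas send (s1, s2); slot 2: (-conj(s2), conj(s1)).
        r1 = h1 * s1 + h2 * s2
        r2 = -h1 * np.conj(s2) + h2 * np.conj(s1)

        # Linear combining restores each symbol scaled by the total channel gain.
        g = abs(h1) ** 2 + abs(h2) ** 2
        s1_hat = (np.conj(h1) * r1 + h2 * np.conj(r2)) / g
        s2_hat = (np.conj(h2) * r1 - h1 * np.conj(r2)) / g
        print(np.allclose([s1_hat, s2_hat], [s1, s2]))   # True in the noise-free case

    The orthogonality of the code is what decouples the two symbols without matrix inversion, which is the property exploited for MDL mitigation in the paper.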

  8. The Chandra Monitoring System

    NASA Astrophysics Data System (ADS)

    Wolk, S. J.; Petreshock, J. G.; Allen, P.; Bartholowmew, R. T.; Isobe, T.; Cresitello-Dittmar, M.; Dewey, D.

    The NASA Great Observatory Chandra was launched July 23, 1999 aboard the space shuttle Columbia. The Chandra X-ray Center (CXC) runs a monitoring and trends-analysis program to maximize the science return from this mission. At the time of launch, the monitoring portion of this system was in place. The system is a collection of multiple threads and programming methodologies acting cohesively. Real-time data are passed to the CXC. Our real-time tool, ACORN (A Comprehensive object-ORiented Necessity), performs limit checking of performance-related hardware. Chandra is in ground contact less than 3 hours a day, so the bulk of the monitoring must take place on data dumped by the spacecraft. To do this, we have written several tools that run off of the CXC data system pipelines. MTA_MONITOR_STATIC limit-checks FITS files containing hardware data. MTA_EVENT_MON and MTA_GRAT_MON create quick-look data for the focal plane instruments and the transmission gratings. When instruments violate their operational limits, the responsible scientists are notified by email and problem tracking is initiated. Output from all these codes is distributed to CXC scientists via an HTML interface.

  9. Spatial coding-based approach for partitioning big spatial data in Hadoop

    NASA Astrophysics Data System (ADS)

    Yao, Xiaochuang; Mokbel, Mohamed F.; Alarabi, Louai; Eldawy, Ahmed; Yang, Jianyu; Yun, Wenju; Li, Lin; Ye, Sijing; Zhu, Dehai

    2017-09-01

    Spatial data partitioning (SDP) plays a powerful role in distributed storage and parallel computing for spatial data. However, the skewed distribution of spatial data and the varying volume of spatial vector objects make it a significant challenge to ensure both optimal performance of spatial operations and data balance in the cluster. To tackle this problem, we propose a spatial coding-based approach for partitioning big spatial data in Hadoop. This approach first compresses the whole body of big spatial data, based on a spatial coding matrix, into a sensing information set (SIS) that includes spatial code, size, count and other information. The SIS is then employed to build a spatial partitioning matrix, which is finally used to split all spatial objects into different partitions in the cluster. With our approach, neighbouring spatial objects can be partitioned into the same block, while data skew in the Hadoop distributed file system (HDFS) is minimized. The presented approach is compared, in a case study, against random-sampling-based partitioning using three measurement standards, namely, spatial index quality, data skew in HDFS, and range query performance. The experimental results show that our method based on the spatial coding technique can improve the query performance on big spatial data, as well as the data balance in HDFS. We implemented and deployed this approach in Hadoop, and it can also efficiently support any other distributed big spatial data system.
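
    The paper's coding matrix is its own construction, but the flavor of spatial coding for partitioning can be illustrated with a Z-order (Morton) code, which likewise tends to map neighbouring 2-D objects to nearby one-dimensional codes and hence to the same partition:

        # Interleave the bits of quantized x and y into a single Morton code.
        def morton_code(x, y, bits=16):
            code = 0
            for i in range(bits):
                code |= ((x >> i) & 1) << (2 * i)
                code |= ((y >> i) & 1) << (2 * i + 1)
            return code

        # Quantize lon/lat onto a 2^bits grid, then map the code range onto
        # a fixed number of partitions.
        def partition_of(lon, lat, n_partitions=64, bits=16):
            qx = int((lon + 180.0) / 360.0 * ((1 << bits) - 1))
            qy = int((lat + 90.0) / 180.0 * ((1 << bits) - 1))
            return morton_code(qx, qy, bits) * n_partitions >> (2 * bits)

        print(partition_of(116.4, 39.9), partition_of(116.5, 39.9))  # nearby objects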

  10. Space Domain Awareness

    DTIC Science & Technology

    2012-09-01

    the Space Surveillance Network has been tracking orbital objects and maintaining a catalog that allows space operators to safely operate satellites ... backward) in time, but the accuracy degrades as the amount of propagation time increases. Thus, the need to maintain a

  11. Channel coding in the space station data system network

    NASA Technical Reports Server (NTRS)

    Healy, T.

    1982-01-01

    A detailed discussion of the use of channel coding for error correction, privacy/secrecy, channel separation, and synchronization is presented. Channel coding, in one form or another, is an established and common element in data systems. No analysis and design of a major new system would fail to consider ways in which channel coding could make the system more effective. The presence of channel coding on TDRS, Shuttle, the Advanced Communication Technology Satellite Program system, the JSC-proposed Space Operations Center, and the proposed 30/20 GHz Satellite Communication System strongly supports the requirement for the utilization of coding for the communications channel. The designers of the space station data system have to consider the use of channel coding.

  12. Cellular Automata-Based Application for Driver Assistance in Indoor Parking Areas.

    PubMed

    Caballero-Gil, Cándido; Caballero-Gil, Pino; Molina-Gil, Jezabel

    2016-11-15

    This work proposes an adaptive recommendation mechanism for smart parking that takes advantage of the popularity of smartphones and the rise of the Internet of Things. The proposal includes a centralized system to forecast available indoor parking spaces, and a low-cost mobile application to obtain data of actual and predicted parking occupancy. The described scheme uses data from both sources bidirectionally so that the centralized forecast system is fed with data obtained with the distributed system based on smartphones, and vice versa. The mobile application uses different wireless technologies to provide the forecast system with actual parking data and receive from the system useful recommendations about where to park. Thus, the proposal can be used by any driver to easily find available parking spaces in indoor facilities. The client software developed for smartphones is a lightweight Android application that supplies precise indoor positioning systems based on Quick Response codes or Near Field Communication tags, and semi-precise indoor positioning systems based on Bluetooth Low Energy beacons. The performance of the proposed approach has been evaluated by conducting computer simulations and real experimentation with a preliminary implementation. The results have shown the strengths of the proposal in the reduction of the time and energy costs to find available parking spaces.

  13. Cellular Automata-Based Application for Driver Assistance in Indoor Parking Areas †

    PubMed Central

    Caballero-Gil, Cándido; Caballero-Gil, Pino; Molina-Gil, Jezabel

    2016-01-01

    This work proposes an adaptive recommendation mechanism for smart parking that takes advantage of the popularity of smartphones and the rise of the Internet of Things. The proposal includes a centralized system to forecast available indoor parking spaces, and a low-cost mobile application to obtain data of actual and predicted parking occupancy. The described scheme uses data from both sources bidirectionally so that the centralized forecast system is fed with data obtained with the distributed system based on smartphones, and vice versa. The mobile application uses different wireless technologies to provide the forecast system with actual parking data and receive from the system useful recommendations about where to park. Thus, the proposal can be used by any driver to easily find available parking spaces in indoor facilities. The client software developed for smartphones is a lightweight Android application that supplies precise indoor positioning systems based on Quick Response codes or Near Field Communication tags, and semi-precise indoor positioning systems based on Bluetooth Low Energy beacons. The performance of the proposed approach has been evaluated by conducting computer simulations and real experimentation with a preliminary implementation. The results have shown the strengths of the proposal in the reduction of the time and energy costs to find available parking spaces. PMID:27854282

  14. Largescale Long-term particle Simulations of Runaway electrons in Tokamaks

    NASA Astrophysics Data System (ADS)

    Liu, Jian; Qin, Hong; Wang, Yulei

    2016-10-01

    Understanding runaway dynamical behavior is crucial for assessing the safety of tokamaks. Though many important analytical and numerical results have been achieved, the overall dynamic behavior of runaway electrons in a realistic tokamak configuration is still rather vague. In this work, secular full-orbit simulations of runaway electrons are carried out based on a relativistic volume-preserving algorithm. Detailed phase-space behaviors of runaway electrons are investigated on timescales spanning 11 orders of magnitude. A detailed analysis of the collisionless neoclassical scattering is provided that considers the coupling between the rotation of the momentum vector and the background field. On long timescales, the initial condition of runaway electrons in phase space globally influences the runaway distribution. It is discovered that the parameters and field configuration of tokamaks can modify the runaway electron dynamics significantly. Simulations on 10 million cores of a supercomputer using the APT code have been completed. A resolution of 10⁷ in phase space is used, and simulations are performed for 10¹¹ time steps. Large-scale simulations show that in a realistic fusion reactor, the concern of runaway electrons is not as serious as previously thought. This research was supported by the National Magnetic Confinement Fusion Energy Research Project (2015GB111003, 2014GB124005), the National Natural Science Foundation of China (NSFC-11575185, 11575186) and the GeoAlgorithmic Plasma Simulator (GAPS) Project.

  15. Toward Optimal Manifold Hashing via Discrete Locally Linear Embedding.

    PubMed

    Rongrong Ji; Hong Liu; Liujuan Cao; Di Liu; Yongjian Wu; Feiyue Huang

    2017-11-01

    Binary code learning, also known as hashing, has received increasing attention in large-scale visual search. By transforming high-dimensional features into binary codes, the original Euclidean distance is approximated via the Hamming distance. More recently, it has been advocated that it is the manifold distance, rather than the Euclidean distance, that should be preserved in the Hamming space. However, directly preserving the manifold structure by hashing remains an open problem. In particular, one first needs to build the local linear embedding in the original feature space and then quantize this embedding into binary codes, and such two-step coding is problematic and suboptimal. Besides, the offline learning is extremely time- and memory-consuming, since it needs to calculate the similarity matrix of the original data. In this paper, we propose a novel hashing algorithm, termed discrete locally linear embedding hashing (DLLH), which well addresses the above challenges. DLLH directly reconstructs the manifold structure in the Hamming space, learning optimal hash codes that maintain the local linear relationships of data points. To learn discrete locally linear embedding codes, we further propose a discrete optimization algorithm with an iterative parameter-updating scheme. Moreover, an anchor-based acceleration scheme, termed Anchor-DLLH, is introduced, which approximates the large similarity matrix by the product of two low-rank matrices. Experimental results on three widely used benchmark data sets, i.e., CIFAR10, NUS-WIDE, and YouTube Face, have shown the superior performance of the proposed DLLH over state-of-the-art approaches.

  16. Demonstration of a Near and Mid-Infrared Detector Using Multiple Step Quantum Wells

    DTIC Science & Technology

    2003-09-01


  17. Statistical analysis of flight times for space shuttle ferry flights

    NASA Technical Reports Server (NTRS)

    Graves, M. E.; Perlmutter, M.

    1974-01-01

    Markov chain and Monte Carlo analysis techniques are applied to the simulated Space Shuttle Orbiter Ferry flights to obtain statistical distributions of flight time duration between Edwards Air Force Base and Kennedy Space Center. The two methods are compared, and are found to be in excellent agreement. The flights are subjected to certain operational and meteorological requirements, or constraints, which cause eastbound and westbound trips to yield different results. Persistence of events theory is applied to the occurrence of inclement conditions to find their effect upon the statistical flight time distribution. In a sensitivity test, some of the constraints are varied to observe the corresponding changes in the results.
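
    A toy version of the two techniques compared in the paper, with invented transition probabilities and leg counts: a two-state (go/no-go) weather chain with persistence, sampled by Monte Carlo to build a flight-time distribution:

        import numpy as np

        rng = np.random.default_rng(5)
        P = {"go": {"go": 0.7, "nogo": 0.3},     # persistence of favorable weather
             "nogo": {"go": 0.4, "nogo": 0.6}}
        LEGS = 3                                  # assumed refueling stops en route

        def one_trip():
            days, state, legs_done = 0, "go", 0
            while legs_done < LEGS:
                days += 1
                if state == "go":
                    legs_done += 1                # fly one leg on a favorable day
                state = "go" if rng.random() < P[state]["go"] else "nogo"
            return days

        trips = np.array([one_trip() for _ in range(20_000)])
        print(trips.mean(), np.percentile(trips, 95))

    The Markov-chain treatment computes the same distribution analytically from the transition matrix, which is why the two approaches should (and, in the paper, do) agree.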

  18. Advancing Underwater Acoustic Communication for Autonomous Distributed Networks via Sparse Channel Sensing, Coding, and Navigation Support

    DTIC Science & Technology

    2014-09-30

    underwater acoustic communication technologies for autonomous distributed underwater networks, through innovative signal processing, coding, and ... coding: 3) OFDM modulated dynamic coded cooperation in underwater acoustic channels; Localization, Networking, and Testbed: 4) On-demand

  19. 14 CFR 1214.403 - Code of Conduct for the International Space Station Crew.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... responsible for directing the mission. A Flight Director will be in charge of directing real-time ISS... advance of the mission and are designed to minimize the amount of real-time discussion required during... and impose disciplinary measures. (4) “ETOV” means Earth-to-Orbit Vehicle travelling between Earth and...

  20. Relativistic initial conditions for N-body simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fidler, Christian; Tram, Thomas; Crittenden, Robert

    2017-06-01

    Initial conditions for (Newtonian) cosmological N-body simulations are usually set by re-scaling the present-day power spectrum obtained from linear (relativistic) Boltzmann codes to the desired initial redshift of the simulation. This back-scaling method can account for the effect of inhomogeneous residual thermal radiation at early times, which is absent in the Newtonian simulations. We analyse this procedure from a fully relativistic perspective, employing the recently proposed Newtonian motion gauge framework. We find that N-body simulations for ΛCDM cosmology starting from back-scaled initial conditions can be self-consistently embedded in a relativistic space-time with first-order metric potentials calculated using a linear Boltzmann code. This space-time coincides with a simple 'N-body gauge' for z < 50 for all observable modes. Care must be taken, however, when simulating non-standard cosmologies. As an example, we analyse the back-scaling method in a cosmology with decaying dark matter, and show that metric perturbations become large at early times in the back-scaling approach, indicating a breakdown of the perturbative description. We suggest a suitable 'forwards approach' for such cases.
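
    The back-scaling step itself can be sketched as a growth-factor rescaling of the z = 0 spectrum. The snippet below uses the scale-independent ΛCDM growth factor and neglects radiation entirely (the full method instead takes the ratio from the Boltzmann code, which is exactly what absorbs the radiation effects discussed above); cosmological parameters are assumed values:

        import numpy as np
        from scipy.integrate import quad

        Om, Ol = 0.31, 0.69

        def E(a):                                 # H(a)/H0 for flat LCDM
            return np.sqrt(Om / a**3 + Ol)

        # Unnormalized linear growth factor D(a) = (5*Om/2) E(a) int da'/(a' E(a'))^3.
        def growth(a):
            integral, _ = quad(lambda x: 1.0 / (x * E(x)) ** 3, 1e-8, a)
            return 2.5 * Om * E(a) * integral

        # P(k, z_ini) = P(k, 0) * (D(z_ini)/D(0))^2, applied mode by mode.
        def back_scale(P0, z_ini=49.0):
            a = 1.0 / (1.0 + z_ini)
            return P0 * (growth(a) / growth(1.0)) ** 2

        P0 = np.ones(10)                          # placeholder P(k, z=0) values
        print(back_scale(P0)[0])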

  1. Web Application to Monitor Logistics Distribution of Disaster Relief Using the CodeIgniter Framework

    NASA Astrophysics Data System (ADS)

    Jamil, Mohamad; Ridwan Lessy, Mohamad

    2018-03-01

    Disaster management is the responsibility of the central government and local governments. The principles of disaster management include, among others, rapid and precise response, prioritization, coordination and cohesion, and efficiency and effectiveness. The help most needed by affected communities is logistical assistance covering everyday needs, such as food, instant noodles, fast food, blankets, mattresses, etc. Logistical assistance is essential for disaster management, especially in times of disaster, and must be delivered at the right time, to the right location, and with the right target, quality, and quantity for the needs. The purpose of this study is to build a web application to monitor the logistics distribution of disaster relief using the CodeIgniter framework. Through this application, aid delivery from and to the disaster site can be easily monitored.

  2. Data compression and genomes: a two-dimensional life domain map.

    PubMed

    Menconi, Giulia; Benci, Vieri; Buiatti, Marcello

    2008-07-21

    We define the complexity of DNA sequences as the information content per nucleotide, calculated by means of a Lempel-Ziv data compression algorithm. It is possible to use the statistics of the complexity values of the functional regions of different complete genomes to distinguish among genomes of the different domains of life (Archaea, Bacteria and Eukarya). We shall focus on the distribution function of the complexity of non-coding regions. We show that the three domains may be plotted in separate regions within the two-dimensional space where the axes are the skewness coefficient and the kurtosis coefficient of the aforementioned distribution. Preliminary results on 15 genomes are introduced.
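
    A rough stand-in for the pipeline (DEFLATE, an LZ77-family compressor, replacing the authors' Lempel-Ziv variant; random sequences replacing real genomic regions) shows how one genome is reduced to a single skewness-kurtosis point on the domain map:

        import zlib
        import numpy as np
        from scipy import stats

        # Information content per nucleotide, estimated from compressed size.
        def bits_per_nucleotide(seq: str) -> float:
            payload = zlib.compress(seq.encode(), level=9)
            return 8.0 * len(payload) / len(seq)

        rng = np.random.default_rng(6)
        regions = ["".join(rng.choice(list("ACGT"), size=5000)) for _ in range(50)]
        complexities = np.array([bits_per_nucleotide(r) for r in regions])

        # One genome -> one (skewness, kurtosis) point on the 2-D life domain map.
        print(stats.skew(complexities), stats.kurtosis(complexities))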

  3. CoSMoMVPA: Multi-Modal Multivariate Pattern Analysis of Neuroimaging Data in Matlab/GNU Octave.

    PubMed

    Oosterhof, Nikolaas N; Connolly, Andrew C; Haxby, James V

    2016-01-01

    Recent years have seen an increase in the popularity of multivariate pattern (MVP) analysis of functional magnetic resonance (fMRI) data, and, to a much lesser extent, magneto- and electro-encephalography (M/EEG) data. We present CoSMoMVPA, a lightweight MVPA (MVP analysis) toolbox implemented in the intersection of the Matlab and GNU Octave languages, that treats both fMRI and M/EEG data as first-class citizens. CoSMoMVPA supports all state-of-the-art MVP analysis techniques, including searchlight analyses, classification, correlations, representational similarity analysis, and the time generalization method. These can be used to address both data-driven and hypothesis-driven questions about neural organization and representations, both within and across: space, time, frequency bands, neuroimaging modalities, individuals, and species. It uses a uniform data representation of fMRI data in the volume or on the surface, and of M/EEG data at the sensor and source level. Through various external toolboxes, it directly supports reading and writing a variety of fMRI and M/EEG neuroimaging formats, and, where applicable, can convert between them. As a result, it can be integrated readily in existing pipelines and used with existing preprocessed datasets. CoSMoMVPA overloads the traditional volumetric searchlight concept to support neighborhoods for M/EEG and surface-based fMRI data, which supports localization of multivariate effects of interest across space, time, and frequency dimensions. CoSMoMVPA also provides a generalized approach to multiple comparison correction across these dimensions using Threshold-Free Cluster Enhancement with state-of-the-art clustering and permutation techniques. CoSMoMVPA is highly modular and uses abstractions to provide a uniform interface for a variety of MVP measures. Typical analyses require a few lines of code, making it accessible to beginner users. At the same time, expert programmers can easily extend its functionality. CoSMoMVPA comes with extensive documentation, including a variety of runnable demonstration scripts and analysis exercises (with example data and solutions). It uses best software engineering practices including version control, distributed development, an automated test suite, and continuous integration testing. It can be used with the proprietary Matlab and the free GNU Octave software, and it complies with open source distribution platforms such as NeuroDebian. CoSMoMVPA is Free/Open Source Software under the permissive MIT license. Website: http://cosmomvpa.org Source code: https://github.com/CoSMoMVPA/CoSMoMVPA.

  4. CoSMoMVPA: Multi-Modal Multivariate Pattern Analysis of Neuroimaging Data in Matlab/GNU Octave

    PubMed Central

    Oosterhof, Nikolaas N.; Connolly, Andrew C.; Haxby, James V.

    2016-01-01

    Recent years have seen an increase in the popularity of multivariate pattern (MVP) analysis of functional magnetic resonance (fMRI) data, and, to a much lesser extent, magneto- and electro-encephalography (M/EEG) data. We present CoSMoMVPA, a lightweight MVPA (MVP analysis) toolbox implemented in the intersection of the Matlab and GNU Octave languages, that treats both fMRI and M/EEG data as first-class citizens. CoSMoMVPA supports all state-of-the-art MVP analysis techniques, including searchlight analyses, classification, correlations, representational similarity analysis, and the time generalization method. These can be used to address both data-driven and hypothesis-driven questions about neural organization and representations, both within and across: space, time, frequency bands, neuroimaging modalities, individuals, and species. It uses a uniform data representation of fMRI data in the volume or on the surface, and of M/EEG data at the sensor and source level. Through various external toolboxes, it directly supports reading and writing a variety of fMRI and M/EEG neuroimaging formats, and, where applicable, can convert between them. As a result, it can be integrated readily in existing pipelines and used with existing preprocessed datasets. CoSMoMVPA overloads the traditional volumetric searchlight concept to support neighborhoods for M/EEG and surface-based fMRI data, which supports localization of multivariate effects of interest across space, time, and frequency dimensions. CoSMoMVPA also provides a generalized approach to multiple comparison correction across these dimensions using Threshold-Free Cluster Enhancement with state-of-the-art clustering and permutation techniques. CoSMoMVPA is highly modular and uses abstractions to provide a uniform interface for a variety of MVP measures. Typical analyses require a few lines of code, making it accessible to beginner users. At the same time, expert programmers can easily extend its functionality. CoSMoMVPA comes with extensive documentation, including a variety of runnable demonstration scripts and analysis exercises (with example data and solutions). It uses best software engineering practices including version control, distributed development, an automated test suite, and continuous integration testing. It can be used with the proprietary Matlab and the free GNU Octave software, and it complies with open source distribution platforms such as NeuroDebian. CoSMoMVPA is Free/Open Source Software under the permissive MIT license. Website: http://cosmomvpa.org Source code: https://github.com/CoSMoMVPA/CoSMoMVPA PMID:27499741

  5. Nowcasting Ground Magnetic Perturbations with the Space Weather Modeling Framework

    NASA Astrophysics Data System (ADS)

    Welling, D. T.; Toth, G.; Singer, H. J.; Millward, G. H.; Gombosi, T. I.

    2015-12-01

    Predicting ground-based magnetic perturbations is a critical step towards specifying and predicting geomagnetically induced currents (GICs) in high-voltage transmission lines. Currently, the Space Weather Modeling Framework (SWMF), a flexible modeling framework for simulating the multi-scale space environment, is being transitioned from research to operational use (R2O) by NOAA's Space Weather Prediction Center. Upon completion of this transition, the SWMF will provide localized ∂B/∂t predictions using real-time solar wind observations from L1 and the F10.7 proxy for EUV as model input. This presentation describes the operational SWMF setup and summarizes the changes made to the code to enable R2O progress. The framework's algorithm for calculating ground-based magnetometer observations will be reviewed. Metrics from data-model comparisons will be reviewed to illustrate predictive capabilities. Early data products, such as the regional K index and grids of virtual magnetometer stations, will be presented. Finally, early successes will be shared, including the code's ability to reproduce the recent March 2015 St. Patrick's Day Storm.

  6. No cataclysmic variables missing: higher merger rate brings into agreement observed and predicted space densities

    NASA Astrophysics Data System (ADS)

    Belloni, Diogo; Schreiber, Matthias R.; Zorotovic, Mónica; Iłkiewicz, Krystian; Hurley, Jarrod R.; Giersz, Mirek; Lagos, Felipe

    2018-06-01

    The predicted and observed space densities of cataclysmic variables (CVs) have long been discrepant by at least an order of magnitude. The standard model of CV evolution predicts that the vast majority of CVs should be period bouncers, whose space density has recently been measured to be ρ ≲ 2 × 10⁻⁵ pc⁻³. We performed population synthesis of CVs using an updated version of the Binary Stellar Evolution (BSE) code for single and binary star evolution. We find that the recently suggested empirical prescription of consequential angular momentum loss (CAML) brings the predicted and observed space densities of CVs and period bouncers into agreement. To progress with our understanding of CV evolution, it is crucial to understand the physical mechanism behind empirical CAML. Our changes to the BSE code are also provided in detail, which will allow the community to accurately model mass transfer in interacting binaries in which degenerate objects accrete from low-mass main-sequence donor stars.

  7. Two-dimensional T2 distribution mapping in rock core plugs with optimal k-space sampling.

    PubMed

    Xiao, Dan; Balcom, Bruce J

    2012-07-01

    Spin-echo single point imaging has been employed for 1D T2 distribution mapping, but a simple extension to 2D is challenging since the time increase is n-fold, where n is the number of pixels in the second dimension. Nevertheless, 2D T2 mapping in fluid-saturated rock core plugs is highly desirable because the bedding plane structure in rocks often results in different pore properties within the sample. The acquisition time can be improved by undersampling k-space. The cylindrical shape of rock core plugs yields well-defined intensity distributions in k-space that may be efficiently determined by the new k-space sampling patterns developed in this work. These patterns acquire 22.2% and 11.7% of the k-space data points. Companion density images may be employed, in a keyhole imaging sense, to improve image quality. T2-weighted images are fit to extract T2 distributions, pixel by pixel, employing an inverse Laplace transform. Images reconstructed with compressed sensing, with similar acceleration factors, are also presented. The results show that restricted k-space sampling, in this application, provides high quality results. Copyright © 2012 Elsevier Inc. All rights reserved.
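
    The pixel-by-pixel step described above amounts to inverting a Laplace kernel under a non-negativity constraint. Below is a minimal sketch in Python, assuming illustrative echo times, T2 grid, and regularization weight; the paper's actual fitting procedure is not reproduced here.

      # Hedged sketch: recover a T2 distribution from one pixel's decay data
      # with a Tikhonov-regularized inverse Laplace transform (non-negative
      # least squares). All numerical parameters are illustrative.
      import numpy as np
      from scipy.optimize import nnls

      t = np.linspace(0.001, 0.5, 64)          # echo times (s), assumed
      T2 = np.logspace(-3, 0, 50)              # log-spaced T2 grid (s)
      K = np.exp(-t[:, None] / T2[None, :])    # Laplace kernel K[i, j]

      # synthetic single-pixel decay: two T2 populations plus noise
      f_true = np.zeros(50)
      f_true[20] = 1.0
      f_true[35] = 0.5
      y = K @ f_true + 0.01 * np.random.default_rng(1).standard_normal(len(t))

      # regularized NNLS: augment the system with alpha * I rows
      alpha = 0.1
      K_aug = np.vstack([K, alpha * np.eye(len(T2))])
      y_aug = np.concatenate([y, np.zeros(len(T2))])
      f_est, _ = nnls(K_aug, y_aug)            # T2 amplitude distribution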

  8. Massively parallel data processing for quantitative total flow imaging with optical coherence microscopy and tomography

    NASA Astrophysics Data System (ADS)

    Sylwestrzak, Marcin; Szlag, Daniel; Marchand, Paul J.; Kumar, Ashwin S.; Lasser, Theo

    2017-08-01

    We present an application of massively parallel processing of quantitative flow measurement data acquired using spectral optical coherence microscopy (SOCM). The need for massive signal processing of these particular datasets has been a major hurdle for many applications based on SOCM. In view of this difficulty, we implemented and adapted quantitative total flow estimation algorithms on graphics processing units (GPU) and achieved a 150-fold reduction in processing time when compared to a former CPU implementation. As SOCM constitutes the microscopy counterpart to spectral optical coherence tomography (SOCT), the developed processing procedure can be applied to both imaging modalities. We present the developed DLL library integrated in MATLAB (with an example) and have included the source code for adaptations and future improvements.
    Catalogue identifier: AFBT_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AFBT_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: GNU GPLv3
    No. of lines in distributed program, including test data, etc.: 913552
    No. of bytes in distributed program, including test data, etc.: 270876249
    Distribution format: tar.gz
    Programming language: CUDA/C, MATLAB
    Computer: Intel x64 CPU, GPU supporting CUDA technology
    Operating system: 64-bit Windows 7 Professional
    Has the code been vectorized or parallelized?: Yes; the CPU code has been vectorized in MATLAB and the CUDA code has been parallelized
    RAM: Dependent on the user's parameters, typically between several gigabytes and several tens of gigabytes
    Classification: 6.5, 18
    Nature of problem: Speeding up data processing in optical coherence microscopy
    Solution method: Utilization of GPU for massively parallel data processing
    Additional comments: Compiled DLL library with source code and documentation, and an example of use (MATLAB script with raw data)
    Running time: 1.8 s for one B-scan (150× faster than the CPU data processing time)

  9. High-speed reacting flow simulation using USA-series codes

    NASA Astrophysics Data System (ADS)

    Chakravarthy, S. R.; Palaniswamy, S.

    In this paper, the finite-rate chemistry (FRC) formulation for the USA-series of codes and three sets of validations are presented. USA-series computational fluid dynamics (CFD) codes are based on Unified Solution Algorithms including explicit and implicit formulations, factorization and relaxation approaches, time-marching and space-marching methodologies, etc., in order to solve a very wide class of CFD problems within a single framework. Euler or Navier-Stokes equations are solved using a finite-volume treatment with upwind Total Variation Diminishing discretization for the inviscid terms. Perfect and real gas options are available, including equilibrium and nonequilibrium chemistry. This capability has been widely used to study various problems including Space Shuttle exhaust plumes, National Aerospace Plane (NASP) designs, etc. (1) Numerical solutions are presented showing the full range of possible solutions to steady detonation wave problems. (2) A comparison between the solution obtained by the USA code and the Generalized Kinetics Analysis Program (GKAP) is shown for supersonic combustion in a duct. (3) Simulation of combustion in a supersonic shear layer is shown to have reasonable agreement with experimental observations.
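
    To make the "upwind Total Variation Diminishing discretization" concrete, here is a minimal flux-limited TVD scheme for scalar linear advection in Python, using a minmod limiter on a periodic grid. It illustrates the flavor of scheme named above, not the USA codes' actual finite-volume implementation.

      # Hedged sketch: second-order flux-limited TVD scheme (minmod limiter)
      # for u_t + a u_x = 0 with a > 0, periodic boundaries, 0 < CFL <= 1.
      import numpy as np

      def tvd_step(u, a, dt, dx):
          nu = a * dt / dx                     # CFL number
          diff = np.roll(u, -1) - u            # u_{i+1} - u_i
          prev = u - np.roll(u, 1)             # u_i   - u_{i-1}
          # minmod limiter phi(r) = max(0, min(1, r)), r = prev / diff
          with np.errstate(divide='ignore', invalid='ignore'):
              r = np.where(diff != 0, prev / diff, 0.0)
          phi = np.maximum(0.0, np.minimum(1.0, r))
          # upwind flux at face i+1/2 with limited anti-diffusive correction
          flux = a * u + 0.5 * a * (1.0 - nu) * phi * diff
          return u - dt / dx * (flux - np.roll(flux, 1))

      # a square wave is advected without generating new oscillations
      x = np.linspace(0.0, 1.0, 200, endpoint=False)
      u = np.where((x > 0.3) & (x < 0.6), 1.0, 0.0)
      for _ in range(100):
          u = tvd_step(u, a=1.0, dt=0.002, dx=x[1] - x[0])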

  10. Space Weather Nowcasting of Atmospheric Ionizing Radiation for Aviation Safety

    NASA Technical Reports Server (NTRS)

    Mertens, Christopher J.; Wilson, John W.; Blattnig, Steve R.; Solomon, Stan C.; Wiltberger, J.; Kunches, Joseph; Kress, Brian T.; Murray, John J.

    2007-01-01

    There is a growing concern for the health and safety of commercial aircrew and passengers due to their exposure to ionizing radiation with high linear energy transfer (LET), particularly at high latitudes. The International Commission on Radiological Protection (ICRP), the EPA, and the FAA consider the crews of commercial aircraft to be radiation workers. During solar energetic particle (SEP) events, radiation exposure can exceed annual limits, and the number of serious health effects is expected to be quite high if precautions are not taken. There is a need for a capability to monitor real-time, global background radiation levels from galactic cosmic rays (GCR) at commercial airline altitudes, and to provide analytical input for airline operational decisions on altering flight paths and altitudes to mitigate and reduce radiation exposure during a SEP event. The Nowcast of Atmospheric Ionizing Radiation for Aviation Safety (NAIRAS) model is a new initiative to provide a global, real-time radiation dosimetry package for archiving and assessing the biologically harmful radiation exposure levels at commercial airline altitudes. The NAIRAS model brings to bear the best available suite of Sun-Earth observations and models for simulating the atmospheric ionizing radiation environment. Observations are utilized from the ground (neutron monitors), from the atmosphere (the METO analysis), and from space (NASA/ACE and NOAA/GOES). Atmospheric observations provide the overhead shielding information, while the ground- and space-based observations provide boundary conditions on the GCR and SEP energy flux distributions for transport and dosimetry simulations. Dose rates are calculated using the parametric AIR (Atmospheric Ionizing Radiation) model and the physics-based HZETRN (High Charge and Energy Transport) code. Empirical models of the near-Earth radiation environment (GCR/SEP energy flux distributions and geomagnetic cut-off rigidity) are benchmarked against the physics-based CMIT (Coupled Magnetosphere-Ionosphere-Thermosphere) and SEP-trajectory models.
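
    One of the empirical quantities named above, the geomagnetic cutoff rigidity, has a simple closed form in the centered-dipole (Störmer) approximation for vertically incident particles. The sketch below uses that textbook approximation only to illustrate the quantity being modeled; NAIRAS and the CMIT/trajectory models it is benchmarked against are far more detailed.

      # Hedged sketch: vertical geomagnetic cutoff rigidity in the centered-
      # dipole (Stormer) approximation, Rc ~ 14.9 * cos^4(mag. lat.) / r^2 GV.
      # A particle must exceed the local cutoff to reach the atmosphere below.
      import numpy as np

      def vertical_cutoff_gv(mag_lat_deg, r_earth_radii=1.0):
          lam = np.radians(mag_lat_deg)
          return 14.9 * np.cos(lam) ** 4 / r_earth_radii ** 2

      for lat in (0, 30, 50, 70):
          print(lat, "deg:", round(float(vertical_cutoff_gv(lat)), 2), "GV")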

  11. A thermal NO(x) prediction model - Scalar computation module for CFD codes with fluid and kinetic effects

    NASA Technical Reports Server (NTRS)

    Mcbeath, Giorgio; Ghorashi, Bahman; Chun, Kue

    1993-01-01

    A thermal NO(x) prediction model is developed to interface with a CFD, k-epsilon based code. A converged solution from the CFD code is the input to the postprocessing model for prediction of thermal NO(x). The model uses a decoupled analysis to estimate the equilibrium level (NO(x))e, which is the constant-rate limit. This value is used to estimate the flame NO(x) and in turn to predict the rate of formation at each node using a two-step Zeldovich mechanism. The rate is located on the NO(x) production-rate curve by estimating the time to reach equilibrium through a differential analysis of the reaction O + N2 = NO + N. The rate is integrated over the nonequilibrium time span given by the residence time at each node in the computational domain. The sum of all nodal predictions yields the total NO(x) level.
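
    The rate-limiting step of the thermal (Zeldovich) mechanism is the reaction O + N2 = NO + N, for which the formation rate is commonly written d[NO]/dt ≈ 2 k1 [O][N2]. A minimal sketch, using a commonly quoted textbook rate constant rather than this model's own value:

      # Hedged sketch of the rate-limiting thermal-NO step, O + N2 -> NO + N.
      # k1 = 1.8e14 * exp(-38370/T) cm^3 mol^-1 s^-1 is a commonly quoted
      # textbook value, not necessarily the constant used in this model.
      import math

      def thermal_no_rate(T_kelvin, conc_o, conc_n2):
          """[O], [N2] in mol/cm^3; returns d[NO]/dt in mol/(cm^3 s)."""
          k1 = 1.8e14 * math.exp(-38370.0 / T_kelvin)
          return 2.0 * k1 * conc_o * conc_n2

      # the steep T dependence is why thermal NO forms in the hottest zones
      print(thermal_no_rate(2200.0, 1e-10, 1e-5) /
            thermal_no_rate(1800.0, 1e-10, 1e-5))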

  12. Magnetic Flux Rope Identification and Characterization from Observationally Driven Solar Coronal Models

    NASA Astrophysics Data System (ADS)

    Lowder, Chris; Yeates, Anthony

    2017-09-01

    Formed through magnetic field shearing and reconnection in the solar corona, magnetic flux ropes are structures of twisted magnetic field, threaded along an axis. Their evolution and potential eruption are of great importance for space weather. Here we describe a new methodology for the automated detection of flux ropes in simulated magnetic fields, utilizing field-line helicity. Our Flux Rope Detection and Organization (FRoDO) code, which measures the magnetic flux and helicity content of pre-erupting flux ropes over time, as well as detecting eruptions, is publicly available. As a first demonstration, the code is applied to the output from a time-dependent magnetofrictional model, spanning 1996 June 15 to 2014 February 10. Over this period, 1561 erupting and 2099 non-erupting magnetic flux ropes are detected, tracked, and characterized. For this particular model data, erupting flux ropes have a mean net helicity magnitude of 2.66 × 10^43 Mx^2, while non-erupting flux ropes have a significantly lower mean of 4.04 × 10^42 Mx^2, although there is overlap between the two distributions. Similarly, the mean unsigned magnetic flux for erupting flux ropes is 4.04 × 10^21 Mx, significantly higher than the mean value of 7.05 × 10^20 Mx for non-erupting ropes. These values for erupting flux ropes are within the broad range expected from observational and theoretical estimates, although the eruption rate in this particular model is lower than that of observed coronal mass ejections. In the future, the FRoDO code will prove to be a valuable tool for assessing the performance of different non-potential coronal simulations and comparing them with observations.
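
    The detection quantity named above, field-line helicity, is the integral of the magnetic vector potential along each field line, ∫ A · dl. Below is a minimal numerical sketch of that core integral; FRoDO's actual pipeline adds field-line tracing, mapping, thresholding, and tracking on top of it, none of which is shown here.

      # Hedged sketch: field-line helicity as the trapezoid-rule integral of
      # A . dl along one field line, given positions and A sampled on it.
      import numpy as np

      def field_line_helicity(points, A_along):
          """points: (n, 3) positions along one field line;
          A_along: (n, 3) vector potential at those positions."""
          d_l = np.diff(points, axis=0)               # segment vectors dl
          A_mid = 0.5 * (A_along[1:] + A_along[:-1])  # midpoint values of A
          return float(np.sum(np.einsum('ij,ij->i', A_mid, d_l)))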

  13. The island of time: Yélî Dnye, the language of Rossel Island.

    PubMed

    Levinson, Stephen C; Majid, Asifa

    2013-01-01

    This paper examines the linguistic description of time, the accompanying gestural system, and the "mental time lines" found in speakers of Yélî Dnye, an isolate language spoken offshore from Papua New Guinea. Like many indigenous languages, Yélî Dnye has no fixed anchoring of time and thus no calendrical time. Instead, time in Yélî Dnye linguistic description is primarily anchored to the time of speaking, with six diurnal tenses and special nominals for n days from coding time; this is supplemented with special constructions for overlapping events. Consequently, there is relatively little cross-over or metaphor from space to time. The gesture system, on the other hand, uses pointing to sun position to indicate time of day and may make use of systematic time lines. Experimental evidence fails to show a single robust axis used for mapping time to space. This suggests that there may not be a strong, universal tendency for systematic space-time mappings.

  14. Coordinated design of coding and modulation systems

    NASA Technical Reports Server (NTRS)

    Massey, J. L.; Ancheta, T.; Johannesson, R.; Lauer, G.; Lee, L.

    1976-01-01

    The joint optimization of the coding and modulation systems employed in telemetry systems was investigated. Emphasis was placed on formulating inner and outer coding standards used by the Goddard Space Flight Center. Convolutional codes were found that are nearly optimum for use with Viterbi decoding in the inner coding of concatenated coding systems. A convolutional code, the unit-memory code, was discovered; it is well suited to inner-system use because of its byte-oriented structure. Simulations of sequential decoding on the deep-space channel were carried out to compare directly various convolutional codes that are proposed for use in deep-space systems.
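
    For readers unfamiliar with the inner-code machinery discussed above, the sketch below shows a textbook rate-1/2, constraint-length-3 convolutional encoder and hard-decision Viterbi decoder in Python. The generators (7, 5 octal) are a standard classroom choice, not the unit-memory code or any of the codes selected in this study.

      # Hedged sketch: rate-1/2, K=3 convolutional encoder (generators 7, 5
      # octal) plus hard-decision Viterbi decoding over the 4-state trellis.
      import itertools

      G = (0b111, 0b101)                 # generator taps, constraint length 3

      def encode(bits):
          state, out = 0, []
          for b in bits:
              reg = (b << 2) | state     # shift the new bit into the register
              out += [bin(reg & g).count('1') & 1 for g in G]
              state = reg >> 1           # keep the two most recent input bits
          return out

      def viterbi(received):
          metric = [0] + [float('inf')] * 3     # start in state 0
          paths = [[] for _ in range(4)]
          for i in range(0, len(received), 2):
              r = received[i:i + 2]
              new_metric = [float('inf')] * 4
              new_paths = [None] * 4
              for state, b in itertools.product(range(4), (0, 1)):
                  reg = (b << 2) | state
                  expect = [bin(reg & g).count('1') & 1 for g in G]
                  dist = sum(x != y for x, y in zip(expect, r))
                  nxt = reg >> 1
                  if metric[state] + dist < new_metric[nxt]:
                      new_metric[nxt] = metric[state] + dist
                      new_paths[nxt] = paths[state] + [b]
              metric, paths = new_metric, new_paths
          return paths[metric.index(min(metric))]  # best surviving path

      msg = [1, 0, 1, 1, 0, 0, 1]
      rx = encode(msg)
      rx[3] ^= 1                         # flip one channel bit
      assert viterbi(rx) == msg          # the single error is corrected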

  15. [Research of preferences and security management of tourists in Poyang Lake based on schistosomiasis prevention].

    PubMed

    Feng, Shu-hua

    2015-04-01

    To discuss the prevention of schistosomiasis in lake-region tourism. The seasonal distribution of tourism activities and the spatial distribution of scenic spots in the Poyang Lake region were analyzed, together with their spatio-temporal coupling with the Oncomelania snail distribution and the schistosomiasis transmission period. The travel preferences of the schistosomiasis-susceptible population were surveyed by questionnaires and interviews. Tourism activities in the Poyang Lake region coupled temporally with the schistosomiasis transmission period and spatially with the snail distribution. The most popular tourism items were Shuishangrenjia (overwater household) and fishing folk culture, both participatory and experience-based. The suggestions are to establish health records of tourists, carry out health education on schistosomiasis, and strengthen the management of tourism and of tourists' activities.

  16. Automated collection and processing of environmental samples

    DOEpatents

    Troyer, Gary L.; McNeece, Susan G.; Brayton, Darryl D.; Panesar, Amardip K.

    1997-01-01

    For monitoring an environmental parameter, such as the level of nuclear radiation, at distributed sites, bar-coded sample collectors are deployed and their codes are read using a portable data entry unit that also records the time of deployment. The time and collector identity are cross-referenced in memory in the portable unit. Similarly, when later recovering the collector for testing, the code is again read and the time of collection is stored as indexed to the sample collector, or to a further bar code, for example as provided on a container for the sample. The identity of the operator can also be encoded and stored. After deploying and/or recovering the sample collectors, the data is transmitted to a base processor. The samples are tested, preferably using a test unit coupled to the base processor, and again the time is recorded. The base processor computes the level of radiation at the site during exposure of the sample collector, using the detected radiation level of the sample, the delay between recovery and testing, the duration of exposure, and the half-life of the isotopes collected. In one embodiment, an identity code and a site code are optically read by an image grabber coupled to the portable data entry unit.
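
    The patent's exact formula is not given here; the sketch below illustrates one standard way to carry out the described back-calculation, assuming the collector accumulates the isotope at a rate proportional to the ambient level while deployed, with radioactive decay acting during both the exposure and the recovery-to-test delay.

      # Hedged sketch: infer the steady production rate R (proportional to
      # the ambient radiation level) from activity measured after a delay.
      # Model: dA/dt = R - lam*A during exposure, pure decay afterwards, so
      # A(T) = (R/lam)*(1 - exp(-lam*T)) at recovery.
      import math

      def site_level(a_measured, t_exposure, t_delay, half_life):
          lam = math.log(2.0) / half_life               # decay constant
          a_at_recovery = a_measured * math.exp(lam * t_delay)  # undo delay decay
          return a_at_recovery * lam / (1.0 - math.exp(-lam * t_exposure))

      # e.g. 120 Bq measured 6 h after recovering a collector deployed 48 h,
      # for an isotope with an 8-day half-life (illustrative numbers only)
      print(site_level(120.0, 48.0, 6.0, 8.0 * 24.0))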

  17. Limitations to the use of two-dimensional thermal modeling of a nuclear waste repository

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Davis, B.W.

    1979-01-04

    Thermal modeling of a nuclear waste repository is basic to most waste management predictive models. It is important that the modeling techniques accurately determine the time-dependent temperature distribution of the waste emplacement media. Recent modeling studies show that the time-dependent temperature distribution can be accurately modeled in the far-field using a 2-dimensional (2-D) planar numerical model; however, the near-field cannot be modeled accurately enough by either 2-D axisymmetric or 2-D planar numerical models for repositories in salt. The accuracy limits of 2-D modeling were defined by comparing results from 3-dimensional (3-D) TRUMP modeling with results from both 2-D axisymmetric and 2-D planar models. Both TRUMP and ADINAT were employed as modeling tools. Two-dimensional results from the finite element code ADINAT were compared with 2-D results from the finite difference code TRUMP; they showed almost perfect correspondence in the far-field. This result adds substantially to confidence in future use of ADINAT and its companion stress code ADINA for thermal stress analysis. ADINAT was found to be somewhat sensitive to time step and mesh aspect ratio. 13 figures, 4 tables.

  18. Seismic Velocity Structure of the San Jacinto Fault Zone from Double-Difference Tomography and Expected Distribution of Head Waves

    NASA Astrophysics Data System (ADS)

    Allam, A. A.; Ben-Zion, Y.

    2010-12-01

    We present initial results of double-difference tomographic images for the velocity structure of the San Jacinto Fault Zone (SJFZ), and related 3D forward calculations of waves in the immediate vicinity of the SJFZ. We begin by discretizing the SJFZ region with a uniform grid spacing of 500 m, extending 140 km by 80 km and down to 25 km depth. We adopt the layered 1D model of Dreger & Helmberger (1993) as a starting model for this region, and invert for 3D distributions of VP and VS with the double-difference tomography of Zhang & Thurber (2003), which makes use of absolute event-station travel times as well as relative travel times for phases from nearby event pairs. Absolute arrival times of over 78,000 P- and S-wave phase picks generated by 1127 earthquakes and recorded at 70 stations near the SJFZ are used. Only data from events with Mw greater than 2.2 are used. Though ray coverage is limited at shallow depths, we obtain relatively high-resolution images from 4 to 13 km depth, which show a clear contrast in velocity across the NW section of the SJFZ. To the SE, in the so-called trifurcation area, the structure is more complicated, though station coverage is poorest in this region. Using the obtained image, the current event locations, and the 3D finite-difference code of Olsen (1994), we estimate the likely distributions of fault zone head waves as a guide for future instrument deployment. We plan to conduct further studies by including more travel time picks, including those from newly-deployed stations in the SJFZ area, in order to obtain a more accurate image of the velocity structure.

  19. A vectorized code for calculating laminar and turbulent hypersonic flows about blunt axisymmetric bodies at zero and small angles of attack

    NASA Technical Reports Server (NTRS)

    Kumar, A.; Graves, R. A., Jr.

    1980-01-01

    A user's guide is provided for a computer code which calculates the laminar and turbulent hypersonic flows about blunt axisymmetric bodies, such as spherically blunted cones, hyperboloids, etc., at zero and small angles of attack. The code is written in STAR FORTRAN language for the CDC-STAR-100 computer. Time-dependent, viscous-shock-layer-type equations are used to describe the flow field. These equations are solved by an explicit, two-step, time asymptotic, finite-difference method. For the turbulent flow, a two-layer, eddy-viscosity model is used. The code provides complete flow-field properties including shock location, surface pressure distribution, surface heating rates, and skin-friction coefficients. This report contains descriptions of the input and output, the listing of the program, and a sample flow-field solution.

  20. Communications and information research: Improved space link performance via concatenated forward error correction coding

    NASA Technical Reports Server (NTRS)

    Rao, T. R. N.; Seetharaman, G.; Feng, G. L.

    1996-01-01

    With the development of new advanced instruments for remote sensing applications, sensor data will be generated at a rate that not only requires increased onboard processing and storage capability, but also imposes demands on the space-to-ground communication link and the ground data management and communication system. Data compression and error-control codes provide viable means to alleviate these demands. Two types of data compression have been studied by many researchers in the area of information theory: lossless techniques, which guarantee full reconstruction of the data, and lossy techniques, which generally give a higher data compaction ratio but incur some distortion in the reconstructed data. To satisfy the many science disciplines which NASA supports, lossless data compression becomes a primary focus for the technology development. When transmitting data obtained by any lossless compression method, it is very important to use an error-control code. For a long time, convolutional codes have been widely used in satellite telecommunications. To transmit the output of the Rice algorithm more efficiently, the a posteriori probability (APP) of each decoded bit must be obtained. A relevant algorithm has been proposed for this purpose that minimizes the bit error probability in decoding linear block and convolutional codes and yields the APP for each decoded bit. However, recent results on iterative decoding of 'Turbo codes' turn conventional wisdom on its head and suggest fundamentally new techniques. During the past several months of this research, the following approaches have been developed: (1) a new lossless data compression algorithm, which is much better than the extended Rice algorithm for various types of sensor data; (2) a new approach to determine the generalized Hamming weights of algebraic-geometric codes defined by a large class of curves in high-dimensional spaces; (3) some efficient improved geometric Goppa codes for disk memory systems and high-speed mass memory systems; and (4) a tree-based approach to data compression using dynamic programming.
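
    The Rice algorithm referenced above is built on Golomb coding with a power-of-two parameter: each sample is split into a unary-coded quotient and a k-bit remainder. Below is a minimal sketch of that core idea; the extended Rice algorithm adds adaptive parameter selection and special low-entropy modes not shown here.

      # Hedged sketch: Golomb-Rice coding of non-negative integers with
      # parameter k (divisor 2^k): unary quotient, then k-bit remainder.
      def rice_encode(values, k):
          bits = []
          for v in values:
              q, r = v >> k, v & ((1 << k) - 1)
              bits += [1] * q + [0]                     # quotient, unary
              bits += [(r >> i) & 1 for i in range(k - 1, -1, -1)]  # remainder
          return bits

      def rice_decode(bits, n_values, k):
          out, i = [], 0
          for _ in range(n_values):
              q = 0
              while bits[i] == 1:                       # read unary quotient
                  q, i = q + 1, i + 1
              i += 1                                    # skip the 0 terminator
              r = 0
              for _ in range(k):                        # read k remainder bits
                  r, i = (r << 1) | bits[i], i + 1
              out.append((q << k) | r)
          return out

      data = [3, 0, 7, 12, 1]
      assert rice_decode(rice_encode(data, k=2), len(data), k=2) == data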
