Sample records for physics code CTH

  1. Multitasking the three-dimensional shock wave code CTH on the Cray X-MP/416

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McGlaun, J.M.; Thompson, S.L.

    1988-01-01

CTH is a software system under development at Sandia National Laboratories, Albuquerque, that models multidimensional, multi-material, large-deformation, strong shock wave physics. CTH was carefully designed to both vectorize and multitask on the Cray X-MP/416. All of the physics routines are vectorized except the thermodynamics and the interface tracer. All of the physics routines are multitasked except the boundary conditions. The Los Alamos National Laboratory multitasking library was used for the multitasking. The resulting code is easy to maintain and understand, gives the same answers as the unitasked code, and achieves a measured speedup of approximately 3.5 on the four-CPU Cray. This document discusses the design, prototyping, development, and debugging of CTH. It also covers the architectural features of CTH that enhance multitasking, the granularity of the tasks, and the synchronization of tasks. The utility of system software and utilities such as simulators and interactive debuggers is also discussed. 5 refs., 7 tabs.
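The reported speedup of roughly 3.5 on four CPUs can be related to the fraction of work that was successfully multitasked via Amdahl's law. A minimal sketch (the function names are illustrative, not from the paper):

```python
def amdahl_speedup(parallel_fraction, n_cpus):
    """Ideal speedup for a code whose parallel_fraction runs across n_cpus."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / n_cpus)

def parallel_fraction_from_speedup(speedup, n_cpus):
    """Invert Amdahl's law: fraction of serial runtime that must parallelize."""
    return (1.0 - 1.0 / speedup) / (1.0 - 1.0 / n_cpus)

# A measured speedup of ~3.5 on the 4-CPU X-MP/416 implies roughly 95% of
# the work (by serial runtime) was multitasked.
frac = parallel_fraction_from_speedup(3.5, 4)
print(f"implied parallel fraction: {frac:.3f}")  # ~0.952
```

This back-of-envelope inversion is consistent with the paper's statement that nearly all physics routines were multitasked.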

  2. Benchmarking the SPHINX and CTH shock physics codes for three problems in ballistics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wilson, L.T.; Hertel, E.; Schwalbe, L.

    1998-02-01

The CTH Eulerian hydrocode and the SPHINX smooth particle hydrodynamics (SPH) code were used to model a shock tube, two long rod penetrations into semi-infinite steel targets, and a long rod penetration into a spaced plate array. The results were then compared to experimental data. Both SPHINX and CTH modeled the one-dimensional shock tube problem well. Both codes did a reasonable job in modeling the outcome of the axisymmetric rod impact problem. Neither code correctly reproduced the depth of penetration in both experiments. In the 3-D problem, both codes reasonably replicated the penetration of the rod through the first plate. After this, however, the predictions of both codes began to diverge from the results seen in the experiment. In terms of computer resources, the run times are problem dependent, and are discussed in the text.

  3. Trinity Phase 2 Open Science: CTH

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ruggirello, Kevin Patrick; Vogler, Tracy

CTH is an Eulerian hydrocode developed by Sandia National Laboratories (SNL) to solve a wide range of shock wave propagation and material deformation problems. Adaptive mesh refinement is also used to improve efficiency for problems with a wide range of spatial scales. The code has a history of running on a variety of computing platforms ranging from desktops to massively parallel distributed-data systems. For the Trinity Phase 2 Open Science campaign, CTH was used to study mesoscale simulations of the hypervelocity penetration of granular SiC powders. The simulations were compared to experimental data. A scaling study of CTH up to 8192 KNL nodes was also performed, and several improvements were made to the code to improve the scalability.

  4. Return on Investment (ROI) Framework Case Study: CTH.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Corro, Janna L.

CTH is an Eulerian code developed at Sandia National Laboratories capable of modeling the hydrodynamic response of explosives, liquids, gases, and solids. The code solves complex multi-dimensional problems characterized by large deformations and strong shocks that are composed of various material configurations. CTH includes models for material strength, fracture, porosity, and high explosive detonation and initiation. The code is an acronym for a complex series of names relating to its origin. A full explanation can be seen in Appendix A. The software breaks penetration simulations into millions of grid-like “cells”. As a modeled projectile impacts and penetrates a target, progressively smaller blocks of cells are placed around the projectile, which show in detail deformations and breakups. Additionally, the code is uniquely suited to modeling blunt impact and blast loading leading to human body injury.

  5. Insensitive Munitions Modeling Improvement Efforts

    DTIC Science & Technology

    2010-10-01

…(LLNL) ALE3D. …codes most commonly used by munition designers are CTH and the SIERRA suite of codes produced by Sandia National Labs (SNL) and ALE3D produced by… ALE3D, an LLNL-developed code, is also used by various DoD participants. It was, however, designed differently than either CTH or Sierra. ALE3D is a…

  6. Analysis of impact melt and vapor production in CTH for planetary applications

    DOE PAGES

    Quintana, S. N.; Crawford, D. A.; Schultz, P. H.

    2015-05-19

This study explores impact melt and vapor generation for a variety of impact speeds and materials using the shock physics code CTH. The study first compares the results of two common methods of impact melt and vapor generation to demonstrate that both the peak pressure method and final temperature method are appropriate for high-speed impact models (speeds greater than 10 km/s). However, for low-speed impact models (speeds less than 10 km/s), only the final temperature method is consistent with laboratory analyses to yield melting and vaporization. Finally, a constitutive model for material strength is important for low-speed impacts because strength can cause an increase in melting and vaporization.
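The two melt criteria compared in the study can be sketched as simple per-tracer checks. This is an illustrative sketch only; the material constants below are assumed placeholders, not values from the paper or actual CTH inputs:

```python
# Assumed placeholder constants for illustration (not from the paper).
MELT_TEMPERATURE_K = 1400.0          # hypothetical melt temperature
CRITICAL_SHOCK_PRESSURE_GPA = 55.0   # hypothetical peak pressure for melt-on-release

def melts_by_peak_pressure(peak_pressure_gpa):
    """Peak-pressure method: flag melt if the tracer's peak shock pressure
    ever exceeded the critical melt pressure."""
    return peak_pressure_gpa >= CRITICAL_SHOCK_PRESSURE_GPA

def melts_by_final_temperature(final_temperature_k):
    """Final-temperature method: flag melt if the temperature after release
    exceeds the melt temperature."""
    return final_temperature_k >= MELT_TEMPERATURE_K
```

Below ~10 km/s, plastic work from material strength can raise the final temperature past the melt point even when the peak pressure never reaches the critical value, which is why only the final-temperature check stays consistent with laboratory results at low impact speeds.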

  8. Application of a computational glass model to the shock response of soda-lime glass

    DOE PAGES

    Gorfain, Joshua E.; Key, Christopher T.; Alexander, C. Scott

    2016-04-20

This article details the implementation and application of the glass-specific computational constitutive model by Holmquist and Johnson [1] to simulate the dynamic response of soda-lime glass under high rate and high pressure shock conditions. The predictive capabilities of this model are assessed through comparison of experimental data with numerical results from computations using the CTH shock physics code. The formulation of this glass model is reviewed in the context of its implementation within CTH. Using a variety of experimental data compiled from the open literature, a complete parameterization of the model describing the observed behavior of soda-lime glass is developed. Simulation results using the calibrated soda-lime glass model are compared to flyer plate and Taylor rod impact experimental data covering a range of impact and failure conditions spanning an order of magnitude in velocity and pressure. In conclusion, the complex behavior observed in the experimental testing is captured well in the computations, demonstrating the capability of the glass model within CTH.

  9. Non-axisymmetric equilibrium reconstruction and suppression of density limit disruptions in a current-carrying stellarator

    NASA Astrophysics Data System (ADS)

    Ma, Xinxing; Ennis, D. A.; Hanson, J. D.; Hartwell, G. J.; Knowlton, S. F.; Maurer, D. A.

    2017-10-01

    Non-axisymmetric equilibrium reconstructions have been routinely performed with the V3FIT code in the Compact Toroidal Hybrid (CTH), a stellarator/tokamak hybrid. In addition to 50 external magnetic measurements, 160 SXR emissivity measurements are incorporated into V3FIT to reconstruct the magnetic flux surface geometry and infer the current distribution within the plasma. Improved reconstructions of current and q profiles provide insight into understanding the physics of density limit disruptions observed in current-carrying discharges in CTH. It is confirmed that the final scenario of the density limit of CTH plasmas is consistent with classic observations in tokamaks: current profile shrinkage leads to growing MHD instabilities (tearing modes) followed by a loss of MHD equilibrium. It is also observed that the density limit at a given current linearly increases with increasing amounts of 3D shaping fields. Consequently, plasmas with densities up to two times the Greenwald limit are attained. Equilibrium reconstructions show that addition of 3D fields effectively moves resonance surfaces towards the edge of the plasma where the current profile gradient is less, providing a stabilizing effect. This work is supported by US Department of Energy Grant No. DE-FG02-00ER54610.
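The Greenwald limit referenced above is the standard scaling n_G = I_p/(π a²), with n_G in units of 10²⁰ m⁻³, I_p in MA, and minor radius a in m. A quick sketch; the example current and radius below are illustrative, not reconstruction values from this work:

```python
import math

def greenwald_density_limit(plasma_current_ma, minor_radius_m):
    """Greenwald density limit n_G = I_p / (pi * a^2),
    returned in units of 1e20 m^-3 for I_p in MA and a in m."""
    return plasma_current_ma / (math.pi * minor_radius_m ** 2)

# Illustrative CTH-like numbers: 50 kA of plasma current, a ~ 0.2 m.
n_g = greenwald_density_limit(0.05, 0.2)
print(f"n_G ~ {n_g:.2f} x 1e20 m^-3")  # ~0.40
```

Exceeding twice this value at fixed current, as reported above, is what makes the 3D-shaped discharges notable.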

  10. Improvements to the Sandia CTH Hydro-Code to Support Blast Analysis and Protective Design of Military Vehicles

    DTIC Science & Technology

    2014-04-15

…used for advertising or product endorsement purposes. 6.0 REFERENCES [1] McGlaun, J., Thompson, S. and Elrick, M., “CTH: A Three-Dimensional Shock-Wave…” …“Validation of a Loading Model for Simulating Blast Mine Effects on Armoured Vehicles,” 7th International LS-DYNA Users Conference, Detroit, MI, 2002. [14…

  11. Modeling Seismoacoustic Propagation from the Nonlinear to Linear Regimes

    NASA Astrophysics Data System (ADS)

    Chael, E. P.; Preston, L. A.

    2015-12-01

    Explosions at shallow depth-of-burial can cause nonlinear material response, such as fracturing and spalling, up to the ground surface above the shot point. These motions at the surface affect the generation of acoustic waves into the atmosphere, as well as the surface-reflected compressional and shear waves. Standard source scaling models for explosions do not account for such nonlinear interactions above the shot, while some recent studies introduce a non-isotropic addition to the moment tensor to represent them (e.g., Patton and Taylor, 2011). We are using Sandia's CTH shock physics code to model the material response in the vicinity of underground explosions, up to the overlying ground surface. Across a boundary where the motions have decayed to nearly linear behavior, we couple the signals from CTH into a linear finite-difference (FD) seismoacoustic code to efficiently propagate the wavefields to greater distances. If we assume only one-way transmission of energy through the boundary, then the particle velocities there suffice as inputs for the FD code, simplifying the specification of the boundary condition. The FD algorithm we use applies the wave equations for velocity in an elastic medium and pressure in an acoustic one, and matches the normal traction and displacement across the interface. Initially we are developing and testing a 2D, axisymmetric seismoacoustic routine; CTH can use this geometry in the source region as well. The Source Physics Experiment (SPE) in Nevada has collected seismic and acoustic data on numerous explosions at different scaled depths, providing an excellent testbed for investigating explosion phenomena (Snelson et al., 2013). We present simulations for shots SPE-4' and SPE-5, illustrating the importance of nonlinear behavior up to the ground surface. Our goal is to develop the capability for accurately predicting the relative signal strengths in the air and ground for a given combination of source yield and depth. 
Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
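The one-way handoff described above, where particle velocities recorded on a boundary in CTH become a time-varying source for the linear finite-difference grid, can be sketched schematically. This is an illustration under assumed array layouts, not the actual CTH or FD interface:

```python
import numpy as np

def inject_boundary_velocities(fd_velocity, boundary_nodes, cth_history, step):
    """One-way coupling sketch: overwrite the linear FD grid's boundary nodes
    with particle velocities recorded from the nonlinear (CTH) region.

    fd_velocity    : (2, nr, nz) array of (v_r, v_z) on the FD grid
    boundary_nodes : list of (i, j) grid indices forming the coupling surface
    cth_history    : (nsteps, nnodes, 2) recorded CTH particle velocities
    step           : current FD time-step index
    """
    for n, (i, j) in enumerate(boundary_nodes):
        fd_velocity[:, i, j] = cth_history[step, n]
    return fd_velocity

# Tiny demonstration with made-up numbers.
fd_v = np.zeros((2, 4, 4))
nodes = [(0, 1), (0, 2)]
history = np.full((3, 2, 2), 1.5)   # constant recorded velocities
inject_boundary_velocities(fd_v, nodes, history, step=0)
```

Because only one-way transmission of energy through the boundary is assumed, prescribing the particle velocities alone suffices, which is what keeps the boundary condition simple.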

  12. Comparisons of CTH simulations with measured wave profiles for simple flyer plate experiments

    DOE PAGES

    Thomas, S. A.; Veeser, L. R.; Turley, W. D.; ...

    2016-06-13

We conducted detailed 2-dimensional hydrodynamics calculations to assess the quality of simulations commonly used to design and analyze simple shock compression experiments. Such simple shock experiments also contain data where dynamic properties of materials are integrated together. We wished to assess how well the chosen computer hydrodynamic code could do at capturing both the simple parts of the experiments and the integral parts. We began with very simple shock experiments, in which we examined the effects of the equation of state and the compressional and tensile strength models. We increased complexity to include spallation in copper and iron and a solid-solid phase transformation in iron to assess the quality of the damage and phase transformation simulations. For experiments with a window, the response of both the sample and the window are integrated together, providing a good test of the material models. While CTH physics models are not perfect and do not reproduce all experimental details well, we find the models are useful; the simulations are adequate for understanding much of the dynamic process and for planning experiments. However, higher complexity in the simulations, such as adding in spall, led to greater differences between simulation and experiment. Lastly, this comparison of simulation to experiment may help guide future development of hydrodynamics codes so that they better capture the underlying physics.

  13. Nonlinear to Linear Elastic Code Coupling in 2-D Axisymmetric Media.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Preston, Leiph

Explosions within the earth nonlinearly deform the local media, but at typical seismological observation distances, the seismic waves can be considered linear. Although nonlinear algorithms can simulate explosions in the very near field well, these codes are computationally expensive and inaccurate at propagating these signals to great distances. A linearized wave propagation code, coupled to a nonlinear code, provides an efficient mechanism to both accurately simulate the explosion itself and to propagate these signals to distant receivers. To this end we have coupled Sandia's nonlinear simulation algorithm CTH to a linearized elastic wave propagation code for 2-D axisymmetric media (axiElasti) by passing information from the nonlinear to the linear code via time-varying boundary conditions. In this report, we first develop the 2-D axisymmetric elastic wave equations in cylindrical coordinates. Next we show how we design the time-varying boundary conditions passing information from CTH to axiElasti, and finally we demonstrate the coupling code via a simple study of the elastic radius.
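For reference, the 2-D axisymmetric elastic momentum equations in cylindrical coordinates (r, z) that such a linear code typically advances are, in standard velocity-stress form (a textbook statement, not quoted from the report):

```latex
\rho \frac{\partial v_r}{\partial t} = \frac{\partial \sigma_{rr}}{\partial r}
  + \frac{\partial \sigma_{rz}}{\partial z} + \frac{\sigma_{rr} - \sigma_{\theta\theta}}{r},
\qquad
\rho \frac{\partial v_z}{\partial t} = \frac{\partial \sigma_{rz}}{\partial r}
  + \frac{\partial \sigma_{zz}}{\partial z} + \frac{\sigma_{rz}}{r}
```

The time-varying boundary condition then amounts to prescribing (v_r, v_z) from CTH on the coupling surface at each time step.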

  14. Remote Sensing of Cloud Top Heights Using the Research Scanning Polarimeter

    NASA Technical Reports Server (NTRS)

    Sinclair, Kenneth; van Diedenhoven, Bastiaan; Cairns, Brian; Yorks, John; Wasilewski, Andrzej

    2015-01-01

Clouds cover roughly two thirds of the globe and act as an important regulator of Earth's radiation budget. Of these, multilayered clouds occur about half of the time and are predominantly two-layered. Changes in cloud top height (CTH) have been predicted by models to have a globally averaged positive feedback; however, observational changes in CTH have shown uncertain results. Additional CTH observations are necessary to better quantify the effect. Improved CTH observations will also allow for improved sub-grid parameterizations in large-scale models, and accurate CTH information is important when studying variations in freezing point and cloud microphysics. NASA's airborne Research Scanning Polarimeter (RSP) is able to measure cloud top height using a novel multi-angular contrast approach. RSP scans along the aircraft track and obtains measurements at 152 viewing angles at any aircraft location. The approach presented here aggregates measurements from multiple scans to a single location at cloud altitude using a correlation function designed to identify the location-distinct features in each scan. During NASA's SEAC4RS air campaign, the RSP was mounted on the ER-2 aircraft along with the Cloud Physics Lidar (CPL), which made simultaneous measurements of CTH. The RSP's unique method of determining CTH is presented. The capabilities of using single channels and combinations of channels within the approach are investigated. A detailed comparison of RSP-retrieved CTHs with those of CPL reveals the accuracy of the approach. Results indicate a strong ability of the RSP to accurately identify cloud heights. Interestingly, the analysis reveals an ability of the approach to identify multiple cloud layers in a single scene and estimate the CTH of each layer. Capabilities and limitations of identifying single and multiple cloud-layer heights are explored.
Special focus is given to sources of error in the method, including optically thin clouds, physically thick clouds, and multi-layered clouds, as well as cloud phase. When determining multi-layered CTHs, limits on the upper cloud's opacity are assessed.

  15. Vacuum Magnetic Field Mapping of the Compact Toroidal Hybrid (CTH)

    NASA Astrophysics Data System (ADS)

    Peterson, J. T.; Hanson, J.; Hartwell, G. J.; Knowlton, S. F.; Montgomery, C.; Munoz, J.

    2007-11-01

Vacuum magnetic field mapping experiments are performed on the CTH torsatron with a movable electron gun and phosphor-coated screen or movable wand at two different toroidal locations. These experiments compare the experimentally measured magnetic configuration produced by the as-built coil set to the magnetic configuration simulated with the IFT Biot-Savart code using the measured coil set parameters. Efforts to minimize differences between the experimentally measured location of the magnetic axis and its predicted value utilizing a Singular Value Decomposition (SVD) process result in small modifications of the helical coil winding law used to model the vacuum magnetic field geometry of CTH. Because these studies are performed at relatively low fields (B = 0.01 - 0.05 T), a uniform ambient magnetic field is included in the minimization procedure.

  16. NIMROD Simulations of Low-q Disruptions in the Compact Toroidal Hybrid Device (CTH)

    NASA Astrophysics Data System (ADS)

Howell, E. C.; Pandya, M. D.; Hanson, J. D.; Maurer, D. A.; Ennis, D. A.; Hartwell, G. J.

    2016-10-01

Nonlinear MHD simulations of low-q disruptions in CTH are presented. CTH is a current-carrying stellarator that is used to study the effects of 3D shaping. The application of 3D shaping stabilizes low-q disruptions in CTH. The amount of 3D shaping is controlled by adjusting the external rotational transform, and it is characterized by the ratio of the external rotational transform to the total transform: f = ι_vac / ι. Disruptions are routinely observed during operation with weak shaping (f < 0.05). The frequency of disruptions decreases with increasing amounts of 3D shaping, and the disruptions are completely suppressed for f > 0.1. Nonlinear simulations are performed using the NIMROD code to better understand how the shaping suppresses the disruptions. Comparisons of runs with weak (f = 0.04) and strong (f = 0.10) shaping are shown. This material is based upon work supported by Auburn University and the U.S. Department of Energy, Office of Science, Office of Fusion Energy Sciences under Award Numbers DE-FG02-03ER54692 and DE-FG02-00ER54610.

17. Testing and modeling of PBX-9501 shock initiation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lam, Kim; Foley, Timothy; Novak, Alan

    2010-01-01

This paper describes an ongoing effort to develop a detonation sensitivity test for PBX-9501 that is suitable for studying pristine and damaged HE. The approach involves testing and comparing the sensitivities of HE pressed to various densities and those of pre-damaged samples with similar porosities. The ultimate objectives are to understand the response of pre-damaged HE to shock impacts and to develop practical computational models for use in system analysis codes for HE safety studies. Computer simulation with the CTH shock physics code is used to aid the experimental design and analyze the test results. In the calculations, initiation and growth or failure of detonation are modeled with the empirical HVRB model. The historical LANL SSGT and LSGT were reviewed, and it was determined that a new, modified gap test should be developed to satisfy the current requirements. In the new test, the donor/spacer/acceptor assembly is placed in a holder that is designed to work with fixtures for pre-damaging the acceptor sample. CTH simulations were made of the gap test with PBX-9501 samples pressed to three different densities. The calculated sensitivities were validated by test observations. The agreement between the computed and experimental critical gap thicknesses, ranging from 9 to 21 mm under various test conditions, is well within 1 mm. These results show that the numerical modeling is a valuable complement to the experimental efforts in studying and understanding shock initiation of PBX-9501.

  18. Overview, Progress, and Plans for the Compact Toroidal Hybrid Experiment

    NASA Astrophysics Data System (ADS)

    Hartwell, G. J.; Allen, N. R.; Ennis, D. A.; Hanson, J. D.; Howell, E. C.; Johnson, C. A.; Knowlton, S. F.; Kring, J. D.; Ma, X.; Maurer, D. A.; Ross, K. G.; Schmitt, J. C.; Traverso, P. J.; Williamson, E. N.

    2017-10-01

The Compact Toroidal Hybrid (CTH) is an l = 2, m = 5 torsatron/tokamak hybrid (R0 = 0.75 m, ap ≈ 0.2 m, and |B| ≤ 0.7 T) which generates highly configurable confining magnetic fields solely with external coils but typically uses up to 80 kA of plasma current for heating and disruption studies. The main goals of the CTH experiment are to study disruptive behavior as a function of applied 3D magnetic shaping, and to test and advance the V3FIT reconstruction code and NIMROD modeling of CTH. The disruptive density limit is observed to exceed the Greenwald limit as the vacuum transform is increased with no observed threshold for avoidance. Low-q operations (1.1 < q(a) < 2.0) are routine, with disruptions ceasing if the vacuum transform is raised above 0.07. Sawteeth are observed in CTH and have a similar phenomenology to tokamak sawteeth despite employing a 3D confining field. Application of vacuum transform has been demonstrated to reduce and eliminate the vertical drift of elongated discharges. Internal SXR diagnostics, in conjunction with external magnetics, extend the range of reconstruction accuracy into the plasma core. This work is supported by U.S. Department of Energy Grant No. DE-FG02-00ER54610.

  19. Extending SIESTA capabilities: removing field-periodic and stellarator symmetric limitations

    NASA Astrophysics Data System (ADS)

    Cook, C. R.; Hirshman, S. P.; Sanchez, R.; Anderson, D. T.

    2011-10-01

    SIESTA is a three-dimensional magnetohydrodynamics equilibrium code capable of resolving magnetic islands in toroidal plasma confinement devices. Currently SIESTA assumes that plasma perturbations, and thus also magnetic islands, are field-periodic. This limitation is being removed from the code by allowing the displacement toroidal mode number to not be restricted to multiples of the number of field periods. Extending SIESTA in this manner will allow larger, lower-order resonant islands to form in devices such as CTH. An example of a non-field-periodic perturbation in CTH will be demonstrated. Currently the code also operates in a stellarator-symmetric fashion in which an ``up-down'' symmetry is present at some toroidal angle. Nearly all of the current tokamaks (and ITER in the future) operate with a divertor and as such do not possess stellarator symmetry. Removal of this symmetry restriction requires including both sine and cosine terms in the Fourier expansion for the geometry of the device and the fields contained within. The current status of this extension of the code will be discussed, along with the method of implementation. U.S. DOE Contract No. DE-AC05-00OR22725 with UT-Battelle, LLC.

  20. Computational modeling of electrostatic charge and fields produced by hypervelocity impact

    DOE PAGES

    Crawford, David A.

    2015-05-19

Following prior experimental evidence of electrostatic charge separation, electric and magnetic fields produced by hypervelocity impact, we have developed a model of electrostatic charge separation based on plasma sheath theory and implemented it into the CTH shock physics code. Preliminary assessment of the model shows good qualitative and quantitative agreement between the model and prior experiments at least in the hypervelocity regime for the porous carbonate material tested. The model agrees with the scaling analysis of experimental data performed in the prior work, suggesting that electric charge separation and the resulting electric and magnetic fields can be a substantial effect at larger scales, higher impact velocities, or both.

  1. Calculation of Eddy Currents In the CTH Vacuum Vessel and Coil Frame

    DOE Office of Scientific and Technical Information (OSTI.GOV)

Zolfaghari, A.; Brooks, A.; Michaels, A.; Hanson, J.; Hartwell, G.

    2012-09-25

Knowledge of eddy currents in the vacuum vessel walls and nearby conducting support structures can significantly contribute to the accuracy of Magnetohydrodynamics (MHD) equilibrium reconstruction in toroidal plasmas. Moreover, the magnetic fields produced by the eddy currents could generate error fields that may give rise to islands at rational surfaces or cause field lines to become chaotic. In the Compact Toroidal Hybrid (CTH) device (R0 = 0.75 m, a = 0.29 m, B ≤ 0.7 T), the primary driver of the eddy currents during the plasma discharge is the changing flux of the ohmic heating transformer. Electromagnetic simulations are used to calculate eddy current paths and profiles in the vacuum vessel and in the coil frame pieces with known time-dependent currents in the ohmic heating coils. The MAXWELL and SPARK codes were used for the electromagnetic modeling and simulation. The MAXWELL code was used for detailed 3D finite-element analysis of the eddy currents in the structures. The SPARK code was used to calculate the eddy currents in the structures as modeled with shell/surface elements, with each element representing a current loop. In both cases, current filaments representing the eddy currents were prepared for input into the VMEC code for MHD equilibrium reconstruction of the plasma discharge.

  2. Dimensional scaling for impact cratering and perforation

    NASA Technical Reports Server (NTRS)

    Watts, Alan J.; Atkinson, Dale

    1995-01-01

    POD Associates have revisited the issue of generic scaling laws able to adequately predict (within better than 20 percent) cratering in semi-infinite targets and perforations through finite thickness targets. The approach used was to apply physical logic for hydrodynamics in a consistent manner able to account for chunky-body impacts such that the only variables needed are those directly related to known material properties for both the impactor and target. The analyses were compared and verified versus CTH hydrodynamic code calculations and existing data. Comparisons with previous scaling laws were also performed to identify which (if any) were good for generic purposes. This paper is a short synopsis of the full report available through the NASA Langley Research Center, LDEF Science Office.

  3. Microenergetic Shock Initiation Studies on Deposited Films of Petn

    NASA Astrophysics Data System (ADS)

    Tappan, Alexander S.; Wixom, Ryan R.; Trott, Wayne M.; Long, Gregory T.; Knepper, Robert; Brundage, Aaron L.; Jones, David A.

    2009-12-01

    Films of the high explosive PETN (pentaerythritol tetranitrate) up to 500-μm thick have been deposited through physical vapor deposition, with the intent of creating well-defined samples for shock-initiation studies. PETN films were characterized with microscopy, x-ray diffraction, and focused ion beam nanotomography. These high-density films were subjected to strong shocks in both the out-of-plane and in-plane orientations. Initiation behavior was monitored with high-speed framing and streak camera photography. Direct initiation with a donor explosive (either RDX with binder, or CL-20 with binder) was possible in both orientations, but with the addition of a thin aluminum buffer plate (in-plane configuration only), initiation proved to be difficult. Initiation was possible with an explosively-driven 0.13-mm thick Kapton flyer and direct observation of initiation behavior was examined using streak camera photography at different flyer velocities. Models of this configuration were created using the shock physics code CTH.

  4. Coherence Imaging Measurements of Impurity Flow in the CTH and W7-X Experiments

    NASA Astrophysics Data System (ADS)

    Ennis, D. A.; Allen, N. R.; Hartwell, G. J.; Johnson, C. A.; Maurer, D. A.; Allen, S. L.; Samuell, C. M.; Gradic, D.; Konig, R.; Perseo, V.; W7-X Team

    2017-10-01

    Measurements of impurity ion emissivity and velocity in the Compact Toroidal Hybrid (CTH) experiment are achieved with a new optical coherence imaging diagnostic. The Coherence Imaging Spectroscopy (CIS) technique uses an imaging interferometer of fixed delay to provide 2D spectral images, making it ideal for investigating the non-axisymmetric geometry of CTH plasmas. Preliminary analysis of C III interferograms indicate a net toroidal flow on the order of 10 km/s during the time of peak current. Bench tests using Zn and Cd light sources reveal that the temperature of the interferometer optics must be controlled to within 0.01°C to limit phase drift resulting in artificially measured flow. A new collaboration between Auburn University and the Max-Planck-Institute for Plasma Physics is underway to develop two new coherence imaging instruments for ion impurity flow measurements in orthogonal directions to investigate the 3D physics of the W7-X island divertor during OP1.2. A continuous wave laser tunable over most of the visible region will be incorporated to provide immediate and accurate calibrations of both CIS systems during plasma operations. Work supported by USDoE Grant DE-FG02-00ER54610.

  5. Effect of mental stress on cold pain in chronic tension-type headache sufferers.

    PubMed

    Cathcart, Stuart; Winefield, Anthony H; Lushington, Kurt; Rolan, Paul

    2009-10-01

Mental stress is a noted contributing factor in chronic tension-type headache (CTH); however, the mechanisms underlying this are not clearly understood. One proposition is that stress aggravates already increased pain sensitivity in CTH sufferers. This hypothesis could be partially tested by examining effects of mental stress on threshold and supra-threshold experimental pain processing in CTH sufferers. Such studies have not been reported to date. The present study measured pain detection and tolerance thresholds and ratings of supra-threshold pain stimulation from a cold pressor test in CTH sufferers (CTH-S) and healthy control (CNT) subjects exposed to a 60-min stressful mental task, and in CTH sufferers exposed to a 60-min neutral condition (CTH-N). Headache sufferers had lower pain tolerance thresholds and increased pain intensity ratings compared to controls. Pain detection and tolerance thresholds decreased and pain intensity ratings increased during the stress task, with a greater reduction in pain detection threshold and increase in pain intensity ratings in the CTH-S compared to CNT group. The results support the hypothesis that mental stress contributes to CTH through aggravating already increased pain sensitivity in CTH sufferers.

  6. Noxious inhibition of temporal summation is impaired in chronic tension-type headache.

    PubMed

    Cathcart, Stuart; Winefield, Anthony H; Lushington, Kurt; Rolan, Paul

    2010-03-01

    To examine the effects of stress on noxious inhibition and temporal summation (TS) in tension-type headache. Stress is the most commonly reported trigger of a chronic tension-type headache (CTH) episode; however, the mechanisms underlying this are unclear. Stress affects pain processing throughout the central nervous system, including, potentially, mechanisms of TS and diffuse noxious inhibitory controls (DNIC), both of which may be abnormal in CTH sufferers (CTH-S). No studies have examined TS of pressure pain or DNIC of TS in CTH-S to date. Similarly, the effects of stress on TS or DNIC of TS have not been reported in healthy subjects or CTH-S to date. The present study measured TS and DNIC of TS in CTH-S and healthy controls (CNT) exposed to an hour-long stressful mental task, and in CTH-S exposed to an hour-long neutral condition. TS was elicited at the finger and shoulder via 10 pulses from a pressure algometer, applied before and during stimulation from an occlusion cuff at painful intensity. Algometer pain ratings increased more in the CTH-S group compared with the CNT group, and were inhibited more during cuff occlusion in the CNT group than in the CTH-S group. Task effects on TS or DNIC were not significant. The results indicate increased TS to pressure pain and impaired DNIC of TS in CTH-S. Stress does not appear to aggravate abnormal TS or DNIC mechanisms in CTH-S.

  7. Central roles of iron in the regulation of oxidative stress in the yeast Saccharomyces cerevisiae.

    PubMed

    Matsuo, Ryo; Mizobuchi, Shogo; Nakashima, Maya; Miki, Kensuke; Ayusawa, Dai; Fujii, Michihiko

    2017-10-01

    Oxygen is essential for aerobic organisms but causes cytotoxicity, probably through the generation of reactive oxygen species (ROS). In this study, we screened for genes that regulate oxidative stress in the yeast Saccharomyces cerevisiae and found that expression of CTH2/TIS11 caused increased resistance to ROS. CTH2 is up-regulated upon iron starvation and functions to remodel metabolism to adapt to iron starvation. We show here that the increased resistance to ROS conferred by CTH2 is likely caused by decreased ROS production due to reduced mitochondrial respiration, an observation consistent with the fact that CTH2 down-regulates mitochondrial respiratory proteins. We also found that expression of CTH1, a paralog of CTH2, likewise caused increased resistance to ROS. This finding supports the above view, because mitochondrial respiratory proteins are common targets of CTH1 and CTH2. We further showed that supplementation of iron in the medium augmented the growth of S. cerevisiae under oxidative stress, and that expression of CTH2 and supplementation of iron together enhanced its growth under oxidative stress. Since CTH2 is regulated by iron, these findings suggest that iron plays crucial roles in the regulation of oxidative stress in S. cerevisiae.

  8. Remote Sensing of Multiple Cloud Layer Heights Using Multi-Angular Measurements

    NASA Technical Reports Server (NTRS)

    Sinclair, Kenneth; Van Diedenhoven, Bastiaan; Cairns, Brian; Yorks, John; Wasilewski, Andrzej; Mcgill, Matthew

    2017-01-01

    Cloud top height (CTH) affects the radiative properties of clouds. Improved CTH observations will allow for improved parameterizations in large-scale models, and accurate information on CTH is also important when studying variations in freezing point and cloud microphysics. NASA's airborne Research Scanning Polarimeter (RSP) is able to measure cloud top height using a novel multi-angular contrast approach. For the determination of CTH, a set of consecutive nadir reflectances is selected and the cross-correlations between this set and co-located sets at other viewing angles are calculated for a range of assumed cloud top heights, yielding a correlation profile. Under the assumption that cloud reflectances are isotropic, local peaks in the correlation profile indicate cloud layers. This technique can be applied to every RSP footprint, and we demonstrate that detection of multiple peaks in the correlation profile allows retrieval of the heights of multiple cloud layers within single RSP footprints. This paper provides an in-depth description of the architecture and performance of the RSP's CTH retrieval technique using data obtained during the Studies of Emissions and Atmospheric Composition, Clouds and Climate Coupling by Regional Surveys (SEAC4RS) campaign. RSP-retrieved cloud heights are evaluated using collocated data from the Cloud Physics Lidar (CPL). The dependence of the method's accuracy on the magnitude of correlation, optical thickness, cloud thickness, and cloud height is explored. The technique is applied to measurements at wavelengths of 670 nm and 1880 nm and their combination. The 1880-nm band is virtually insensitive to the lower troposphere due to strong water vapor absorption.
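    The multi-angular contrast approach can be illustrated with a toy one-dimensional version (all values here are synthetic; the real retrieval operates on RSP footprints with full viewing geometry): a cloud layer at height h displaces an off-nadir view along-track by h·tan(θ), so correlating the nadir record against parallax-shifted off-nadir records over a range of assumed heights yields a correlation profile whose peak marks the cloud top.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic along-track cloud reflectance pattern (arbitrary units):
# smoothed noise standing in for real nadir reflectances.
n = 2000
dx = 10.0  # along-track sample spacing, m
pattern = np.convolve(rng.normal(size=n), np.ones(25) / 25, mode="same")

h_true = 3000.0           # true cloud-top height, m
theta = np.radians(40.0)  # off-nadir viewing angle
shift_true = int(round(h_true * np.tan(theta) / dx))

nadir = pattern
off_nadir = np.roll(pattern, shift_true)  # parallax-displaced view

def correlation(a, b):
    """Normalized cross-correlation of two equal-length records."""
    a = a - a.mean()
    b = b - b.mean()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Correlation profile over a range of assumed cloud-top heights:
# undo the parallax shift implied by each height, then correlate.
heights = np.arange(0.0, 6000.0, 50.0)
profile = [correlation(nadir,
                       np.roll(off_nadir, -int(round(h * np.tan(theta) / dx))))
           for h in heights]

h_retrieved = heights[int(np.argmax(profile))]  # peaks near h_true
```

With multiple cloud layers, each layer contributes its own local peak to the profile, which is the basis of the multi-layer retrieval described above.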

  9. Pain sensitivity mediates the relationship between stress and headache intensity in chronic tension-type headache.

    PubMed

    Cathcart, Stuart; Bhullar, Navjot; Immink, Maarten; Della Vedova, Chris; Hayball, John

    2012-01-01

    A central model for chronic tension-type headache (CTH) posits that stress contributes to headache, in part, by aggravating existing hyperalgesia in CTH sufferers. The prediction from this model that pain sensitivity mediates the relationship between stress and headache activity has not yet been examined. To determine whether pain sensitivity mediates the relationship between stress and prospective headache activity in CTH sufferers. Self-reported stress, pain sensitivity and prospective headache activity were measured in 53 CTH sufferers recruited from the general population. Pain sensitivity was modelled as a mediator between stress and headache activity, and tested using a nonparametric bootstrap analysis. Pain sensitivity significantly mediated the relationship between stress and headache intensity. The results of the present study support the central model for CTH, which posits that stress contributes to headache, in part, by aggravating existing hyperalgesia in CTH sufferers. Implications for the mechanisms and treatment of CTH are discussed.
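    The mediation test described, an indirect effect a·b estimated from two regressions and assessed with a nonparametric (case-resampling) bootstrap confidence interval, can be sketched as follows. The data are simulated with invented effect sizes; only the sample size (53) matches the study.

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulated data standing in for the study's measures (hypothetical
# effect sizes): stress -> pain sensitivity (a) -> headache (b).
n = 53
stress = rng.normal(size=n)
pain_sens = 0.8 * stress + rng.normal(scale=0.6, size=n)                    # path a
headache = 0.7 * pain_sens + 0.2 * stress + rng.normal(scale=0.6, size=n)  # path b + direct effect

def indirect_effect(x, m, y):
    """a*b indirect effect from two OLS fits (simple mediation)."""
    a = np.polyfit(x, m, 1)[0]  # slope of M ~ X
    design = np.column_stack([np.ones_like(x), x, m])
    beta = np.linalg.lstsq(design, y, rcond=None)[0]
    b = beta[2]                 # coefficient on M in Y ~ X + M
    return a * b

# Nonparametric bootstrap: resample cases with replacement and
# re-estimate the indirect effect each time.
boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)
    boot.append(indirect_effect(stress[idx], pain_sens[idx], headache[idx]))

# Percentile CI; mediation is deemed significant if it excludes zero.
ci = np.percentile(boot, [2.5, 97.5])
```

The percentile bootstrap avoids the normality assumption of the classical Sobel test, which is why it is the usual choice for small mediation samples like this one.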

  10. Final report for the Tera Computer TTI CRADA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Davidson, G.S.; Pavlakos, C.; Silva, C.

    1997-01-01

    Tera Computer and Sandia National Laboratories have completed a CRADA, which examined the Tera Multi-Threaded Architecture (MTA) for use with large codes of importance to industry and DOE. The MTA is an innovative architecture that uses parallelism to mask latency between memories and processors. The physical implementation is a parallel computer with high cross-section bandwidth and GaAs processors designed by Tera, which support many small computation threads and fast, lightweight context switches between them. When any thread blocks while waiting for memory accesses to complete, another thread immediately begins execution so that high CPU utilization is maintained. The Tera MTA parallel computer has a single, global address space, which is appealing when porting existing applications to a parallel computer. This ease of porting is further enabled by compiler technology that helps break computations into parallel threads. DOE and Sandia National Laboratories were interested in working with Tera to further develop this computing concept. While Tera Computer would continue the hardware development and compiler research, Sandia National Laboratories would work with Tera to ensure that their compilers worked well with important Sandia codes, most particularly CTH, a shock physics code used for weapon safety computations. In addition to that important code, Sandia National Laboratories would complete research on a robotic path planning code, SANDROS, which is important in manufacturing applications, and would evaluate the MTA performance on this code. Finally, Sandia would work directly with Tera to develop 3D visualization codes, which would be appropriate for use with the MTA. Each of these tasks has been completed to the extent possible, given that Tera has just completed the MTA hardware. All of the CRADA work had to be done on simulators.

  11. A neural hypothesis for stress-induced headache.

    PubMed

    Cathcart, Stuart

    2009-12-01

    The mechanisms by which stress contributes to chronic tension-type headache (CTH) are not clearly understood. The commonly accepted notion of muscle hyper-reactivity to stress in CTH sufferers is not supported by the research data. We propose a neural model whereby stress acts supra-spinally to aggravate already increased pain sensitivity in CTH sufferers. Indirect support for the model comes from emerging research elucidating complex supra-spinal networks through which psychological stress may contribute to and even cause pain. Similarly, emerging research demonstrates supra-spinal pain-processing abnormalities in CTH sufferers. While research with CTH sufferers offering direct support for the model is lacking at present, initial work by our group is consistent with the model's predictions, in particular that stress aggravates already increased pain sensitivity in CTH sufferers.

  12. Simulations of Low-q Disruptions in the Compact Toroidal Hybrid Experiment

    NASA Astrophysics Data System (ADS)

    Howell, E. C.; Hanson, J. D.; Ennis, D. A.; Hartwell, G. J.; Maurer, D. A.

    2017-10-01

    Resistive MHD simulations of low-q disruptions in the Compact Toroidal Hybrid (CTH) device are performed using the NIMROD code. CTH is a current-carrying stellarator used to study the effects of 3D shaping on MHD stability. Experimentally, it is observed that the application of 3D vacuum fields allows CTH to operate with an edge safety factor less than 2.0. However, these low-q discharges often disrupt after peak current if the applied 3D fields are too weak. Nonlinear simulations are initialized using model VMEC equilibria representative of low-q discharges with weak vacuum transform. Initially, a series of symmetry-preserving island chains is excited at the q = 6/5, 7/5, 8/5, and 9/5 rational surfaces. These island chains act as transport barriers preventing stochastic magnetic fields in the edge from penetrating into the core. As the simulation progresses, predominantly m/n = 3/2 and 4/3 instabilities are destabilized. As these instabilities grow to large amplitude, they destroy the symmetry-preserving islands, leading to large regions of stochastic fields. A current spike and loss of core thermal confinement occur when the innermost island chain (6/5) is destroyed. Work supported by US-DOE Grant #DE-FG02-03ER54692.

  13. Pain sensitivity mediates the relationship between stress and headache intensity in chronic tension-type headache

    PubMed Central

    Cathcart, Stuart; Bhullar, Navjot; Immink, Maarten; Della Vedova, Chris; Hayball, John

    2012-01-01

    BACKGROUND: A central model for chronic tension-type headache (CTH) posits that stress contributes to headache, in part, by aggravating existing hyperalgesia in CTH sufferers. The prediction from this model that pain sensitivity mediates the relationship between stress and headache activity has not yet been examined. OBJECTIVE: To determine whether pain sensitivity mediates the relationship between stress and prospective headache activity in CTH sufferers. METHOD: Self-reported stress, pain sensitivity and prospective headache activity were measured in 53 CTH sufferers recruited from the general population. Pain sensitivity was modelled as a mediator between stress and headache activity, and tested using a nonparametric bootstrap analysis. RESULTS: Pain sensitivity significantly mediated the relationship between stress and headache intensity. CONCLUSIONS: The results of the present study support the central model for CTH, which posits that stress contributes to headache, in part, by aggravating existing hyperalgesia in CTH sufferers. Implications for the mechanisms and treatment of CTH are discussed. PMID:23248808

  14. Microenergetic Shock Initiation Studies on Deposited Films of PETN

    NASA Astrophysics Data System (ADS)

    Tappan, Alexander S.; Wixom, Ryan R.; Trott, Wayne M.; Long, Gregory T.; Knepper, Robert; Brundage, Aaron L.; Jones, David A.

    2009-06-01

    Films of the high explosive PETN (pentaerythritol tetranitrate) up to 500-μm thick have been deposited through physical vapor deposition, with the intent of creating well-defined samples for shock-initiation studies. PETN films were characterized with surface profilometry, scanning electron microscopy, x-ray diffraction, and focused ion beam nanotomography. These high-density films were subjected to strong shocks in both the in-plane and out-of-plane orientations. Initiation behavior was monitored with high-speed framing and streak camera photography. Direct initiation with a donor explosive (either RDX with binder, or CL-20 with binder) was possible in both orientations, but with the addition of a thin aluminum buffer plate (in-plane configuration only), initiation proved to be difficult due to the attenuated shock and the high density of the PETN films. Mesoscale models of microenergetic samples were created using the shock physics code CTH and compared with experimental results. The results of these experiments will be discussed in the context of small sample geometry, deposited film morphology, and density.

  15. Hemizygosity of transsulfuration genes confers increased vulnerability against acetaminophen-induced hepatotoxicity in mice

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hagiya, Yoshifumi; Kamata, Shotaro; Mitsuoka, Saya

    2015-01-15

    The key mechanism for acetaminophen hepatotoxicity is cytochrome P450 (CYP)-dependent formation of N-acetyl-p-benzoquinone imine, a potent electrophile that forms protein adducts. Previous studies revealed the fundamental role of glutathione, which binds to and detoxifies N-acetyl-p-benzoquinone imine. Glutathione is synthesized from cysteine in the liver, and N-acetylcysteine is used as a sole antidote for acetaminophen poisoning. Here, we evaluated the potential roles of transsulfuration enzymes essential for cysteine biosynthesis, cystathionine β-synthase (CBS) and cystathionine γ-lyase (CTH), in acetaminophen hepatotoxicity using hemizygous (Cbs+/− or Cth+/−) and homozygous (Cth−/−) knockout mice. At 4 h after intraperitoneal acetaminophen injection, serum alanine aminotransferase levels were highly elevated in Cth−/− mice at a 150 mg/kg dose, and also in Cbs+/− or Cth+/− mice at a 250 mg/kg dose, which was associated with characteristic centrilobular hepatocyte oncosis. Hepatic glutathione was depleted while serum malondialdehyde accumulated in acetaminophen-injected Cth−/− mice but not wild-type mice, although glutamate–cysteine ligase (composed of catalytic [GCLC] and modifier [GCLM] subunits) became more activated in the livers of Cth−/− mice, with lower Km values for Cys and Glu. Proteome analysis using fluorescent two-dimensional difference gel electrophoresis revealed 47 differentially expressed proteins after injection of 150 mg acetaminophen/kg into Cth−/− mice; the profiles were similar to those of 1000 mg acetaminophen/kg-treated wild-type mice. The prevalence of Cbs or Cth hemizygosity is estimated to be 1 in 200–300 in the population; therefore, the deletion or polymorphism of either transsulfuration gene may underlie idiosyncratic acetaminophen vulnerability, along with differences in Cyp, Gclc, and Gclm gene activities.
    Highlights: • Cbs+/−, Cth+/−, and especially Cth−/− mice were susceptible to APAP hepatic injury. • Hepatic glutathione became rapidly depleted upon APAP injection in Cth−/− mice. • Hepatic glutamate–cysteine ligase was activated by APAP injection and CTH deletion. • 2D DIGE identified 47 differentially expressed hepatic proteins after APAP injection. • Both transsulfuration enzymes are essential for protection against APAP injury.

  16. Hydrothermal preparation and physicochemical studies of new copper nano-complexes for antitumor application

    NASA Astrophysics Data System (ADS)

    Saif, M.; El-Shafiy, Hoda F.; Mashaly, Mahmoud M.; Eid, Mohamed F.; Nabeel, A. I.; Fouad, R.

    2018-03-01

    Two novel nano-complexes, [(Cu)2(L)(NO3)2(OH2)] (CuH) and [Cu(HL)(OH2)2(NO3)] (CuCTH), were synthesized by a hydrothermal method at 200 °C for 48 h in the absence and presence of surfactant (CTAB), respectively. Introducing the surfactant (CTAB) changes the stoichiometric metal/ligand ratio, from the binuclear (CuH) to the mononuclear (CuCTH) nano-complex. CuH shows an irregular nano-flake shape, while CuCTH has a uniform nano-spherical morphology. Thermal analysis revealed that CuCTH is thermally stable in comparison with the CuH nano-complex. The CuCTH absorption peak shifted to shorter wavelength (blue shift) and the sharpness of the peak also decreased in the presence of CTAB. The role of CTAB in the crystal growth is discussed. The CuH and CuCTH nano-complexes were tested for their in vitro cytotoxicity against the Ehrlich Ascites Carcinoma cell line (E.A.C.). Both nano-complexes effectively inhibited E.A.C. growth, with IC50 values of 37 and 25 μM for CuH and CuCTH, respectively. The high antitumor activity of CuCTH was attributed to several factors such as spherical morphology, smaller size, chemical structure, and geometry. The LD50 of the highly cytotoxic CuCTH nano-complex in mice was found to be 100 mg/kg, with strong abscess formation in the abdomen as a side effect. To overcome this side effect, different molar ratios of CuCTH and previously prepared Zn nano-complexes were tested for their in vitro cytotoxicity and in vivo toxicity. The obtained results show that a 2:8 molar ratio of CuCTH to Zn nano-complexes gives very low toxicity without any side effects. Also, geometric optimization and conformational analysis were performed using the semi-empirical PM3 method. Energy gap (ΔE), dipole moment, and structure–activity relationship analyses were performed and discussed.

  17. Plasma serotonin in patients with chronic tension headaches.

    PubMed

    Anthony, M; Lance, J W

    1989-02-01

    Previous reports have suggested that the platelet level of serotonin in chronic tension headache (CTH) is lower than in normal control subjects, and that there is continuous activation of platelets both in migraine and in CTH. In this study we compared platelet serotonin concentrations in 95 patients with CTH, 166 patients with migraine, and 35 normal control subjects. Mean platelet serotonin (ng/10(9) platelets) was 310 for the CTH group, 384 during migraine headache, 474 for normal control subjects, and 514 in headache-free migrainous patients. There was a statistically significant difference between CTH patients and both normal control subjects and headache-free migrainous patients, but not migrainous patients during headache. It is suggested that CTH is a low-serotonin syndrome, representing one end of the spectrum of idiopathic headache, the other end being represented by migraine.

  18. Structural and magnetic diversity in cyano-bridged bi- and trimetallic complexes assembled from cyanometalates and [M(rac-CTH)]n+ building blocks (CTH = d,l-5,5,7,12,12,14-hexamethyl-1,4,8,11-tetraazacyclotetradecane).

    PubMed

    Rodríguez-Diéguez, Antonio; Kivekäs, Raikko; Sillanpää, Reijo; Cano, Joan; Lloret, Francesc; McKee, Vickie; Stoeckli-Evans, Helen; Colacio, Enrique

    2006-12-25

    Seven new cyano-bridged heterometallic systems have been prepared by assembling [M'(rac-CTH)]n+ complexes (M' = CrIII, NiII, CuII), which have two cis available coordination positions, and [M(CN)6]3- (M = FeIII, CrIII) and [Fe(CN)2(bpy)2]+ cyanometalate building blocks. The assembled systems, which have been characterized by X-ray crystallography and magnetic investigations, are the molecular squares (meso-CTH-H2)[{Ni(rac-CTH)}2{Fe(CN)6}2].5H2O (2) and [{Ni(rac-CTH)}2{Fe(CN)2(bpy)2}2](ClO4)4.H2O (5), the bimetallic chain [{Ni(rac-CTH)}2{Cr(CN)6}2Ni(meso-CTH)].4H2O (3), the trimetallic chain [{Ni(rac-CTH)}2{Fe(CN)6}2Cu(cyclam)].6H2O (4), the pentanuclear complexes [{Cu(rac-CTH)}3{Fe(CN)6}2].2H2O (6) and [{Cu(rac-CTH)}3{Cr(CN)6}2].2H2O (7), and the dinuclear complex [Cr(rac-CTH)(H2O)Fe(CN)6].2H2O (8). With the exception of 5, all compounds exhibit ferromagnetic interaction between the metal ions (JFeNi = 12.8(2) cm-1 for 2; J1FeCu = 13.8(2) cm-1 and J2FeCu = 3.9(4) cm-1 for 6; J1CrCu = 6.95(3) cm-1 and J2CrCu = 1.9(2) cm-1 for 7; JCrFe = 28.87(3) cm-1 for 8). Compound 5 exhibits the end of a transition from the high-spin to the low-spin state of the octahedral FeII ions. The bimetallic chain 3 behaves as a metamagnet with a critical field Hc = 300 G, which is associated with the occurrence of weak antiferromagnetic interactions between the chains. Although the trimetallic chain 4 shows some degree of spin correlation along the chain, magnetic ordering does not occur. The sign and magnitude of the magnetic exchange interaction between CrIII and FeIII in compound 8 have been justified by DFT-type calculations.

  19. Improved Strength and Damage Modeling of Geologic Materials

    NASA Astrophysics Data System (ADS)

    Stewart, Sarah; Senft, Laurel

    2007-06-01

    Collisions and impact cratering events are important processes in the evolution of planetary bodies. The time and length scales of planetary collisions, however, are inaccessible in the laboratory and require the use of shock physics codes. We present the results from a new rheological model for geological materials implemented in the CTH code [1]. The 'ROCK' model includes pressure, temperature, and damage effects on strength, as well as acoustic fluidization during impact crater collapse. We demonstrate that the model accurately reproduces final crater shapes, tensile cracking, and damaged zones from laboratory to planetary scales. The strength model requires basic material properties; hence, the input parameters may be benchmarked to laboratory results and extended to planetary collision events. We show the effects of varying material strength parameters, which are dependent on both scale and strain rate, and discuss choosing appropriate parameters for laboratory and planetary situations. The results are a significant improvement in models of continuum rock deformation during large-scale impact events. [1] Senft, L. E., Stewart, S. T. Modeling Impact Cratering in Layered Surfaces, J. Geophys. Res., submitted.

  20. Improving Metallic Thermal Protection System Hypervelocity Impact Resistance Through Design of Experiments Approach

    NASA Technical Reports Server (NTRS)

    Poteet, Carl C.; Blosser, Max L.

    2001-01-01

    A design of experiments approach has been implemented using computational hypervelocity impact simulations to determine the most effective place to add mass to an existing metallic Thermal Protection System (TPS) to improve hypervelocity impact protection. Simulations were performed using axisymmetric models in CTH, a shock-physics code developed by Sandia National Laboratories, and validated by comparison with existing test data. The axisymmetric models were then used in a statistical sensitivity analysis to determine the influence of five design parameters on the degree of hypervelocity particle dispersion. Several damage metrics were identified and evaluated. Damage metrics related to the extent of substructure damage were seen to produce misleading results; however, damage metrics related to the degree of dispersion of the hypervelocity particle produced results that corresponded to physical intuition. Based on analysis of variance results, it was concluded that the most effective way to increase hypervelocity impact resistance is to increase the thickness of the outer foil layer. Increasing the spacing between the outer surface and the substructure is also very effective at increasing dispersion.
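    The design-of-experiments logic can be sketched with a two-level full factorial over five coded parameters. The factor names and the response function below are invented stand-ins for the paper's CTH-derived damage metrics; the point is how main effects are extracted from a balanced design and ranked.

```python
import itertools
import numpy as np

# Hypothetical two-level full factorial over five TPS design
# parameters (names are illustrative, not the paper's).
factors = ["foil_thickness", "layer_spacing", "insulation_density",
           "substructure_thickness", "standoff"]

def dispersion(x):
    """Stand-in damage metric: degree of particle dispersion (higher
    is better). Coefficients are invented for illustration."""
    ft, sp, dens, sub, so = x
    return 5.0 * ft + 3.0 * sp + 0.5 * dens + 0.2 * sub + 1.0 * so + 0.4 * ft * sp

# Coded levels -1/+1 for each factor; 2^5 = 32 runs.
runs = np.array(list(itertools.product([-1.0, 1.0], repeat=5)))
y = np.array([dispersion(r) for r in runs])

# Main effect of each factor: mean response at the +1 level minus the
# mean at the -1 level (interactions average out in a balanced design).
effects = {f: y[runs[:, i] > 0].mean() - y[runs[:, i] < 0].mean()
           for i, f in enumerate(factors)}

best = max(effects, key=effects.get)  # the dominant parameter
```

Ranking `effects` reproduces the kind of conclusion quoted above: with these invented coefficients, the outer foil thickness dominates, followed by the spacing term.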

  1. Design of orbital debris shields for oblique hypervelocity impact

    NASA Technical Reports Server (NTRS)

    Fahrenthold, Eric P.

    1994-01-01

    A new impact debris propagation code was written to link CTH simulations of space debris shield perforation to the Lagrangian finite element code DYNA3D, for space structure wall impact simulations. This software (DC3D) simulates debris cloud evolution using a nonlinear elastic-plastic deformable particle dynamics model, and renders computationally tractable the supercomputer simulation of oblique impacts on Whipple shield protected structures. Comparison of three dimensional, oblique impact simulations with experimental data shows good agreement over a range of velocities of interest in the design of orbital debris shielding. Source code developed during this research is provided on the enclosed floppy disk. An abstract based on the work described was submitted to the 1994 Hypervelocity Impact Symposium.

  2. Effects of cross-sex hormone treatment on cortical thickness in transsexual individuals.

    PubMed

    Zubiaurre-Elorza, Leire; Junque, Carme; Gómez-Gil, Esther; Guillamon, Antonio

    2014-05-01

    Untreated transsexuals have a distinct brain cortical phenotype. Cross-sex hormone treatments are used to masculinize or feminize the bodies of female-to-male (FtM) or male-to-female (MtF) transsexuals, respectively. A longitudinal design was conducted to investigate the effects of treatments on the brain cortical thickness (CTh) of FtMs and MtFs. This study investigated 15 FtM and 14 MtF transsexuals before and during at least six months of cross-sex hormone therapy. Brain MRI was performed on a 3-Tesla TIM-TRIO Siemens scanner. T1-weighted images were analyzed with FreeSurfer software to obtain CTh as well as subcortical volumetric values. Changes in brain CTh and volumetry were assessed in relation to changes in hormonal levels due to cross-sex hormone therapy. After testosterone treatment, FtMs showed increases of CTh bilaterally in the postcentral gyrus and unilaterally in the inferior parietal, lingual, pericalcarine, and supramarginal areas of the left hemisphere and the rostral middle frontal and cuneus regions of the right hemisphere. There was a significant positive correlation between changes in serum testosterone and the free testosterone index and CTh changes in parieto-temporo-occipital regions. In contrast, MtFs, after estrogen and antiandrogen treatment, showed a general decrease in CTh and subcortical volumetric measures and an increase in the volume of the ventricles. Testosterone therapy increases CTh in FtMs. Thickening in cortical regions is associated with changes in testosterone levels. Estrogen and antiandrogen therapy in MtFs is associated with a decrease in CTh and a consequent enlargement of the ventricular system. © 2014 International Society for Sexual Medicine.

  3. Prism user's guide.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Weirs, V. Gregory

    2012-03-01

    Prism is a ParaView plugin that simultaneously displays simulation data and material model data. This document describes its capabilities and how to use them. A demonstration of Prism is given in the first section. The second section contains more detailed notes on less obvious behavior. The third and fourth sections are specifically for Alegra and CTH users. They tell how to generate the simulation data and SESAME files and how to handle aspects of Prism use particular to each of these codes.

  4. High-average-power CTH:YAG for the medical environment

    NASA Astrophysics Data System (ADS)

    Wright, Sidney P.; Adamkiewicz, Edward J.; Moulton, Peter F.

    1992-06-01

    Medical procedures such as arthroscopy have placed increasing demands on the output performance of the CTH:YAG laser at 2.1 μm. Intensive research has been conducted to improve the average power, pulse energies, and repetition rates while reducing failure mechanisms. The results of this work are reported, along with a discussion of the important engineering parameters concerning the design of a high-power medical CTH:YAG laser.

  5. The effects of capillary transit time heterogeneity (CTH) on brain oxygenation

    PubMed Central

    Angleys, Hugo; Østergaard, Leif; Jespersen, Sune N

    2015-01-01

    We recently extended the classic flow–diffusion equation, which relates blood flow to tissue oxygenation, to take capillary transit time heterogeneity (CTH) into account. Realizing that cerebral oxygen availability depends on both cerebral blood flow (CBF) and capillary flow patterns, we have speculated that CTH may be actively regulated and that changes in the capillary morphology and function, as well as in blood rheology, may be involved in the pathogenesis of conditions such as dementia and ischemia-reperfusion injury. The first extended flow–diffusion equation involved simplifying assumptions which may not hold in tissue. Here, we explicitly incorporate the effects of oxygen metabolism on tissue oxygen tension and extraction efficacy, and assess the extent to which the type of capillary transit time distribution affects the overall effects of CTH on flow–metabolism coupling reported earlier. After incorporating tissue oxygen metabolism, our model predicts changes in oxygen consumption and tissue oxygen tension during functional activation in accordance with literature reports. We find that, for large CTH values, a blood flow increase fails to cause significant improvements in oxygen delivery, and can even decrease it; a condition of malignant CTH. These results are found to be largely insensitive to the choice of the transit time distribution. PMID:25669911
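    The flow–diffusion extension described above has a compact closed form under common simplifying assumptions (single-capillary extraction E(t) = 1 − e^(−kt) and gamma-distributed transit times, as in the Jespersen–Østergaard framework; the rate constant k below is illustrative, not a fitted physiological value):

```python
import math

def max_oef(mtt, cth, k=1.0):
    """Upper bound on the oxygen extraction fraction for
    gamma-distributed capillary transit times.

    Assumes single-capillary extraction E(t) = 1 - exp(-k*t) averaged
    over a gamma density with mean MTT and standard deviation CTH;
    the gamma Laplace transform gives the closed form
    OEF_max = 1 - (beta / (beta + k))**alpha. Illustrative sketch only.
    """
    alpha = (mtt / cth) ** 2  # gamma shape parameter
    beta = alpha / mtt        # gamma rate parameter
    return 1.0 - (beta / (beta + k)) ** alpha

# At a fixed mean transit time, raising CTH degrades extraction:
homogeneous = max_oef(mtt=1.4, cth=0.05)   # near 1 - exp(-k*MTT)
heterogeneous = max_oef(mtt=1.4, cth=1.4)  # exponential-like transit times
```

This reproduces the qualitative behavior reported above: for large CTH at fixed mean transit time, achievable extraction falls, so a blood flow increase (which shortens the mean transit time) can fail to improve, or even reduce, net oxygen delivery, the "malignant CTH" regime.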

  6. Validation of numerical codes for impact and explosion cratering: Impacts on strengthless and metal targets

    NASA Astrophysics Data System (ADS)

    Pierazzo, E.; Artemieva, N.; Asphaug, E.; Baldwin, E. C.; Cazamias, J.; Coker, R.; Collins, G. S.; Crawford, D. A.; Davison, T.; Elbeshausen, D.; Holsapple, K. A.; Housen, K. R.; Korycansky, D. G.; Wünnemann, K.

    2008-12-01

    Over the last few decades, rapid improvement of computer capabilities has allowed impact cratering to be modeled with increasing complexity and realism, and has paved the way for a new era of numerical modeling of the impact process, including full, three-dimensional (3D) simulations. When properly benchmarked and validated against observation, computer models offer a powerful tool for understanding the mechanics of impact crater formation. This work presents results from the first phase of a project to benchmark and validate shock codes. A variety of 2D and 3D codes were used in this study, from commercial products like AUTODYN, to codes developed within the scientific community like SOVA, SPH, ZEUS-MP, iSALE, and codes developed at U.S. National Laboratories like CTH, SAGE/RAGE, and ALE3D. Benchmark calculations of shock wave propagation in aluminum-on-aluminum impacts were performed to examine the agreement between codes for simple idealized problems. The benchmark simulations show that variability in code results is to be expected due to differences in the underlying solution algorithm of each code, artificial stability parameters, spatial and temporal resolution, and material models. Overall, the inter-code variability in peak shock pressure as a function of distance is around 10 to 20%. In general, if the impactor is resolved by at least 20 cells across its radius, the underestimation of peak shock pressure due to spatial resolution is less than 10%. In addition to the benchmark tests, three validation tests were performed to examine the ability of the codes to reproduce the time evolution of crater radius and depth observed in vertical laboratory impacts in water and two well-characterized aluminum alloys. Results from these calculations are in good agreement with experiments. There appears to be a general tendency of shock physics codes to underestimate the radius of the forming crater. 
Overall, the discrepancy between the model and experiment results is between 10 and 20%, similar to the inter-code variability.
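    The quoted 10 to 20% inter-code variability is essentially a normalized spread across codes at a common measurement point. A minimal sketch of such a summary statistic, using hypothetical peak-pressure values rather than the study's data:

```python
# Hypothetical peak shock pressures (GPa) predicted by several codes at the
# same scaled distance from the impact; values are illustrative only, not
# results from the benchmark study.
peaks = {"CTH": 41.0, "iSALE": 38.5, "SOVA": 44.0, "ALE3D": 40.2}

def percent_spread(values):
    """Half the full range of the values, relative to their mean, in percent."""
    vals = list(values)
    mean = sum(vals) / len(vals)
    return 100.0 * (max(vals) - min(vals)) / (2.0 * mean)

spread = percent_spread(peaks.values())  # inter-code variability, percent
```

The same statistic computed at several distances would give the variability-versus-distance curve the benchmark reports.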

  7. A multi-subunit Chlamydia vaccine inducing neutralizing antibodies and strong IFN-γ⁺ CMI responses protects against a genital infection in minipigs.

    PubMed

    Bøje, Sarah; Olsen, Anja Weinreich; Erneholm, Karin; Agerholm, Jørgen Steen; Jungersen, Gregers; Andersen, Peter; Follmann, Frank

    2016-02-01

    Chlamydia is the most widespread sexually transmitted bacterial disease and a prophylactic vaccine is urgently needed. Ideally, such a vaccine should induce a Th1 cell-mediated immune (CMI) response in concert with neutralizing antibodies. Using a novel Göttingen minipig animal model, we evaluated the immunogenicity and efficacy of a multi-subunit vaccine formulated in the strong Th1-inducing adjuvant CAF01. We evaluated a mixture of two fusion proteins (Hirep1 and CTH93) designed to promote neutralizing antibodies and cell-mediated immunity, respectively. Hirep1 is a novel immunogen based on the variant domain (VD) 4 region from major outer membrane protein (MOMP) serovar (Sv) D, SvE and SvF, and CTH93 is a fusion molecule of three antigens (CT043, CT414 and MOMP). Pigs were immunized twice intramuscularly with either Hirep1+CTH93/CAF01, UV-inactivated Chlamydia trachomatis SvD bacteria (UV-SvD/CAF01) or CAF01. The Hirep1+CTH93/CAF01 vaccine induced a strong CMI response against the vaccine antigens and high titers of antibodies, particularly against the VD4 region of MOMP. Sera from Hirep1+CTH93/CAF01 immunized pigs neutralized C. trachomatis SvD and SvF infectivity in vitro. Both Hirep1+CTH93/CAF01 and UV-SvD/CAF01 vaccination protected pigs against a vaginal C. trachomatis SvD infection. In conclusion, the Hirep1+CTH93/CAF01 vaccine proved highly immunogenic and equally protective as UV-SvD/CAF01, showing promise for the development of a subunit vaccine against Chlamydia.

  8. A multi-subunit Chlamydia vaccine inducing neutralizing antibodies and strong IFN-γ+ CMI responses protects against a genital infection in minipigs

    PubMed Central

    Bøje, Sarah; Olsen, Anja Weinreich; Erneholm, Karin; Agerholm, Jørgen Steen; Jungersen, Gregers; Andersen, Peter; Follmann, Frank

    2016-01-01

    Chlamydia is the most widespread sexually transmitted bacterial disease and a prophylactic vaccine is urgently needed. Ideally, such a vaccine should induce a Th1 cell-mediated immune (CMI) response in concert with neutralizing antibodies. Using a novel Göttingen minipig animal model, we evaluated the immunogenicity and efficacy of a multi-subunit vaccine formulated in the strong Th1-inducing adjuvant CAF01. We evaluated a mixture of two fusion proteins (Hirep1 and CTH93) designed to promote neutralizing antibodies and cell-mediated immunity, respectively. Hirep1 is a novel immunogen based on the variant domain (VD) 4 region from major outer membrane protein (MOMP) serovar (Sv) D, SvE and SvF, and CTH93 is a fusion molecule of three antigens (CT043, CT414 and MOMP). Pigs were immunized twice intramuscularly with either Hirep1+CTH93/CAF01, UV-inactivated Chlamydia trachomatis SvD bacteria (UV-SvD/CAF01) or CAF01. The Hirep1+CTH93/CAF01 vaccine induced a strong CMI response against the vaccine antigens and high titers of antibodies, particularly against the VD4 region of MOMP. Sera from Hirep1+CTH93/CAF01 immunized pigs neutralized C. trachomatis SvD and SvF infectivity in vitro. Both Hirep1+CTH93/CAF01 and UV-SvD/CAF01 vaccination protected pigs against a vaginal C. trachomatis SvD infection. In conclusion, the Hirep1+CTH93/CAF01 vaccine proved highly immunogenic and equally protective as UV-SvD/CAF01, showing promise for the development of a subunit vaccine against Chlamydia. PMID:26268662

  9. Comparison of cloud top heights derived from FY-2 meteorological satellites with heights derived from ground-based millimeter wavelength cloud radar

    NASA Astrophysics Data System (ADS)

    Wang, Zhe; Wang, Zhenhui; Cao, Xiaozhong; Tao, Fa

    2018-01-01

    Clouds are currently observed by both ground-based and satellite remote sensing techniques. Each technique has its own strengths and weaknesses depending on the observation method, instrument performance and the methods used for retrieval. It is important to study synergistic cloud measurements to improve the reliability of the observations and to verify the different techniques. The FY-2 geostationary orbiting meteorological satellites continuously observe the sky over China. Their cloud top temperature product can be processed to retrieve the cloud top height (CTH). The ground-based millimeter wavelength cloud radar can acquire information about the vertical structure of clouds, such as the cloud base height (CBH), the CTH and the cloud thickness, and can continuously monitor changes in the vertical profiles of clouds. The CTHs were retrieved using both cloud top temperature data from the FY-2 satellites and the cloud radar reflectivity data for the same time period (June 2015 to May 2016), and the resulting datasets were compared in order to evaluate the accuracy of CTH retrievals from the FY-2 satellites. The results show that the concordance rate of cloud detection between the two datasets was 78.1%. Higher consistencies were obtained for thicker clouds with larger echo intensity and for more continuous clouds. The average difference in the CTH between the two techniques was 1.46 km. The difference in CTH between low- and mid-level clouds was less than that for high-level clouds. The attenuation threshold of the cloud radar for rainfall was 0.2 mm/min; a rainfall intensity below this threshold had no effect on the CTH. The satellite CTH can be used to compensate for the attenuation error in the cloud radar data.
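    The concordance rate and mean CTH difference reported here are simple paired statistics over co-located detections. A minimal sketch, using illustrative values rather than the study's data:

```python
# Paired observations: (satellite detects cloud, radar detects cloud,
# satellite CTH in km or None, radar CTH in km or None).
# Values are illustrative only, not the study's data.
pairs = [
    (True, True, 9.2, 8.0),
    (True, True, 6.1, 5.0),
    (False, True, None, 2.4),
    (True, True, 11.0, 9.3),
    (False, False, None, None),
]

# Concordance rate: fraction of pairs where the two instruments agree on
# whether a cloud is present at all.
concordance = sum(1 for s, r, _, _ in pairs if s == r) / len(pairs)

# Mean CTH difference over the pairs where both instruments report a cloud top.
diffs = [sc - rc for s, r, sc, rc in pairs if s and r]
mean_diff = sum(diffs) / len(diffs)
```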

  10. Brief mindfulness-based therapy for chronic tension-type headache: a randomized controlled pilot study.

    PubMed

    Cathcart, Stuart; Galatis, Nicola; Immink, Maarten; Proeve, Michael; Petkov, John

    2014-01-01

    Mindfulness-based therapy (MBT) has been demonstrated to be effective for reducing chronic pain symptoms; however, the use of MBT exclusively for chronic tension-type headache (CTH) has not been examined to date. Typically, MBT for chronic pain has involved an 8-week program based on Mindfulness-Based Stress Reduction. Recent research suggests that briefer mindfulness-based treatments may be effective for chronic pain. The aim of this study was to pilot the efficacy of brief MBT for CTH. We conducted a randomized controlled trial of a brief (6-session, 3-week) MBT for CTH. Results indicated a significant decrease in headache frequency and an increase in the mindfulness facet of Observe in the treatment group but not in the wait-list control group. Brief MBT may be an effective intervention for CTH.

  11. Central mechanisms of stress-induced headache.

    PubMed

    Cathcart, S; Petkov, J; Winefield, A H; Lushington, K; Rolan, P

    2010-03-01

    Stress is the most commonly reported trigger of an episode of chronic tension-type headache (CTTH); however, the causal significance has not been experimentally demonstrated to date. Stress may trigger CTTH through hyperalgesic effects on already sensitized pain pathways in CTTH sufferers. This hypothesis could be partially tested by examining pain sensitivity in an experimental model of stress-induced headache in CTTH sufferers. Such examinations have not been reported to date. We measured pericranial muscle tenderness and pain thresholds at the finger, head and shoulder in 23 CTTH sufferers (CTH-S) and 25 healthy control subjects (CNT) exposed to an hour-long stressful mental task, and in 23 CTTH sufferers exposed to an hour-long neutral condition (CTH-N). Headache developed in 91% of CTH-S, 4% of CNT, and 17% of CTH-N subjects. Headache sufferers had increased muscle tenderness and reduced pain thresholds compared with healthy controls. During the task, muscle tenderness increased and pain thresholds decreased in the CTH-S group compared with CTH-N and CNT groups. Pre-task muscle tenderness and reduction in pain threshold during task were predictive of the development and intensity of headache following task. The main findings are that stress induced a headache in CTTH sufferers, and this was associated with pre-task muscle tenderness and stress-induced reduction in pain thresholds. The results support the hypothesis that stress triggers CTTH through hyperalgesic effects on already increased pain sensitivity in CTTH sufferers, reducing the threshold to noxious input from pericranial structures.

  12. Determination of current and rotational transform profiles in a current-carrying stellarator using soft x-ray emissivity measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ma, X.; Cianciosa, M. R.; Ennis, D. A.

    In this research, collimated soft X-ray (SXR) emissivity measurements from multi-channel cameras on the Compact Toroidal Hybrid (CTH) tokamak/torsatron device are incorporated in the 3D equilibrium reconstruction code V3FIT to reconstruct the shape of flux surfaces and infer the current distribution within the plasma. Equilibrium reconstructions of sawtoothing plasmas that use data from both SXR and external magnetic diagnostics show the central safety factor to be near unity under the assumption that SXR iso-emissivity contours lie on magnetic flux surfaces. The reconstruction results are consistent with those using the external magnetic data and a constraint on the location of q = 1 surfaces determined from the sawtooth inversion surface extracted from SXR brightness profiles. The agreement justifies approximating SXR emission as a flux function in CTH, at least within the core of the plasma, subject to the spatial resolution of the SXR diagnostics. Lastly, this improved reconstruction of the central current density indicates that the current profile peakedness decreases with increasing external transform and that the internal inductance is not a relevant measure of how peaked the current profile is in hybrid discharges.

  13. Determination of current and rotational transform profiles in a current-carrying stellarator using soft x-ray emissivity measurements

    NASA Astrophysics Data System (ADS)

    Ma, X.; Cianciosa, M. R.; Ennis, D. A.; Hanson, J. D.; Hartwell, G. J.; Herfindal, J. L.; Howell, E. C.; Knowlton, S. F.; Maurer, D. A.; Traverso, P. J.

    2018-01-01

    Collimated soft X-ray (SXR) emissivity measurements from multi-channel cameras on the Compact Toroidal Hybrid (CTH) tokamak/torsatron device are incorporated in the 3D equilibrium reconstruction code V3FIT to reconstruct the shape of flux surfaces and infer the current distribution within the plasma. Equilibrium reconstructions of sawtoothing plasmas that use data from both SXR and external magnetic diagnostics show the central safety factor to be near unity under the assumption that SXR iso-emissivity contours lie on magnetic flux surfaces. The reconstruction results are consistent with those using the external magnetic data and a constraint on the location of q = 1 surfaces determined from the sawtooth inversion surface extracted from SXR brightness profiles. The agreement justifies approximating SXR emission as a flux function in CTH, at least within the core of the plasma, subject to the spatial resolution of the SXR diagnostics. This improved reconstruction of the central current density indicates that the current profile peakedness decreases with increasing external transform and that the internal inductance is not a relevant measure of how peaked the current profile is in hybrid discharges.

  14. Determination of current and rotational transform profiles in a current-carrying stellarator using soft x-ray emissivity measurements

    DOE PAGES

    Ma, X.; Cianciosa, M. R.; Ennis, D. A.; ...

    2018-01-31

    In this research, collimated soft X-ray (SXR) emissivity measurements from multi-channel cameras on the Compact Toroidal Hybrid (CTH) tokamak/torsatron device are incorporated in the 3D equilibrium reconstruction code V3FIT to reconstruct the shape of flux surfaces and infer the current distribution within the plasma. Equilibrium reconstructions of sawtoothing plasmas that use data from both SXR and external magnetic diagnostics show the central safety factor to be near unity under the assumption that SXR iso-emissivity contours lie on magnetic flux surfaces. The reconstruction results are consistent with those using the external magnetic data and a constraint on the location of q = 1 surfaces determined from the sawtooth inversion surface extracted from SXR brightness profiles. The agreement justifies approximating SXR emission as a flux function in CTH, at least within the core of the plasma, subject to the spatial resolution of the SXR diagnostics. Lastly, this improved reconstruction of the central current density indicates that the current profile peakedness decreases with increasing external transform and that the internal inductance is not a relevant measure of how peaked the current profile is in hybrid discharges.

  15. Novel Co:MgF2 lidar for aerosol profiler

    NASA Technical Reports Server (NTRS)

    Acharekar, M. A.

    1993-01-01

    Lidars are of great interest because of their unique capabilities in remote sensing applications in sounding of the atmosphere, meteorology, and climatology. In this small business innovative research (SBIR) phase II program, laser sources including Co:MgF2, CTH:YAG, CTH:YSGG, CT:YAG, and Er:Glass were evaluated. Modulators of fused silica and TeO2 with Brewster's-angle end faces were used with these lasers as acousto-optical (AO) Q-switches. A higher hold-off energy, and hence a higher Q-switched energy, was obtained by using a high-power RF driver. The report provides the performance characteristics of these lasers. The tunable (1.75-2.50 microns) Co:MgF2 laser damaged the TeO2 Q-switch cell. However, the CTH:YAG laser operating at 2.09 microns provided output energy of over 300 mJ/pulse in a 50 ns pulse width using the fused silica Q-switch. This Q-switched CTH:YAG laser was used in a breadboard vertical aerosol profiler. A 40 cm diameter telescope and InSb and InGaAs detectors were used in the receiver. The data obtained using this lidar are provided in the report and show that an eye-safe lidar using a CTH:YAG laser is a viable approach for vertical aerosol density and range measurements.
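    The range measurement in such a profiler follows from the round-trip time of the backscattered pulse, R = c*t/2. A minimal sketch with an illustrative timing value:

```python
# Speed of light in m/s.
C = 2.998e8

def range_from_delay(delay_s):
    """Target range from the round-trip delay of a backscattered lidar pulse."""
    return C * delay_s / 2.0

# A return arriving 20 microseconds after the pulse is emitted lies near 3 km.
r = range_from_delay(20e-6)
```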

  16. Characterization of the 2′,3′ cyclic phosphodiesterase activities of Clostridium thermocellum polynucleotide kinase-phosphatase and bacteriophage λ phosphatase

    PubMed Central

    Keppetipola, Niroshika; Shuman, Stewart

    2007-01-01

    Clostridium thermocellum polynucleotide kinase-phosphatase (CthPnkp) catalyzes 5′ and 3′ end-healing reactions that prepare broken RNA termini for sealing by RNA ligase. The central phosphatase domain of CthPnkp belongs to the dinuclear metallophosphoesterase superfamily exemplified by bacteriophage λ phosphatase (λ-Pase). CthPnkp is a Ni2+/Mn2+-dependent phosphodiesterase-monoesterase, active on nucleotide and non-nucleotide substrates, that can be transformed toward narrower metal and substrate specificities via mutations of the active site. Here we characterize the Mn2+-dependent 2′,3′ cyclic nucleotide phosphodiesterase activity of CthPnkp, the reaction most relevant to RNA repair pathways. We find that CthPnkp prefers a 2′,3′ cyclic phosphate to a 3′,5′ cyclic phosphate. A single H189D mutation imposes strict specificity for a 2′,3′ cyclic phosphate, which is cleaved to form a single 2′-NMP product. Analysis of the cyclic phosphodiesterase activities of mutated CthPnkp enzymes illuminates the active site and the structural features that affect substrate affinity and kcat. We also characterize a previously unrecognized phosphodiesterase activity of λ-Pase, which catalyzes hydrolysis of bis-p-nitrophenyl phosphate. λ-Pase also has cyclic phosphodiesterase activity with nucleoside 2′,3′ cyclic phosphates, which it hydrolyzes to yield a mixture of 2′-NMP and 3′-NMP products. We discuss our results in light of available structural and functional data for other phosphodiesterase members of the binuclear metallophosphoesterase family and draw inferences about how differences in active site composition influence catalytic repertoire. PMID:17986465

  17. Minimum magnetic curvature for resilient divertors using Compact Toroidal Hybrid geometry

    NASA Astrophysics Data System (ADS)

    Bader, A.; Hegna, C. C.; Cianciosa, M.; Hartwell, G. J.

    2018-05-01

    The properties of resilient divertors are explored using equilibria derived from Compact Toroidal Hybrid (CTH) geometries. Resilience is defined here as the robustness of the strike point patterns as the plasma geometry and/or plasma profiles are changed. The addition of plasma current in the CTH configurations significantly alters the shape of the last closed flux surface and the rotational transform profile; however, it does not alter the strike point pattern on the target plates, and hence has resilient divertor features. The limits of when a configuration becomes resilient are then explored. New CTH-like configurations are generated that vary from a perfectly circular cross section to configurations with increasing amounts of toroidal shaping. It is found that even small amounts of toroidal shaping lead to strike point localization that is similar to the standard CTH configuration. These results show that only a small degree of three-dimensional shaping is necessary to produce a resilient divertor, implying that any highly shaped optimized stellarator will possess the resilient divertor property.

  18. Cystathionine γ-Lyase-Produced Hydrogen Sulfide Controls Endothelial NO Bioavailability and Blood Pressure.

    PubMed

    Szijártó, István András; Markó, Lajos; Filipovic, Milos R; Miljkovic, Jan Lj; Tabeling, Christoph; Tsvetkov, Dmitry; Wang, Ning; Rabelo, Luiza A; Witzenrath, Martin; Diedrich, André; Tank, Jens; Akahoshi, Noriyuki; Kamata, Shotaro; Ishii, Isao; Gollasch, Maik

    2018-06-01

    Hydrogen sulfide (H2S) and NO are important gasotransmitters, but how endogenous H2S affects the circulatory system has remained incompletely understood. Here, we show that CTH or CSE (cystathionine γ-lyase)-produced H2S scavenges vascular NO and controls its endogenous levels in peripheral arteries, which contribute to blood pressure regulation. Furthermore, eNOS (endothelial NO synthase) and phospho-eNOS protein levels were unaffected, but levels of nitroxyl were low in CTH-deficient arteries, demonstrating reduced direct chemical interaction between H2S and NO. Pretreatment of arterial rings from CTH-deficient mice with exogenous H2S donor rescued the endothelial vasorelaxant response and decreased tissue NO levels. Our discovery that CTH-produced H2S inhibits endogenous endothelial NO bioavailability and vascular tone is novel and fundamentally important for understanding how regulation of vascular tone is tailored for endogenous H2S to contribute to systemic blood pressure function. © 2018 American Heart Association, Inc.

  19. Sex differences in cortical thickness and their possible genetic and sex hormonal underpinnings.

    PubMed

    Savic, I; Arver, S

    2014-12-01

    Although it has been shown that cortical thickness (Cth) differs between sexes, the underlying mechanisms are unknown. Seeing as XXY males have 1 extra X chromosome, we investigated the possible effects of X- and sex-chromosome dosage on Cth by comparing data from 31 XXY males with 39 XY and 47 XX controls. Plasma testosterone and estrogen were also measured in an effort to differentiate between possible sex-hormone and sex-chromosome gene effects. Cth was calculated with FreeSurfer software. Parietal and occipital Cth was greater in XX females than XY males. In these regions Cth was inversely correlated with z-normalized testosterone. In the motor strip, the cortex was thinner in XY males compared with both XX females and XXY males, indicating the possibility of an X-chromosome gene-dosage effect. XXY males had thinner right superior temporal and left middle temporal cortex, and a thicker right orbitofrontal cortex and lingual cortex than both control groups. Based on these data and previous reports from women with XO monosomy, it is hypothesized that programming of the motor cortex is influenced by processes linked to X-escapee genes, which do not have Y-chromosome homologs, and that programming of the superior temporal cortex is mediated by X-chromosome escapee genes with Y-homologs. © The Author 2013. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  20. Remote Sensing of Cloud Top Height from SEVIRI: Analysis of Eleven Current Retrieval Algorithms

    NASA Technical Reports Server (NTRS)

    Hamann, U.; Walther, A.; Baum, B.; Bennartz, R.; Bugliaro, L.; Derrien, M.; Francis, P. N.; Heidinger, A.; Joro, S.; Kniffka, A.; hide

    2014-01-01

    The role of clouds remains the largest uncertainty in climate projections. They influence solar and thermal radiative transfer and the earth's water cycle. Therefore, there is an urgent need for accurate cloud observations to validate climate models and to monitor climate change. Passive satellite imagers measuring radiation at visible to thermal infrared (IR) wavelengths provide a wealth of information on cloud properties. Among others, the cloud top height (CTH) - a crucial parameter to estimate the thermal cloud radiative forcing - can be retrieved. In this paper we investigate the skill of ten current retrieval algorithms to estimate the CTH using observations from the Spinning Enhanced Visible and InfraRed Imager (SEVIRI) onboard Meteosat Second Generation (MSG). In the first part we compare ten SEVIRI cloud top pressure (CTP) data sets with each other. The SEVIRI algorithms catch the latitudinal variation of the CTP in a similar way. The agreement is better in the extratropics than in the tropics. In the tropics multi-layer clouds and thin cirrus layers complicate the CTP retrieval, whereas a good agreement among the algorithms is found for trade wind cumulus, marine stratocumulus and the optically thick cores of the deep convective system. In the second part of the paper the SEVIRI retrievals are compared to CTH observations from the Cloud-Aerosol LIdar with Orthogonal Polarization (CALIOP) and Cloud Profiling Radar (CPR) instruments. It is important to note that the different measurement techniques cause differences in the retrieved CTH data. SEVIRI measures a radiatively effective CTH, while the CTH of the active instruments is derived from the return time of the emitted radar or lidar signal. Therefore, some systematic differences are expected. 
On average the CTHs detected by the SEVIRI algorithms are 1.0 to 2.5 kilometers lower than CALIOP observations, and the correlation coefficients between the SEVIRI and the CALIOP data sets range between 0.77 and 0.90. The average CTHs derived by the SEVIRI algorithms are closer to the CPR measurements than to CALIOP measurements. The biases between SEVIRI and CPR retrievals range from -0.8 kilometers to 0.6 kilometers. The correlation coefficients of CPR and SEVIRI observations vary between 0.82 and 0.89. To discuss the origin of the CTH deviation, we investigate three cloud categories: optically thin and thick single layer as well as multi-layer clouds. For optically thick clouds the correlation coefficients between the SEVIRI and the reference data sets are usually above 0.95. For optically thin single layer clouds the correlation coefficients are still above 0.92. For this cloud category the SEVIRI algorithms yield CTHs that are lower than CALIOP and similar to CPR observations. Most challenging are the multi-layer clouds, where the correlation coefficients are for most algorithms between 0.6 and 0.8. Finally, we evaluate the performance of the SEVIRI retrievals for boundary layer clouds. While the CTH retrieval for this cloud type is relatively accurate, there are still considerable differences between the algorithms. These are related to the uncertainties and limited vertical resolution of the assumed temperature profiles in combination with the presence of temperature inversions, which lead to ambiguities in the CTH retrieval. Alternative approaches for the CTH retrieval of low clouds are discussed.
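    The biases and correlation coefficients quoted above are standard paired statistics over co-located retrievals. A minimal sketch with illustrative values, not the study's data:

```python
import math

# Illustrative co-located CTH retrievals in km; not the study's data.
seviri = [2.1, 5.3, 9.8, 12.0, 7.4]
caliop = [3.0, 6.1, 11.2, 13.5, 8.0]

# Bias: mean SEVIRI-minus-CALIOP difference (negative means SEVIRI is lower).
bias = sum(s - c for s, c in zip(seviri, caliop)) / len(seviri)

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

r = pearson(seviri, caliop)
```

Computing these per cloud category (thin, thick, multi-layer) reproduces the kind of breakdown the comparison above reports.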

  1. Interannual variability of high ice cloud properties over the tropics

    NASA Astrophysics Data System (ADS)

    Tamura, S.; Iwabuchi, H.

    2015-12-01

    The El Niño/Southern Oscillation (ENSO) affects atmospheric conditions and cloud physical properties such as cloud fraction (CF) and cloud top height (CTH). However, the impact of ENSO on the physical properties of high ice clouds is not well known, so this study examines the relationship between the variability of ice cloud physical properties and ENSO. Ice clouds are inferred with the multiband IR method and categorized by cloud optical thickness (COT) as thin (0.1 < COT < 0.3), opaque (0.3 < COT < 3.6), thick (3.6 < COT < 11), and deep convective (DC) (COT > 11) clouds, and the relationship between ENSO and the interannual variability of cloud physical properties is investigated for each category over the period from January 2003 to December 2014. The deseasonalized anomalies of CF and CTH in all categories correlate well with the Niño3.4 index, with a positive anomaly over the eastern Pacific and a negative anomaly over the western Pacific during El Niño conditions. However, the global distribution of these correlation coefficients differs by cloud category. For example, the CF of DC clouds correlates well with the Niño3.4 index over the convergence zone, whereas that of thin clouds shows high correlation extending from the convergence zone to high latitudes, suggesting a connection with cloud formation. The global distributions of the average rate of change also differ by cloud category, because the categories associate differently with ENSO and a gradual trend toward La Niña conditions occurred over the analysis period. Detailed results and the relationship between the variability of cloud physical properties and atmospheric conditions will be presented.
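    Deseasonalized anomalies of the kind correlated with the Niño3.4 index above are typically formed by removing each calendar month's long-term mean. A minimal sketch with a synthetic series (illustrative values only):

```python
def deseasonalize(monthly):
    """Subtract each calendar month's long-term mean (series starts in January)."""
    out = []
    for i, value in enumerate(monthly):
        same_month = monthly[i % 12 :: 12]  # all values for this calendar month
        out.append(value - sum(same_month) / len(same_month))
    return out

# Two years of a synthetic monthly cloud-fraction series: an annual bump in
# July plus a slow linear trend (values are illustrative only).
cf = [0.5 + 0.1 * (m % 12 == 6) + 0.01 * m for m in range(24)]
anom = deseasonalize(cf)
```

The annual bump cancels in the anomalies, leaving only the interannual signal that can then be correlated with a climate index.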

  2. Modeling shock-driven reaction in low density PMDI foam

    NASA Astrophysics Data System (ADS)

    Brundage, Aaron; Alexander, C. Scott; Reinhart, William; Peterson, David

    Shock experiments on low density polyurethane foams reveal evidence of reaction at low impact pressures. However, these reaction thresholds are not evident in the low-pressure historical Hugoniot data for highly distended polyurethane at densities below 0.1 g/cc. To fill this gap, impact data given in a companion paper for polymethylene diisocyanate (PMDI) foam with a density of 0.087 g/cc were acquired for model validation. An equation of state (EOS) was developed to predict the shock response of these highly distended materials over the full range of impact conditions, representing compaction of the inert material, low-pressure decomposition, and compression of the reaction products. A tabular SESAME EOS of the reaction products was generated using the JCZS database in the TIGER equilibrium code. In particular, the Arrhenius Burn EOS, a two-state model that transitions from an unreacted to a reacted state using single-step Arrhenius kinetics, as implemented in the shock physics code CTH, was modified to include a statistical distribution of states. Hence, a single EOS is presented that predicts the onset of reaction due to shock loading in PMDI-based polyurethane foams. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's NNSA under Contract DE-AC04-94AL85000.
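    The single-step Arrhenius kinetics underlying such a two-state burn model can be sketched as an explicit update of the reacted mass fraction. The rate constants below are illustrative placeholders, not CTH's calibrated PMDI parameters:

```python
import math

def arrhenius_burn(T, Z, E_over_R, dt, steps):
    """Explicitly integrate dlam/dt = (1 - lam) * Z * exp(-E/(R*T)), where lam
    is the reacted mass fraction (0 = unreacted state, 1 = reacted state)."""
    k = Z * math.exp(-E_over_R / T)  # first-order rate constant, 1/s
    lam = 0.0
    for _ in range(steps):
        lam = min(lam + (1.0 - lam) * k * dt, 1.0)
    return lam

# Hotter shocked states react faster; all constants here are hypothetical.
lam_cold = arrhenius_burn(T=600.0, Z=1e6, E_over_R=1.2e4, dt=1e-6, steps=1000)
lam_hot = arrhenius_burn(T=1200.0, Z=1e6, E_over_R=1.2e4, dt=1e-6, steps=1000)
```

The strong temperature sensitivity of the exponential is what produces a reaction threshold: below some shock temperature, essentially no material reacts on the timescale of the experiment.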

  3. Wedge Experiment Modeling and Simulation for Reactive Flow Model Calibration

    NASA Astrophysics Data System (ADS)

    Maestas, Joseph T.; Dorgan, Robert J.; Sutherland, Gerrit T.

    2017-06-01

    Wedge experiments are a typical method for generating pop-plot data (run-to-detonation distance versus input shock pressure), which are used to assess an explosive material's initiation behavior. Such data can be used to calibrate reactive flow models by running hydrocode simulations and successively adjusting model parameters until a match with experiment is achieved. Such simulations are typically performed in 1D and use a flyer impact to achieve the prescribed shock loading pressure. In this effort, a wedge experiment performed at the Army Research Laboratory (ARL) was modeled using CTH (a Sandia National Laboratories hydrocode) in 1D, 2D, and 3D in order to determine whether there was any justification for using simplified models. A simulation was also performed using the BCAT code (a CTH companion tool), which assumes a plate-impact shock loading. Results from the simulations were compared to experimental data and show that the shock imparted into an explosive specimen is accurately captured with 2D and 3D simulations, but changes significantly in 1D and with the BCAT tool. The difference in shock profile is shown to affect numerical predictions only for large run distances. This is attributed to incorrectly capturing the energy fluence for detonation waves versus flat shock loading. Portions of this work were funded through the Joint Insensitive Munitions Technology Program.
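    Pop-plot data of the kind such calibrations target are commonly summarized by a power law, run distance x = a * P**(-b), fit as a straight line in log-log space. A minimal least-squares sketch with synthetic points and hypothetical coefficients:

```python
import math

# Synthetic pop-plot points (input pressure in GPa, run-to-detonation distance
# in mm) generated from x = 50 * P**-1.5; the coefficients are illustrative
# only, not calibrated values for any real explosive.
data = [(P, 50.0 * P ** -1.5) for P in (2.0, 4.0, 8.0)]

# Least-squares line in log-log space: log x = log a - b * log P.
lp = [math.log(P) for P, _ in data]
lx = [math.log(x) for _, x in data]
n = len(data)
mp, mx = sum(lp) / n, sum(lx) / n
slope = sum((p - mp) * (x - mx) for p, x in zip(lp, lx)) / sum(
    (p - mp) ** 2 for p in lp
)
b = -slope                     # pop-plot exponent
a = math.exp(mx - slope * mp)  # prefactor
```

Because the synthetic points lie exactly on the power law, the fit recovers the generating coefficients; noisy experimental points would instead yield a best-fit line through the scatter.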

  4. Prediction of Shock-Induced Cavitation in Water

    NASA Astrophysics Data System (ADS)

    Brundage, Aaron

    2013-06-01

    Fluid-structure interaction problems that require estimating the response of thin structures within fluids to shock loading have wide applicability. Examples include underwater explosions and the dynamic response of ships and submarines, as well as biological applications such as traumatic brain injury (TBI) and wound ballistics. In all of these applications, cavitation, in which small cavities containing dissolved gases or vapor form as the local pressure drops below the vapor pressure due to shock hydrodynamics, can cause significant damage to the surrounding thin structures or membranes if the bubbles collapse, generating additional shock loading. Hence, a two-phase equation of state (EOS) with three distinct regions of compression, expansion, and tension was developed to model shock-induced cavitation. This EOS was evaluated by comparing data from pressure and temperature shock Hugoniot measurements for water up to 400 kbar, and from ultrasonic pressure measurements in tension to -0.3 kbar, against simulated responses from CTH, an Eulerian finite-volume shock code. The new EOS model showed significant improvement over pre-existing CTH models, such as the SESAME EOS, in capturing cavitation. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy/NNSA under contract DE-AC04-94AL85000.
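    A minimal sketch of the three-region branching such an EOS implies, classifying a fluid element by its local pressure. The tension threshold here is an illustrative placeholder loosely motivated by the -0.3 kbar ultrasonic data mentioned above, not the model's actual calibration:

```python
# Classify which branch of a three-region (compression/expansion/tension)
# water EOS applies at a given pressure. The -0.3 kbar threshold is a
# hypothetical placeholder, not a calibrated model parameter.
P_CAV = -0.3  # kbar; tension at which the liquid is assumed to cavitate

def eos_region(p_kbar):
    """Return which EOS branch applies at pressure p (kbar, gauge)."""
    if p_kbar >= 0.0:
        return "compression"  # shocked/compressed liquid branch
    if p_kbar > P_CAV:
        return "expansion"    # expanded but still intact liquid
    return "tension"          # cavitating: vapor cavities form and grow

regions = [eos_region(p) for p in (100.0, -0.1, -0.5)]
```

In a hydrocode each branch would carry its own pressure-density relation; the classification step simply selects which relation to evaluate for a given cell.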

  5. Minimum magnetic curvature for resilient divertors using Compact Toroidal Hybrid geometry

    DOE PAGES

    Bader, Aaron; Hegna, C. C.; Cianciosa, Mark R.; ...

    2018-03-16

    The properties of resilient divertors are explored using equilibria derived from Compact Toroidal Hybrid (CTH) geometries. Resilience is defined here as the robustness of the strike point patterns as the plasma geometry and/or plasma profiles are changed. The addition of plasma current in the CTH configurations significantly alters the shape of the last closed flux surface and the rotational transform profile; however, it does not alter the strike point pattern on the target plates, and hence has resilient divertor features. The limits of when a configuration becomes resilient are then explored. New CTH-like configurations are generated that vary from a perfectly circular cross section to configurations with increasing amounts of toroidal shaping. It is found that even small amounts of toroidal shaping lead to strike point localization that is similar to the standard CTH configuration. Lastly, these results show that only a small degree of three-dimensional shaping is necessary to produce a resilient divertor, implying that any highly shaped optimized stellarator will possess the resilient divertor property.

  6. Primary hepatocytes from mice lacking cysteine dioxygenase show increased cysteine concentrations and higher rates of metabolism of cysteine to hydrogen sulfide and thiosulfate

    PubMed Central

    Jurkowska, Halina; Roman, Heather B.; Hirschberger, Lawrence L.; Sasakura, Kiyoshi; Nagano, Tetsuo; Hanaoka, Kenjiro; Krijt, Jakub

    2016-01-01

    The oxidation of cysteine in mammalian cells occurs by two routes: a highly regulated direct oxidation pathway in which the first step is catalyzed by cysteine dioxygenase (CDO), and desulfhydration-oxidation pathways in which the sulfur is released in a reduced oxidation state. To assess the effect of a lack of CDO on production of hydrogen sulfide (H2S) and thiosulfate (an intermediate in the oxidation of H2S to sulfate), and to explore the roles of both cystathionine γ-lyase (CTH) and cystathionine β-synthase (CBS) in cysteine desulfhydration by liver, we investigated the metabolism of cysteine in hepatocytes isolated from Cdo1-null and wild-type mice. Hepatocytes from Cdo1-null mice produced more H2S and thiosulfate than did hepatocytes from wild-type mice. The greater flux of cysteine through the cysteine desulfhydration reactions catalyzed by CTH and CBS in hepatocytes from Cdo1-null mice appeared to be the consequence of their higher cysteine levels, which were due to the lack of CDO and hence lack of catabolism of cysteine by the cysteinesulfinate-dependent pathways. Both CBS and CTH appeared to contribute substantially to cysteine desulfhydration, with estimates of 56% by CBS and 44% by CTH in hepatocytes from wild-type mice, and 63% by CBS and 37% by CTH in hepatocytes from Cdo1-null mice. PMID:24609271

  7. Preliminary Assessment of Long-term Cloud Top Heights in Central Taiwan

    NASA Astrophysics Data System (ADS)

    Lai, Y. J.; Po-Hsiung, L.

    2015-12-01

    The Xitou region is an epitome of the mid-elevation forest ecosystem and a famous forest recreation area in Taiwan. Although two disasters, the "921 earthquake" in 1999 and typhoon Toraji in 2001, hit this area heavily and caused a significant reduction in visitors from 1 million to about 0.4 million per year, the tourists returned after the reconstruction in 2003, and visits have approached a high of 1.5 million per year since 2010. The high number of tourists drives the development of the tourism industry which, unfortunately, increases the local sources of heating. A preliminary analysis showed a warming rate of 0.29 °C/decade from June 2005 to May 2013, while from the 1940s to the 1980s it was only 0.1 °C/decade. The warming pattern in the Xitou region is similar to the global warming situation in that a more dramatic trend occurred during the past 10 years. The change of land use, driven by the pressure of the tourism industry, might accelerate regional climate warming. For the purpose of understanding the cloud response to anthropogenic forcing, the long-term 1-km spatial resolution cloud top height (CTH) data sets (collection 6) from the Moderate Resolution Imaging Spectroradiometer (MODIS) were assessed. The results showed that the annual cloud event amounts of Terra and Aqua have changed insignificantly since 2003, regardless of the CTH. However, the cloud fraction with CTH less than 2000 m was 18% in 2003 and has dropped dramatically to 7% since 2011. Correspondingly, the fraction with CTH between 2000 m and 4000 m increased from 35% in 2003 to 45% in 2014. Further analysis of the nighttime events indicated a similar pattern but only a 6% difference between 2003 and 2014. The Aqua daytime events showed a more dramatic fraction anomaly, which decreased by 18% for CTH less than 2000 m and increased by 18% for CTH between 2000 m and 4000 m. This preliminary assessment suggests that clouds are being pushed higher, which might be caused by anthropogenic forcing during the last decade.
However, this study also found that the CTH data sets were sensitive to the upgrade of the inversion model and satellite calibration in 2010, which might be another important consideration. A total solution integrating ceilometer, ground lidar, spaceborne lidar, and UAV profile observations for monitoring and understanding the characteristics of Xitou microclimate change is still ongoing.

  8. Implementation of ERDC HEP Geo-Material Model in CTH and Application

    DTIC Science & Technology

    2011-11-02

    used TARDEC JWL inputs for C4 and Johnson-Cook strength inputs ... TARDEC JC fracture model inputs for 5083 plate changed due to problems seen in ... fracture inputs from IMD tests - LS-DYNA C4 JWL and Johnson-Cook strength inputs used in CTH runs - Results indicate that TARDEC JC fracture model

  9. CTH Implementation of a Two-Phase Material Model With Strength: Application to Porous Materials

    DTIC Science & Technology

    2012-07-01

    he worked in the Lavrentyev Institute of Hydrodynamics (Russian Academy of Science) in the area of constitutive modelling for problems of high...velocity impact. Anatoly obtained a PhD in Physics and Mathematics from the Institute of Hydrodynamics in 1985. In 1996-1998 he worked in a private...silica in the present consideration. Further work is planned to account for a phase transition using the three-phase modelling approach [1]. In the

  10. Atmospheric ammonia measurements at low concentration ...

    EPA Pesticide Factsheets

    We evaluated the relative importance of dry deposition of ammonia (NH3) gas at several headwater areas of the Susquehanna River, the largest single source of nitrogen pollution to Chesapeake Bay, including three that are remote from major sources of NH3 emissions (CTH, ARN, and KEF) and one (HFD) that is near a major agricultural source. We also examined the importance of nitrogen dioxide (NO2) deposition at one of these sites. Over the past decade, increasing evidence has suggested that NH3 deposition, in particular, may be an important contributor to total nitrogen deposition and to downstream nitrogen pollution. We used Ogawa passive samplers to measure NH3 concentrations over several years (2006–2011) for CTH, and primarily in 2008 and 2009 for the other sites. NO2 was measured at CTH mainly in 2007. Chamber calibration studies for NH3 and NO2, and field comparisons with annular denuders for NH3, validated the use of these passive samplers over a range of temperatures and humidity observed in the field, if attention is given to field and laboratory blank issues. The annual mean NH3 concentrations for the forested sites were 0.41 ± 0.03, 0.41 ± 0.06 and 0.25 ± 0.08 µg NH3/m3 for CTH, ARN and KEF, respectively. NO2 passive sampler mean annual concentration was 3.19 ± 0.42 µg NO2/m3 at CTH. Direct comparison of our measured values with the widely used Community Multiscale Air Quality (CMAQ) model (v4.7.1) show reasonably good agreement. However, the mod

  11. A chemical potentiator of copper-accumulation used to investigate the iron-regulons of Saccharomyces cerevisiae.

    PubMed

    Foster, Andrew W; Dainty, Samantha J; Patterson, Carl J; Pohl, Ehmke; Blackburn, Hannah; Wilson, Clare; Hess, Corinna R; Rutherford, Julian C; Quaranta, Laura; Corran, Andy; Robinson, Nigel J

    2014-07-01

    The extreme resistance of Saccharomyces cerevisiae to copper is overcome by 2-(6-benzyl-2-pyridyl)quinazoline (BPQ), providing a chemical-biology tool which has been exploited in two lines of discovery. First, BPQ is shown to form a red (BPQ)2 Cu(I) complex and promote Ctr1-independent copper-accumulation in whole cells and in mitochondria isolated from treated cells. Multiple phenotypes, including loss of aconitase activity, are consistent with copper-BPQ mediated damage to mitochondrial iron-sulphur clusters. Thus, a biochemical basis of copper-toxicity in S. cerevisiae is analogous to other organisms. Second, iron regulons controlled by Aft1/2, Cth2 and Yap5 that respond to mitochondrial iron-sulphur cluster status are modulated by copper-BPQ causing iron hyper-accumulation via upregulated iron-import. Comparison of copper-BPQ treated, untreated and copper-only treated wild-type and fra2Δ by RNA-seq has uncovered a new candidate Aft1 target-gene (LSO1) and paralogous non-target (LSO2), plus nine putative Cth2 target-transcripts. Two lines of evidence confirm that Fra2 dominates basal repression of the Aft1/2 regulons in iron-replete cultures. Fra2-independent control of these regulons is also observed but CTH2 itself appears to be atypically Fra2-dependent. However, control of Cth2-target transcripts which is independent of CTH2 transcript abundance or of Fra2, is also quantified. Use of copper-BPQ supports a substantial contribution of metabolite repression to iron-regulation. © 2014 The Authors. Molecular Microbiology published by John Wiley & Sons Ltd.

  12. A chemical potentiator of copper-accumulation used to investigate the iron-regulons of Saccharomyces cerevisiae

    PubMed Central

    Foster, Andrew W; Dainty, Samantha J; Patterson, Carl J; Pohl, Ehmke; Blackburn, Hannah; Wilson, Clare; Hess, Corinna R; Rutherford, Julian C; Quaranta, Laura; Corran, Andy; Robinson, Nigel J

    2014-01-01

    The extreme resistance of Saccharomyces cerevisiae to copper is overcome by 2-(6-benzyl-2-pyridyl)quinazoline (BPQ), providing a chemical-biology tool which has been exploited in two lines of discovery. First, BPQ is shown to form a red (BPQ)2Cu(I) complex and promote Ctr1-independent copper-accumulation in whole cells and in mitochondria isolated from treated cells. Multiple phenotypes, including loss of aconitase activity, are consistent with copper-BPQ mediated damage to mitochondrial iron–sulphur clusters. Thus, a biochemical basis of copper-toxicity in S. cerevisiae is analogous to other organisms. Second, iron regulons controlled by Aft1/2, Cth2 and Yap5 that respond to mitochondrial iron–sulphur cluster status are modulated by copper-BPQ causing iron hyper-accumulation via upregulated iron-import. Comparison of copper-BPQ treated, untreated and copper-only treated wild-type and fra2Δ by RNA-seq has uncovered a new candidate Aft1 target-gene (LSO1) and paralogous non-target (LSO2), plus nine putative Cth2 target-transcripts. Two lines of evidence confirm that Fra2 dominates basal repression of the Aft1/2 regulons in iron-replete cultures. Fra2-independent control of these regulons is also observed but CTH2 itself appears to be atypically Fra2-dependent. However, control of Cth2-target transcripts which is independent of CTH2 transcript abundance or of Fra2, is also quantified. Use of copper-BPQ supports a substantial contribution of metabolite repression to iron-regulation. PMID:24895027

  13. Testosterone-related cortical maturation across childhood and adolescence.

    PubMed

    Nguyen, Tuong-Vi; McCracken, James; Ducharme, Simon; Botteron, Kelly N; Mahabir, Megan; Johnson, Wendy; Israel, Mimi; Evans, Alan C; Karama, Sherif

    2013-06-01

    Neuroendocrine theories of brain development hold testosterone as the predominant factor mediating sex-specific cortical growth and the ensuing lateralization of hemispheric function. However, studies to date have focussed on prenatal testosterone rather than pubertal changes in testosterone. Yet, animal studies have shown a high density of androgen-sensitive receptors in multiple key cortical areas, and puberty is known to coincide with both a significant rise in testosterone and the emergence of behavioral sex differences, suggesting peripubertal influences of testosterone on brain development. Here, we used linear mixed models to examine sex-specific cortical maturation associated with changes in testosterone levels in a longitudinal sample of developmentally healthy children and adolescents. A significant "sex by age by testosterone" interaction on cortical thickness (CTh) involving widespread areas of the developing brain was found. Testosterone levels were associated with CTh changes in regions of the left hemisphere in males and of the right hemisphere in females. In both sexes, the relationship between testosterone and CTh varied across the age span. These findings show the association between testosterone and CTh to be complex, highly dynamic, and to vary, depending on sex and age; they also suggest sex-related hemispheric lateralization effects of testosterone in humans.
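The "sex by age by testosterone" interaction reported above corresponds to a three-way product term in the fixed-effects design of a linear mixed model. A minimal sketch of how such a design row expands (the variable coding is hypothetical and only illustrates how the product terms enter; it is not the study's actual model specification):

```python
# Sketch of the fixed-effects expansion behind a "sex x age x testosterone"
# interaction in a linear mixed model. Variable coding is hypothetical;
# this only illustrates how the three-way product enters the design matrix.

def design_row(sex, age, testosterone):
    """Intercept, main effects, two-way products, and the three-way product."""
    return [
        1.0, sex, age, testosterone,
        sex * age, sex * testosterone, age * testosterone,
        sex * age * testosterone,   # coefficient tested for the interaction
    ]
```

In the study's longitudinal setting, repeated scans per subject would additionally be handled by subject-level random effects; the sketch covers only the fixed-effects part that carries the reported interaction.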

  14. High-beta extended MHD simulations of stellarators

    NASA Astrophysics Data System (ADS)

    Bechtel, T. A.; Hegna, C. C.; Sovinec, C. R.; Roberds, N. A.

    2016-10-01

    The high beta properties of stellarator plasmas are studied using the nonlinear, extended MHD code NIMROD. In this work, we describe recent developments to the semi-implicit operator which allow the code to model 3D plasma evolution with better accuracy and efficiency. The configurations under investigation are an l=2, M=5 torsatron with geometry modeled after the Compact Toroidal Hybrid (CTH) experiment and an l=2, M=10 torsatron capable of having vacuum rotational transform profiles near unity. High-beta plasmas are created using a volumetric heating source and temperature-dependent anisotropic thermal conduction and resistivity. To reduce computational expense, simulations are initialized from stellarator-symmetric pseudo-equilibria by turning on symmetry-breaking modes at finite beta. The onset of MHD instabilities and their nonlinear consequences are monitored as a function of beta, as is the fragility of the magnetic surfaces. Research supported by US DOE under Grant No. DE-FG02-99ER54546.

  15. Cystathionine metabolic enzymes play a role in the inflammation resolution of human keratinocytes in response to sub-cytotoxic formaldehyde exposure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Eunyoung

    Low-level formaldehyde exposure is inevitable in industrialized countries. Although daily-life formaldehyde exposure levels are practically too low to induce cell death, most mechanistic studies of formaldehyde toxicity have been performed at cytotoxic concentrations high enough to trigger cell death mechanisms. Currently, the toxicological mechanisms underlying sub-cytotoxic exposure to formaldehyde have not been clearly elucidated in skin cells. In this study, genome-scale transcriptional analysis in normal human keratinocytes (NHKs) was performed to investigate cutaneous biological pathways associated with daily-life formaldehyde exposure. We selected the 175 upregulated differentially expressed genes (DEGs) and 116 downregulated DEGs in NHKs treated with 200 μM formaldehyde. In the Gene Ontology (GO) enrichment analysis of the 175 upregulated DEGs, the endoplasmic reticulum (ER) unfolded protein response (UPR) was identified as the most significant GO biological process in the formaldehyde-treated NHKs. Interestingly, sub-cytotoxic formaldehyde caused NHKs to upregulate two enzymes important in the cellular transsulfuration pathway, cystathionine γ-lyase (CTH) and cystathionine-β-synthase (CBS). In the temporal expression analysis, the upregulation of pro-inflammatory DEGs such as MMP1 and PTGS2 was detected earlier than that of CTH, CBS and other ER UPR genes. The metabolites of CTH and CBS, L-cystathionine and L-cysteine, attenuated the formaldehyde-induced upregulation of the pro-inflammatory DEGs MMP1, PTGS2, and CXCL8, suggesting that CTH and CBS play a role in the negative feedback regulation of formaldehyde-induced pro-inflammatory responses in NHKs. In this regard, the sub-cytotoxic formaldehyde-induced CBS and CTH may shift the inflammation fate decision toward resolution by suppressing the early pro-inflammatory response. - Highlights: • Sub-cytotoxic formaldehyde upregulates ER UPR-associated genes in NHKs.
• Formaldehyde-induced ER UPR genes include cystathionine γ-lyase (CTH). • Sub-cytotoxic formaldehyde upregulates cystathionine-β-synthase (CBS) in NHKs. • Cystathionine metabolic enzymes may attenuate formaldehyde-induced inflammation in NHKs. • Cystathionine metabolic enzymes may play a role in the resolution of inflammation in NHKs.

  16. Design and Implementation of a 200kW, 28GHz gyrotron system for the Compact Toroidal Hybrid Experiment

    NASA Astrophysics Data System (ADS)

    Hartwell, G. J.; Knowlton, S. F.; Ennis, D. A.; Maurer, D. A.; Bigelow, T.

    2016-10-01

    The Compact Toroidal Hybrid (CTH) is an l = 2, m = 5 torsatron/tokamak hybrid (R0 = 0.75 m, ap = 0.2 m, and |B| <= 0.7 T). It can generate its highly configurable confining magnetic fields solely with external coils, but typically operates with up to 80 kA of ohmically-generated plasma current for heating. New studies of edge plasma transport in stellarator geometries will benefit from CTH operating as a pure torsatron with a high temperature edge plasma. Accordingly, a 28 GHz, 200 kW gyrotron operating at the 2nd harmonic for ECRH is being installed to supplement the existing 15 kW klystron system operating at the fundamental frequency; the latter will be used to initially generate the plasma. Ray-tracing calculations that guide the selection of launching position, antenna focal length, and beam-steering characteristics of the ECRH have been performed with the TRAVIS code [1]. The calculated absorption is up to 95.7% for vertically propagating rays; however, the absorption is more sensitive to magnetic field variations than for a side launch, where the field gradient is tokamak-like. The design of the waveguide path and components for the top-launch scenario will be presented. This work is supported by U.S. Department of Energy Grant No. DE-FG02-00ER54610.
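As a consistency check on the quoted frequencies, the resonant field for heating at harmonic n of the electron cyclotron frequency follows from f_ce = eB/(2π m_e). A short sketch (this is a back-of-the-envelope check, not part of the TRAVIS ray-tracing calculation):

```python
import math

# Back-of-the-envelope check of the quoted ECRH frequencies: the resonant
# field for heating at harmonic n of the electron cyclotron frequency
# f_ce = e*B / (2*pi*m_e).

E_CHARGE = 1.602176634e-19     # elementary charge, C
M_ELECTRON = 9.1093837015e-31  # electron mass, kg

def resonant_field_tesla(freq_hz, harmonic):
    """Field at which freq_hz equals `harmonic` times f_ce."""
    return 2.0 * math.pi * freq_hz * M_ELECTRON / (harmonic * E_CHARGE)
```

A 28 GHz wave at the 2nd harmonic resonates near B ≈ 0.5 T, comfortably inside CTH's |B| <= 0.7 T; the fundamental resonance at that same field sits near 14 GHz.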

  17. Thomson scattering diagnostic on the Compact Toroidal Hybrid Experiment

    NASA Astrophysics Data System (ADS)

    Traverso, Peter; Maurer, D. A.; Ennis, D. A.; Hartwell, G. J.

    2016-10-01

    A Thomson scattering system is being commissioned for the non-axisymmetric plasmas of the Compact Toroidal Hybrid (CTH), a five-field period current-carrying torsatron. The system takes a single point measurement at the magnetic axis to both calibrate the two-color soft x-ray Te system and serve as an additional diagnostic for the V3FIT 3D equilibrium reconstruction code. A single point measurement will reduce the uncertainty in the reconstructed peak pressure by an order of magnitude for both current-carrying plasmas and future gyrotron-heated stellarator plasmas. The beam, generated by a frequency-doubled Continuum 2 J Nd:YAG laser, is passed vertically through an entrance Brewster window and a two-aperture optical baffle system to minimize stray light. The beam line propagates 8 m to the CTH device mid-plane with the beam diameter < 3 mm inside the plasma volume. Thomson scattered light is collected by two adjacent f/2 plano-convex condenser lenses and focused onto a custom fiber bundle. The fiber is then re-bundled and routed to a Holospec f/1.8 spectrograph to collect the red-shifted scattered light from 535-565 nm. The system has been designed to measure plasmas with core Te of 100 to 200 eV and densities of 5 × 10^18 to 5 × 10^19 m^-3. Work supported by USDOE Grant DE-FG02-00ER54610.
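The quoted spectral window can be related to the electron temperature through thermal Doppler broadening of the scattered light. A sketch using the common non-relativistic 1/e half-width expression; the 90° scattering angle below is an assumption for illustration, since the actual collection geometry is not stated in the abstract:

```python
import math

# Approximate 1/e half-width of the Thomson-scattered spectrum
# (non-relativistic): d_lambda = (2*lambda0/c) * sin(theta/2) * sqrt(2*Te/m_e).

E_CHARGE = 1.602176634e-19   # C
M_ELECTRON = 9.1093837015e-31  # kg
C_LIGHT = 2.99792458e8       # m/s

def thomson_halfwidth_nm(lambda0_nm, te_ev, theta_deg):
    """1/e half-width in nm for incident wavelength lambda0_nm,
    electron temperature te_ev, and scattering angle theta_deg."""
    v_th = math.sqrt(2.0 * te_ev * E_CHARGE / M_ELECTRON)  # thermal speed
    return (2.0 * lambda0_nm / C_LIGHT) \
        * math.sin(math.radians(theta_deg) / 2.0) * v_th
```

For Te = 200 eV at 532 nm and a 90° scattering angle, this gives roughly 20 nm, comparable to the 535-565 nm collection band on the red-shifted side.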

  18. Analysis of Multi-Layered Materials Under High Velocity Impact Using CTH

    DTIC Science & Technology

    2008-03-01

    of state. The other relationship deals with the deviatoric stress and is taken care of by the constitutive equations, which are discussed in the next ... models in CTH decompose the total stress tensor into the spherical and deviatoric parts. The spherical part of the stress tensor is the equation of state ... investigate the effects of wave propagation. Waves in rods are considered to create a state of

  19. Er:YAG and CTH:YAG laser radiation: contact versus non-contact enamel ablation and sonic-activated bulk composite placement

    NASA Astrophysics Data System (ADS)

    Buckova, M.; Kasparova, M.; Dostalova, T.; Jelinkova, H.; Sulc, J.; Nemec, M.; Fibrich, M.; Bradna, P.; Miyagi, M.

    2013-05-01

    Laser radiation can be used for effective caries removal and cavity preparation without significant thermal effects, collateral damage of tooth structure, or patient discomfort. The aim of this study was to compare the quality of tissue after contact or non-contact Er:YAG and CTH:YAG laser radiation ablation. The second goal was to increase the sealing ability of hard dental tissues using sonic-activated bulk filling material with change in viscosity during processing. The artificial caries was prepared in intact teeth to simulate a demineralized surface and then the Er:YAG or CTH:YAG laser radiation was applied. The enamel artificial caries was gently removed by the laser radiation and sonic-activated composite fillings were inserted. A stereomicroscope and then a scanning electron microscope were used to evaluate the enamel surface. Er:YAG contact mode ablation in enamel was quick and precise; the cavity was smooth with a keyhole shaped prism and rod relief arrangement without a smear layer. The sonic-activated filling material was consistently regularly distributed; no cracks or microleakage in the enamel were observed. CTH:YAG irradiation was able to clean but not ablate the enamel surface; in contact and also in non-contact mode there was evidence of melting and fusing of the enamel.

  20. Characteristics of Low-q(a) Disruptions in the Compact Toroidal Hybrid

    NASA Astrophysics Data System (ADS)

    Pandya, M. D.; Archmiller, M. C.; Ennis, D. A.; Hartwell, G. J.; Maurer, D. A.

    2014-10-01

    Tokamak disruptions are dramatic events that lead to a sudden loss of plasma confinement. Disruptions that occur at low edge safety factor, q(a), limit the operation of tokamaks to q(a) >= 2. The Compact Toroidal Hybrid (CTH) is a torsatron-tokamak hybrid with a helical field coil and vertical field coils to establish a stellarator equilibrium, while an ohmic coil induces plasma current. A feature of the CTH device is the ability to adjust the vacuum rotational transform, tvac (t = 1/q), by varying the ratio of current in the helical and toroidal field coils. The value of edge tvac can be varied from about 0.02 to 0.3 (qvac(a) ~ 50 to 3.3). Plasma discharges in CTH are routinely observed to operate with q(a) < 2, and in some cases as low as q(a) ~ 1.1. In CTH, low-q(a) disruptions are observed with a dominant m/n = 3/2 precursor. The disruptivity of plasma discharges is over 80% when tvac(a) < 0.04 (qvac(a) > 25), and as tvac(a) is increased further, the disruptivity of the plasma discharges decreases. The disruptions are completely suppressed for tvac(a) > 0.07 (qvac(a) ~ 14). This work is supported by US Department of Energy Grant No. DE-FG02-00ER54610.
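The bookkeeping between rotational transform and safety factor above is a reciprocal conversion (t = 1/q). A small sketch encoding the quoted thresholds; the regime labels compress the reported disruptivity statistics and are purely illustrative:

```python
# Reciprocal conversion between edge vacuum rotational transform and edge
# safety factor, with the disruptivity thresholds quoted in the abstract.

def q_edge(iota_vac):
    """Edge safety factor from the edge vacuum rotational transform."""
    return 1.0 / iota_vac

def disruption_regime(iota_vac):
    """Illustrative regime labels for the reported CTH thresholds."""
    if iota_vac < 0.04:      # q_vac(a) > 25: disruptivity over 80%
        return "high disruptivity"
    if iota_vac > 0.07:      # q_vac(a) < ~14: disruptions suppressed
        return "suppressed"
    return "intermediate"    # disruptivity decreasing with iota_vac
```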

  1. Alterations of grey matter asymmetries in adolescents with prelingual deafness: a combined VBM and cortical thickness analysis.

    PubMed

    Li, Wenjing; Li, Jianhong; Xian, Junfang; Lv, Bin; Li, Meng; Wang, Chunheng; Li, Yong; Liu, Zhaohui; Liu, Sha; Wang, Zhenchang; He, Huiguang; Sabel, Bernhard A

    2013-01-01

    Prelingual deafness has been shown to lead to brain reorganization as demonstrated by functional parameters, but the anatomical evidence remains controversial. The present study investigated hemispheric asymmetry changes in deaf subjects using MRI, hypothesizing changes in auditory-, language- or visual-related regions after early deafness. Prelingually deaf adolescents (n = 16) and age- and gender-matched normal controls (n = 16) were recruited and hemispheric asymmetry was evaluated with voxel-based morphometry (VBM) from MRI combined with analysis of cortical thickness (CTh). Deaf adolescents showed more rightward asymmetries (L < R) of grey matter volume (GMV) in the cerebellum and more leftward CTh asymmetries (L > R) in the posterior cingulate gyrus and gyrus rectus. More rightward CTh asymmetries were observed in the precuneus, middle and superior frontal gyri, and middle occipital gyrus. The duration of hearing aid use was correlated with asymmetry of GMV in the cerebellum and of CTh in the gyrus rectus. Interestingly, the asymmetry of the auditory cortex was preserved in deaf subjects. When the brain is deprived of auditory input early in life, there are signs of irreversible morphological asymmetry changes in different brain regions, but also signs of reorganization and plasticity that are dependent on hearing aid use, i.e. use-dependent.

  2. Mixing-model Sensitivity to Initial Conditions in Hydrodynamic Predictions

    NASA Astrophysics Data System (ADS)

    Bigelow, Josiah; Silva, Humberto; Truman, C. Randall; Vorobieff, Peter

    2017-11-01

    Amagat and Dalton mixing models were studied to compare their thermodynamic predictions of shock states. Numerical simulations with the Sandia National Laboratories shock hydrodynamics code CTH modeled University of New Mexico (UNM) shock tube laboratory experiments shocking a 1:1 molar mixture of helium (He) and sulfur hexafluoride (SF6). Five input parameters were varied for sensitivity analysis: driver section pressure, driver section density, test section pressure, test section density, and mixture ratio (mole fraction). We show via incremental Latin hypercube sampling (LHS) analysis that significant differences exist between Amagat and Dalton mixing-model predictions. The differences observed in predicted shock speeds, temperatures, and pressures grow more pronounced with higher shock speeds. Supported by NNSA Grant DE-0002913.
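The two mixing rules compared above differ in what the components share: under Dalton mixing the species occupy the full mixture volume at a common temperature and their pressures add, while under Amagat mixing the species sit at a common pressure and their partial volumes add. A minimal sketch with van der Waals gases; the a, b constants only roughly resemble He and SF6 and the state point is arbitrary, so this illustrates the rules rather than the study's far more detailed EOS treatment:

```python
# Dalton vs. Amagat mixing rules for van der Waals gases (sketch).
# Dalton: species share the mixture volume at common T; pressures add.
# Amagat: species share the mixture pressure at common T; volumes add.

R_GAS = 8.314  # J/(mol K)

def p_vdw(n, temp, vol, a, b):
    """van der Waals pressure of n moles in volume vol at temperature temp."""
    return n * R_GAS * temp / (vol - n * b) - a * n * n / (vol * vol)

def p_dalton(species, temp, vol):
    """species: list of (moles, a, b). Sum of partial pressures."""
    return sum(p_vdw(n, temp, vol, a, b) for n, a, b in species)

def p_amagat(species, temp, vol):
    """Common pressure at which the species' partial volumes sum to vol."""
    def total_volume(p):
        total = 0.0
        for n, a, b in species:
            lo, hi = n * b * 1.0001, 1.0e3   # bracket for the volume root
            for _ in range(100):             # p_vdw decreases with volume here
                mid = 0.5 * (lo + hi)
                if p_vdw(n, temp, mid, a, b) > p:
                    lo = mid
                else:
                    hi = mid
            total += 0.5 * (lo + hi)
        return total
    p_lo, p_hi = 1.0, 1.0e9                  # bisect on the mixture pressure
    for _ in range(100):
        p_mid = 0.5 * (p_lo + p_hi)
        if total_volume(p_mid) > vol:
            p_lo = p_mid
        else:
            p_hi = p_mid
    return 0.5 * (p_lo + p_hi)
```

For an ideal gas (a = b = 0) the two rules coincide; with nonideal equations of state they generally diverge, consistent with the abstract's finding that the discrepancies grow with shock strength. The bisection assumes a supercritical (single-valued) isotherm.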

  3. ALEGRA -- A massively parallel h-adaptive code for solid dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Summers, R.M.; Wong, M.K.; Boucheron, E.A.

    1997-12-31

    ALEGRA is a multi-material, arbitrary-Lagrangian-Eulerian (ALE) code for solid dynamics designed to run on massively parallel (MP) computers. It combines the features of modern Eulerian shock codes, such as CTH, with modern Lagrangian structural analysis codes using an unstructured grid. ALEGRA is being developed for use on teraflop supercomputers to conduct advanced three-dimensional (3D) simulations of shock phenomena important to a variety of systems. ALEGRA was designed with the Single Program Multiple Data (SPMD) paradigm, in which the mesh is decomposed into sub-meshes so that each processor gets a single sub-mesh with approximately the same number of elements. Using this approach the authors have been able to produce a single code that can scale from one processor to thousands of processors. A current major effort is to develop efficient, high-precision simulation capabilities for ALEGRA, without the computational cost of using a global highly resolved mesh, through flexible, robust h-adaptivity of finite elements. H-adaptivity is the dynamic refinement of the mesh by subdividing elements, thus changing the characteristic element size and reducing numerical error. The authors are working on several major technical challenges that must be met to make effective use of HAMMER on MP computers.
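The SPMD decomposition described above, with each processor receiving a sub-mesh of approximately equal element count, can be sketched for a 1-D element numbering. This is purely illustrative; ALEGRA's actual partitioner operates on unstructured 3D meshes:

```python
# SPMD-style decomposition sketch: partition element ids into contiguous
# sub-meshes whose sizes differ by at most one element, one per processor.

def decompose(num_elements, num_procs):
    """Return a list of `num_procs` ranges covering 0..num_elements-1."""
    base, extra = divmod(num_elements, num_procs)
    submeshes, start = [], 0
    for p in range(num_procs):
        count = base + (1 if p < extra else 0)  # spread the remainder
        submeshes.append(range(start, start + count))
        start += count
    return submeshes
```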

  4. Reduced cortical thickness and increased surface area in antisocial personality disorder.

    PubMed

    Jiang, Weixiong; Li, Gang; Liu, Huasheng; Shi, Feng; Wang, Tao; Shen, Celina; Shen, Hui; Lee, Seong-Whan; Hu, Dewen; Wang, Wei; Shen, Dinggang

    2016-11-19

    Antisocial personality disorder (ASPD), one of whose characteristics is high impulsivity, is of great interest in the field of brain structure and function. However, little is known about possible impairments in the cortical anatomy in ASPD, in terms of cortical thickness (CTh) and surface area (SA), or about their possible relationship with impulsivity. In this neuroimaging study, we first investigated the changes of CTh and SA in ASPD patients, in comparison to those of healthy controls, and then performed correlation analyses between these measures and the ability of impulse control. We found that ASPD patients showed a thinner cortex but larger SA in several specific brain regions, i.e., bilateral superior frontal gyrus (SFG), orbitofrontal and triangularis, insula cortex, precuneus, middle frontal gyrus (MFG), middle temporal gyrus (MTG), and left bank of superior temporal sulcus (STS). In addition, we also found that the ability of impulse control was positively correlated with CTh in the SFG, MFG, orbitofrontal cortex (OFC), pars triangularis, superior temporal gyrus (STG), and insula cortex. To our knowledge, this study is the first to reveal simultaneous changes in CTh and SA in ASPD, as well as their relationship with impulsivity. These cortical structural changes may underlie the uncontrolled and callous behavioral characteristics of ASPD patients, and these potential biomarkers may be very helpful in understanding the pathomechanism of ASPD. Copyright © 2016 IBRO. Published by Elsevier Ltd. All rights reserved.

  5. Cirrus cloud retrieval from MSG/SEVIRI during day and night using artificial neural networks

    NASA Astrophysics Data System (ADS)

    Strandgren, Johan; Bugliaro, Luca

    2017-04-01

    By covering a large part of the Earth, cirrus clouds play an important role in climate as they reflect incoming solar radiation and absorb outgoing thermal radiation. Nevertheless, cirrus clouds remain one of the largest uncertainties in atmospheric research; the physical processes that govern their life cycle are still poorly understood, as is their representation in climate models. To monitor and better understand the properties and physical processes of cirrus clouds, it is essential that those tenuous clouds can be observed from geostationary spaceborne imagers like SEVIRI (Spinning Enhanced Visible and InfraRed Imager), which possess a high temporal resolution together with a large field of view and play an important role alongside in-situ observations in the investigation of cirrus cloud processes. CiPS (Cirrus Properties from SEVIRI) is a new algorithm targeting thin cirrus clouds. CiPS is an artificial neural network trained with coincident SEVIRI and CALIOP (Cloud-Aerosol Lidar with Orthogonal Polarization) observations in order to retrieve a cirrus cloud mask along with the cloud top height (CTH), ice optical thickness (IOT) and ice water path (IWP) from SEVIRI. By utilizing only the thermal/IR channels of SEVIRI, CiPS can be used during day and night, making it a powerful tool for cirrus life cycle analysis. Despite the great challenge of detecting thin cirrus clouds and retrieving their properties from a geostationary imager using only the thermal/IR wavelengths, CiPS performs well. Among the cirrus clouds detected by CALIOP, CiPS detects 70 and 95% of the clouds with an optical thickness of 0.1 and 1.0, respectively. Among the cirrus-free pixels, CiPS classifies 96% correctly. For the CTH retrieval, CiPS has a mean absolute percentage error of 10% or less with respect to CALIOP for cirrus clouds with a CTH greater than 8 km.
For the IOT retrieval, CiPS has a mean absolute percentage error of 100% or less with respect to CALIOP for cirrus clouds with an optical thickness down to 0.07. For such thin cirrus clouds an error of 100% should be regarded as low for a geostationary imager like SEVIRI. The IWP retrieved by CiPS shows a similar performance, but has larger deviations for the thinner cirrus clouds.
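The CTH and IOT skill numbers quoted above are mean absolute percentage errors with respect to CALIOP. For reference, a minimal sketch of that metric:

```python
# Mean absolute percentage error (MAPE), the skill metric quoted for the
# CiPS cloud top height and optical thickness retrievals against CALIOP.

def mape_percent(retrieved, reference):
    """MAPE in percent; reference values must be nonzero."""
    errors = [abs(r - t) / abs(t) for r, t in zip(retrieved, reference)]
    return 100.0 * sum(errors) / len(errors)
```

Note that for very thin cirrus the reference values are small, which is why a MAPE of 100% can still be considered good skill there.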

  6. A physically based algorithm for non-blackbody correction of the cloud top temperature for the convective clouds

    NASA Astrophysics Data System (ADS)

    Wang, C.; Luo, Z. J.; Chen, X.; Zeng, X.; Tao, W.; Huang, X.

    2012-12-01

    Cloud top temperature is a key parameter to retrieve in the remote sensing of convective clouds. Passive remote sensing cannot directly measure the temperature at cloud tops. Here we explore a synergistic way of estimating cloud top temperature by making use of simultaneous passive and active remote sensing of clouds (in this case, CloudSat and MODIS). The weighting function of the MODIS 11 μm band is explicitly calculated by feeding cloud hydrometeor profiles from CloudSat retrievals and temperature and humidity profiles based on ECMWF ERA-Interim reanalysis into a radiative transfer model. Among 19,699 tropical deep convective clouds observed by CloudSat in 2008, the average effective emission level (EEL, where the weighting function attains its maximum) is at optical depth 0.91 with a standard deviation of 0.33. Furthermore, the vertical gradient of CloudSat radar reflectivity, an indicator of the fuzziness of the convective cloud top, is linearly proportional to d_{CTH-EEL}, the distance between the EEL of the 11 μm channel and the cloud top height (CTH) determined by CloudSat, when d_{CTH-EEL} < 0.6 km. Beyond 0.6 km, the distance has little sensitivity to the vertical gradient of CloudSat radar reflectivity. Based on these findings, we derive a formula between the fuzziness in the cloud top region, which is measurable by CloudSat, and the MODIS 11 μm brightness temperature, assuming that the difference between the effective emission temperature and the 11 μm brightness temperature is proportional to the cloud top fuzziness. This formula is verified using deep convective cloud profiles simulated by the Goddard Cumulus Ensemble model. We further discuss the application of this formula in estimating cloud top buoyancy, as well as the error characteristics of radiative calculations within such deep convective clouds.
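The effective emission level can be illustrated with a layered sketch: the discrete nadir weighting of a layer is approximately exp(-tau) * d_tau, where tau is the optical depth accumulated above it, and the EEL is where that weighting peaks. The profile values below are invented for illustration; this is not the study's radiative transfer calculation:

```python
import math

# Sketch: locate the effective emission level (EEL) from a layered
# extinction profile. Discrete weighting of layer i: W_i = exp(-tau_i)*d_tau_i,
# with tau_i the optical depth accumulated above the layer (nadir view).

def eel_optical_depth(layer_extinction, layer_dz):
    """Cumulative optical depth (from cloud top) of the layer whose
    discrete weighting is largest."""
    tau, best_tau, best_w = 0.0, 0.0, -1.0
    for k, dz in zip(layer_extinction, layer_dz):
        d_tau = k * dz
        w = math.exp(-tau) * d_tau
        if w > best_w:
            best_w, best_tau = w, tau + 0.5 * d_tau  # mid-layer optical depth
        tau += d_tau
    return best_tau
```

With extinction increasing downward (a fuzzy convective top), the peak sits near optical depth 1, consistent with the reported mean of 0.91; a sharp, uniform top instead puts the peak right at the cloud top.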

  7. Equations of state for explosive detonation products: The PANDA model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kerley, G.I.

    1994-05-01

    This paper discusses a thermochemical model for calculating equations of state (EOS) for the detonation products of explosives. The model, first presented at the Eighth Detonation Symposium, is available in the PANDA code and is referred to here as the PANDA model. Its basic features are as follows. (1) Statistical-mechanical theories are used to construct EOS tables for each of the chemical species allowed in the detonation products. (2) The ideal mixing model is used to compute the thermodynamic functions for a mixture of these species, and the composition of the system is determined from the assumption of chemical equilibrium. (3) For hydrocode calculations, the detonation product EOS are used in tabular form, together with a reactive burn model that describes shock-induced initiation and growth or failure, as well as ideal detonation wave propagation. The model has been implemented in the three-dimensional Eulerian code CTH.
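
The ideal mixing closure in feature (2) can be sketched in a few lines: at a common pressure and temperature, the mixture specific volume and internal energy are mass-fraction-weighted sums over the species EOS. The names below are ours, not PANDA's, and the ideal-gas species are placeholders for the tabular species EOS the paper actually uses.

```python
# Illustrative sketch of the ideal-mixing closure (hypothetical API).
def mixture_state(species, mass_fractions, p, T):
    """species: list of (v, e) callables giving specific volume and
    specific internal energy of each chemical species at (p, T).
    Returns the mixture specific volume and internal energy."""
    v_mix = sum(x * v(p, T) for (v, _), x in zip(species, mass_fractions))
    e_mix = sum(x * e(p, T) for (_, e), x in zip(species, mass_fractions))
    return v_mix, e_mix

# Two ideal-gas product species (N2-like and CO2-like), equal mass fractions:
n2 = (lambda p, T: 296.8 * T / p, lambda p, T: 743.0 * T)
co2 = (lambda p, T: 188.9 * T / p, lambda p, T: 657.0 * T)
v_mix, e_mix = mixture_state([n2, co2], [0.5, 0.5], 1.0e5, 300.0)
```

In the full model the equilibrium composition (the mass fractions) would itself be solved for by minimizing the mixture free energy; here it is simply given.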

  8. MISR CMVs and Multiangular Views of Tropical Cyclone Inner-Core Dynamics

    NASA Technical Reports Server (NTRS)

    Wu, Dong L.; Diner, David J.; Garay, Michael J.; Jovanovic, Veljko M.; Lee, Jae N.; Moroney, Catherine M.; Mueller, Kevin J.; Nelson, David L.

    2010-01-01

    Multi-camera stereo imaging of cloud features from the MISR (Multiangle Imaging SpectroRadiometer) instrument on NASA's Terra satellite provides accurate and precise measurements of cloud top heights (CTH) and cloud motion vector (CMV) winds. MISR observes each cloudy scene from nine viewing angles (nadir, ±26°, ±46°, ±60°, ±70°) with approximately 275-m pixel resolution. This paper provides an update on MISR CMV and CTH algorithm improvements, and explores a high-resolution retrieval of tangential winds inside the eyewall of tropical cyclones (TC). The CMV and CTH retrievals from the updated algorithm are significantly improved in terms of spatial coverage and systematic errors. A new product, the 1.1-km cross-track wind, provides high accuracy and precision in measuring convective outflows. Preliminary results from the 1.1-km tangential wind retrieval inside the TC eyewall show that the inner-core rotation is often faster near the eyewall, and this faster rotation appears to be linearly related to cyclone intensity.

  9. In situ identification of the synthrophic protein fermentative Coprothermobacter spp. involved in the thermophilic anaerobic digestion process.

    PubMed

    Gagliano, Maria Cristina; Braguglia, Camilla Maria; Rossetti, Simona

    2014-09-01

    Thermophilic bacteria have recently attracted great attention because of their potential to improve biochemical processes such as anaerobic digestion of various substrates, wastewater treatment, and hydrogen production. In this study we report the design of a specific 16S rRNA-targeted oligonucleotide probe for detecting members of the Coprothermobacter genus, which is characterized by strong protease activity toward proteins and peptides. The newly designed CTH485 probe and the helper probes hCTH429 and hCTH439 were optimized for use in fluorescence in situ hybridization (FISH) on thermophilic anaerobic sludge samples. In situ probing revealed that thermo-adaptive mechanisms shaping the 16S rRNA gene may affect the identification of thermophilic microorganisms. The newly developed FISH probe extends the possibility of studying the widespread thermophilic syntrophic interaction of Coprothermobacter spp. with hydrogenotrophic methanogenic archaea, whose establishment greatly benefits the whole anaerobic system. © 2014 Federation of European Microbiological Societies. Published by John Wiley & Sons Ltd. All rights reserved.

  10. Many Point Optical Velocimetry for Gas Gun Applications

    NASA Astrophysics Data System (ADS)

    Pena, Michael; Becker, Steven; Garza, Anselmo; Hanache, Michael; Hixson, Robert; Jennings, Richard; Matthes, Melissa; O'Toole, Brendan; Roy, Shawoon; Trabia, Mohamed

    2015-06-01

    With the emergence of the multiplexed photonic Doppler velocimeter (MPDV), it is now practical to record many velocity traces simultaneously in shock physics experiments. Optical measurements of plastic deformation during high-velocity impact have historically been constrained to a few measurement points. We applied a 32-channel MPDV system to gas gun experiments in order to measure plastic deformation of a steel plate. A two-dimensional array of measurement points allowed diagnostic coverage over a large surface area of the target plate, providing experimental flexibility to accommodate platform uncertainties as well as a wealth of data from a given experiment. The array was imaged from an MT fiber-optic connector using off-the-shelf optical components, yielding an economical and easy-to-assemble many-fiber probe. A two-stage light gas gun was used to launch a Lexan projectile at velocities ranging from 4 to 6 km/s at a 12.7-mm-thick A36 steel plate. Plastic deformation of the back surface was measured and compared with simulations from two different codes: LS-DYNA and CTH. Comparison of results indicates that computational analysis using either code can reasonably simulate experiments of this type.

  11. In silico investigation of blast-induced intracranial fluid cavitation as it potentially leads to traumatic brain injury

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Haniff, S.; Taylor, P. A.

    In this paper, we conducted computational macroscale simulations predicting blast-induced intracranial fluid cavitation possibly leading to brain injury. To further understanding of this problem, we developed microscale models investigating the effects of blast-induced cavitation bubble collapse within white matter axonal fiber bundles of the brain. We model fiber tracks of myelinated axons whose diameters are statistically representative of white matter. Nodes of Ranvier are modeled as unmyelinated sections of axon. Extracellular matrix envelops the axon fiber bundle, and gray matter is placed adjacent to the bundle. Cavitation bubbles are initially placed assuming an intracranial wave has already produced them. Pressure pulses, of varied strengths, are applied to the upper boundary of the gray matter and propagate through the model, inducing bubble collapse. Simulations, conducted using the shock wave physics code CTH, predict an increase in pressure and von Mises stress in axons downstream of the bubbles after collapse. This appears to be the result of hydrodynamic jetting produced during bubble collapse. Interestingly, results predict axon cores suffer significantly lower shear stresses from proximal bubble collapse than does their myelin sheathing. Finally, simulations also predict damage to myelin sheathing, which, if true, degrades axonal electrical transmissibility and general health of the white matter structures in the brain.

  12. In silico investigation of blast-induced intracranial fluid cavitation as it potentially leads to traumatic brain injury

    DOE PAGES

    Haniff, S.; Taylor, P. A.

    2017-10-17

    In this paper, we conducted computational macroscale simulations predicting blast-induced intracranial fluid cavitation possibly leading to brain injury. To further understanding of this problem, we developed microscale models investigating the effects of blast-induced cavitation bubble collapse within white matter axonal fiber bundles of the brain. We model fiber tracks of myelinated axons whose diameters are statistically representative of white matter. Nodes of Ranvier are modeled as unmyelinated sections of axon. Extracellular matrix envelops the axon fiber bundle, and gray matter is placed adjacent to the bundle. Cavitation bubbles are initially placed assuming an intracranial wave has already produced them. Pressure pulses, of varied strengths, are applied to the upper boundary of the gray matter and propagate through the model, inducing bubble collapse. Simulations, conducted using the shock wave physics code CTH, predict an increase in pressure and von Mises stress in axons downstream of the bubbles after collapse. This appears to be the result of hydrodynamic jetting produced during bubble collapse. Interestingly, results predict axon cores suffer significantly lower shear stresses from proximal bubble collapse than does their myelin sheathing. Finally, simulations also predict damage to myelin sheathing, which, if true, degrades axonal electrical transmissibility and general health of the white matter structures in the brain.

  13. Evaluation of XHVRB for Capturing Explosive Shock Desensitization

    NASA Astrophysics Data System (ADS)

    Tuttle, Leah; Schmitt, Robert; Kittell, Dave; Harstad, Eric

    2017-06-01

    Explosive shock desensitization phenomena have been recognized for some time, and it has been demonstrated that pressure-based reactive flow models do not adequately capture the basic nature of the behavior. Historically, replacing the local pressure with a shock-captured pressure has dramatically improved numerical modeling approaches, and models based upon shock pressure or functions of entropy have recently been developed. A pseudo-entropy-based formulation of the History Variable Reactive Burn model, as proposed by Starkenberg, was implemented into the Eulerian shock physics code CTH, and improvements were made to the shock capturing algorithm. The model is demonstrated to reproduce single-shock behavior consistent with published Pop plot data. It also captures a desensitization effect consistent with available literature data, and qualitatively captures dead zones from desensitization in 2D corner-turning experiments. This model shows promise for use in modeling and simulation problems relevant to desensitization phenomena. Issues are identified with the current implementation, and future work is proposed for improving and expanding model capabilities. Sandia National Laboratories is a multi-mission laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. DOE's National Nuclear Security Administration under contract DE-AC04-94AL85000.

  14. Evaluation of Impact Damage to the Burster Detonation Vessel Caused by Fragments from a Drained M121A1 Chemical Munition Detonated with an Initiation Charge

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    KIPP, MARLIN E.

    2001-12-01

    Explosive charges placed on the fuze end of a drained chemical munition are expected to be used as a means to destroy the fuze and burster charges of the munition. Analyses are presented to evaluate the effect of these additional initiation charges on the fragmentation characteristics of the M121A1 155mm chemical munition, modeled with a T244 fuze attached, and to assess the consequences of the fragment impacts on the walls of a containment chamber, the Burster Detonation Vessel. A numerical shock physics code (CTH) is used to characterize the mass and velocity of munition fragments. Both two- and three-dimensional simulations of the munition have been completed in this study. Based on threshold fragment velocity/mass results drawn from both previous and current analyses, it is determined that under all fragment impact conditions from the munition configurations considered in this study, no perforation of the inner chamber wall will occur, and the integrity of the Burster Detonation Vessel is retained. However, the munition case fragments have sufficient mass and velocity to locally damage the surface of the inner wall of the containment vessel.

  15. In silico investigation of blast-induced intracranial fluid cavitation as it potentially leads to traumatic brain injury

    NASA Astrophysics Data System (ADS)

    Haniff, S.; Taylor, P. A.

    2017-11-01

    We conducted computational macroscale simulations predicting blast-induced intracranial fluid cavitation possibly leading to brain injury. To further understanding of this problem, we developed microscale models investigating the effects of blast-induced cavitation bubble collapse within white matter axonal fiber bundles of the brain. We model fiber tracks of myelinated axons whose diameters are statistically representative of white matter. Nodes of Ranvier are modeled as unmyelinated sections of axon. Extracellular matrix envelops the axon fiber bundle, and gray matter is placed adjacent to the bundle. Cavitation bubbles are initially placed assuming an intracranial wave has already produced them. Pressure pulses, of varied strengths, are applied to the upper boundary of the gray matter and propagate through the model, inducing bubble collapse. Simulations, conducted using the shock wave physics code CTH, predict an increase in pressure and von Mises stress in axons downstream of the bubbles after collapse. This appears to be the result of hydrodynamic jetting produced during bubble collapse. Interestingly, results predict axon cores suffer significantly lower shear stresses from proximal bubble collapse than does their myelin sheathing. Simulations also predict damage to myelin sheathing, which, if true, degrades axonal electrical transmissibility and general health of the white matter structures in the brain.

  16. Cortical Thickness, Surface Area and Subcortical Volume Differentially Contribute to Cognitive Heterogeneity in Parkinson's Disease.

    PubMed

    Gerrits, Niels J H M; van Loenhoud, Anita C; van den Berg, Stan F; Berendse, Henk W; Foncke, Elisabeth M J; Klein, Martin; Stoffers, Diederick; van der Werf, Ysbrand D; van den Heuvel, Odile A

    2016-01-01

    Parkinson's disease (PD) is often associated with cognitive deficits, although their severity varies considerably between patients. Recently, we used voxel-based morphometry (VBM) to show that individual differences in gray matter (GM) volume relate to cognitive heterogeneity in PD. VBM does not, however, differentiate between cortical thickness (CTh) and surface area (SA), which might be independently affected in PD. We therefore re-analyzed our cohort using the surface-based method FreeSurfer, and investigated (i) CTh, SA, and (sub)cortical GM volume differences between 93 PD patients and 45 matched controls, and (ii) the relation between these structural measures and cognitive performance on six neuropsychological tasks within the PD group. We found cortical thinning in PD patients in the left pericalcarine gyrus, extending to cuneus, precuneus and lingual areas, the left inferior parietal cortex, the bilateral rostral middle frontal cortex, and the right cuneus, and increased cortical surface area in the left pars triangularis. Within the PD group, we found negative correlations between (i) CTh of occipital areas and performance on a verbal memory task, (ii) SA and volume of the frontal cortex and visuospatial memory performance, and (iii) volume of the right thalamus and scores on two verbal fluency tasks. Our primary findings illustrate that (i) CTh and SA are differentially affected in PD, and (ii) VBM and FreeSurfer yield non-overlapping results in an identical dataset. We argue that this discrepancy is due to technical differences and the subtlety of the PD-related structural changes.

  17. A strong adjuvant based on glycol-chitosan-coated lipid-polymer hybrid nanoparticles potentiates mucosal immune responses against the recombinant Chlamydia trachomatis fusion antigen CTH522.

    PubMed

    Rose, Fabrice; Wern, Jeanette Erbo; Gavins, Francesca; Andersen, Peter; Follmann, Frank; Foged, Camilla

    2018-02-10

    Induction of mucosal immunity with vaccines is attractive for the immunological protection against pathogen entry directly at the site of infection. An example is infection with Chlamydia trachomatis (Ct), which is the most common sexually transmitted infection in the world, and there is an unmet medical need for an effective vaccine. A vaccine against Ct should elicit protective humoral and cell-mediated immune (CMI) responses in the genital tract mucosa. We previously designed an antibody- and CMI-inducing adjuvant based on poly(dl-lactic-co-glycolic acid) (PLGA) nanoparticles modified with the cationic surfactant dimethyldioctadecylammonium bromide and the immunopotentiator trehalose-6,6'-dibehenate. Here we show that immunization with these lipid-polymer hybrid nanoparticles (LPNs) coated with the mucoadhesive polymer chitosan enhances mucosal immune responses. Glycol chitosan (GC)-modified LPNs were engineered using an oil-in-water single emulsion solvent evaporation method. The nanoparticle design was optimized in a highly systematic way by using a quality-by-design approach to define the optimal operating space and to gain maximal mechanistic information about the GC coating of the LPNs. Cryo-transmission electron microscopy revealed a PLGA core coated with one or several concentric lipid bilayers. The GC coating of the surface was identified as a saturable, GC concentration-dependent increase in particle size and a reduction of the zeta-potential, and the coating layer could be compressed upon addition of salt. Increased antigen-specific mucosal immune responses were induced in the lungs and the genital tract with the optimized GC-coated LPN adjuvant upon nasal immunization of mice with the recombinant Ct fusion antigen CTH522. The mucosal responses were characterized by CTH522-specific IgG/IgA antibodies, together with CTH522-specific interferon γ-producing Th1 cells. This study demonstrates that mucosal administration of CTH522 adjuvanted with chitosan-coated LPNs represents a promising strategy to modulate the magnitude of mucosal vaccine responses. Copyright © 2017 Elsevier B.V. All rights reserved.

  18. Human trophoblast-derived hydrogen sulfide stimulates placental artery endothelial cell angiogenesis.

    PubMed

    Chen, Dong-Bao; Feng, Lin; Hodges, Jennifer K; Lechuga, Thomas J; Zhang, Honghai

    2017-09-01

    Endogenous hydrogen sulfide (H2S), mainly synthesized by cystathionine β-synthase (CBS) and cystathionine γ-lyase (CTH), has been implicated in regulating placental angiogenesis; however, the underlying mechanisms are unknown. This study tested the hypothesis that trophoblasts synthesize H2S to promote placental angiogenesis. Human choriocarcinoma-derived BeWo cells expressed both CBS and CTH proteins, while the first-trimester villous trophoblast-originated HTR-8/SVneo cells expressed CTH protein only. The H2S-producing ability of BeWo cells was significantly inhibited by inhibitors of either CBS (carboxymethyl hydroxylamine hemihydrochloride, CHH) or CTH (β-cyano-L-alanine, BCA), whereas that of HTR-8/SVneo cells was inhibited by CHH only. H2S donors stimulated cell proliferation, migration, and tube formation in ovine placental artery endothelial cells (oFPAECs) as effectively as vascular endothelial growth factor. Co-culture with BeWo and HTR-8/SVneo cells stimulated oFPAEC migration, which was inhibited by CHH or BCA in BeWo cells but by CHH only in HTR-8/SVneo cells. Primary human villous trophoblasts (HVT) were more potent than the trophoblast cell lines in stimulating oFPAEC migration, which was inhibited by CHH and the CHH/BCA combination, in accordance with the H2S-synthesizing activity linked to their CBS and CTH expression patterns. H2S donors activated endothelial nitric oxide synthase (NOS3), v-AKT murine thymoma viral oncogene homolog 1 (AKT1), and extracellular signal-regulated kinase 1/2 (mitogen-activated protein kinase 3/1, MAPK3/1) in oFPAECs. H2S donor-induced NOS3 activation was blocked by AKT1 but not MAPK3/1 inhibition. In keeping with our previous studies showing a crucial role of AKT1, MAPK3/1, and NOS3/NO in placental angiogenesis, these data show that trophoblast-derived endogenous H2S stimulates placental angiogenesis, involving activation of AKT1, NOS3/NO, and MAPK3/1. © The Authors 2017. Published by Oxford University Press on behalf of Society for the Study of Reproduction. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  19. The Relation Between Capillary Transit Times and Hemoglobin Saturation Heterogeneity. Part 1: Theoretical Models

    PubMed Central

    Lücker, Adrien; Secomb, Timothy W.; Weber, Bruno; Jenny, Patrick

    2018-01-01

    Capillary dysfunction impairs oxygen supply to parenchymal cells and often occurs in Alzheimer's disease, diabetes and aging. Disturbed capillary flow patterns have been shown to limit the efficacy of oxygen extraction and can be quantified using capillary transit time heterogeneity (CTH). However, the transit time of red blood cells (RBCs) through the microvasculature is not a direct measure of their capacity for oxygen delivery. Here we examine the relation between CTH and capillary outflow saturation heterogeneity (COSH), which is the heterogeneity of blood oxygen content at the venous end of capillaries. Models for the evolution of hemoglobin saturation heterogeneity (HSH) in capillary networks were developed and validated using a computational model with moving RBCs. Two representative situations were selected: a Krogh cylinder geometry with heterogeneous hemoglobin saturation (HS) at the inflow, and a parallel array of four capillaries. The heterogeneity of HS after converging capillary bifurcations was found to exponentially decrease with a time scale of 0.15–0.21 s due to diffusive interaction between RBCs. Similarly, the HS difference between parallel capillaries also drops exponentially with a time scale of 0.12–0.19 s. These decay times are substantially smaller than measured RBC transit times and only weakly depend on the distance between microvessels. This work shows that diffusive interaction strongly reduces COSH on a small spatial scale. Therefore, we conclude that CTH influences COSH yet does not determine it. The second part of this study will focus on simulations in microvascular networks from the rodent cerebral cortex. Actual estimates of COSH and CTH will then be given. PMID:29755365
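
The exponential decay reported above implies that, for typical red blood cell transit times, very little of the inflow saturation heterogeneity survives to the venous end of the capillaries. This is a minimal sketch of that scaling argument; the names are ours, and the time constant is taken from the paper's reported 0.12–0.21 s range.

```python
import math

def residual_heterogeneity(initial_sd, transit_time_s, tau_s):
    """Fraction of the inflow hemoglobin-saturation heterogeneity that
    survives after diffusive interaction between RBCs, modeled as the
    exponential decay reported in the paper (illustrative only)."""
    return initial_sd * math.exp(-transit_time_s / tau_s)

# With a 1 s capillary transit time and tau = 0.18 s, under 1% of the
# inflow heterogeneity remains at the venous end:
frac = residual_heterogeneity(1.0, 1.0, 0.18)
```

This is the quantitative sense in which the decay times are "substantially smaller than measured RBC transit times", and why the authors conclude that CTH influences but does not determine COSH.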

  20. Rational Design, Development, and Stability Assessment of a Macrocyclic Four-Hydroxamate-Bearing Bifunctional Chelating Agent for 89Zr.

    PubMed

    Seibold, Uwe; Wängler, Björn; Wängler, Carmen

    2017-09-21

    Zirconium-89 is a positron-emitting radionuclide of high interest for medical imaging with positron emission tomography (PET). For the introduction of this radiometal into biologically active targeting vectors, the chelating agent desferrioxamine B (DFO) is commonly applied; however, DFO is known to form 89Zr complexes of limited in vivo stability. Herein we describe the rational design and chemical development of a new macrocyclic four-hydroxamate-bearing chelating agent, 1,10,19,28-tetrahydroxy-1,5,10,14,19,23,28,32-octaazacyclohexatriacontan-2,6,11,15,20,24,29,33-octaone (CTH36), for the stable complexation of Zr4+. For this purpose, we first performed computational studies to determine the optimal chelator geometry before developing different synthesis pathways toward the target structures. The best results were obtained using an efficient solution-phase synthesis strategy toward the target chelating agent. To enable efficient and chemoselective conjugation to biomolecules, a tetrazine-modified variant of CTH36 was also developed. The excellent conjugation characteristics of the functionalized chelator were demonstrated on the example of the model peptide TCO-c(RGDfK). We determined the optimal 89Zr radiolabeling parameters for CTH36 and its bioconjugate, and found that 89Zr radiolabeling proceeds efficiently under very mild reaction conditions. Finally, we performed comparative complex stability tests for 89Zr-CTH36-c(RGDfK) and 89Zr-DFO-c(RGDfK), showing improved complex stability for the newly developed chelator CTH36. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  1. The Relation Between Capillary Transit Times and Hemoglobin Saturation Heterogeneity. Part 1: Theoretical Models.

    PubMed

    Lücker, Adrien; Secomb, Timothy W; Weber, Bruno; Jenny, Patrick

    2018-01-01

    Capillary dysfunction impairs oxygen supply to parenchymal cells and often occurs in Alzheimer's disease, diabetes and aging. Disturbed capillary flow patterns have been shown to limit the efficacy of oxygen extraction and can be quantified using capillary transit time heterogeneity (CTH). However, the transit time of red blood cells (RBCs) through the microvasculature is not a direct measure of their capacity for oxygen delivery. Here we examine the relation between CTH and capillary outflow saturation heterogeneity (COSH), which is the heterogeneity of blood oxygen content at the venous end of capillaries. Models for the evolution of hemoglobin saturation heterogeneity (HSH) in capillary networks were developed and validated using a computational model with moving RBCs. Two representative situations were selected: a Krogh cylinder geometry with heterogeneous hemoglobin saturation (HS) at the inflow, and a parallel array of four capillaries. The heterogeneity of HS after converging capillary bifurcations was found to exponentially decrease with a time scale of 0.15-0.21 s due to diffusive interaction between RBCs. Similarly, the HS difference between parallel capillaries also drops exponentially with a time scale of 0.12-0.19 s. These decay times are substantially smaller than measured RBC transit times and only weakly depend on the distance between microvessels. This work shows that diffusive interaction strongly reduces COSH on a small spatial scale. Therefore, we conclude that CTH influences COSH yet does not determine it. The second part of this study will focus on simulations in microvascular networks from the rodent cerebral cortex. Actual estimates of COSH and CTH will then be given.

  2. Cerebral sex dimorphism and sexual orientation.

    PubMed

    Manzouri, Amirhossein; Savic, Ivanka

    2018-03-01

    The neurobiology of sexual orientation is frequently discussed in terms of cerebral sex dimorphism (defining both functional and structural sex differences). Yet, information about possible cerebral differences between sex-matched homo- and heterosexual persons is limited, particularly among women. In this multimodal MRI study, we addressed these issues by investigating possible cerebral differences between homo- and heterosexual persons, and by asking whether there is any sex difference in this respect. Measurements of cortical thickness (Cth), subcortical volumes, and functional and structural resting-state connections among 40 heterosexual males (HeM) and 40 heterosexual females (HeF) were compared with those of 30 homosexual males (HoM) and 30 homosexual females (HoF). Congruent with previous reports, sex differences were detected in heterosexual controls with regard to fractional anisotropy (FA), Cth, and several subcortical volumes. The homosexual groups did not display any sex differences in FA values. Furthermore, their functional connectivity was significantly less pronounced in the mesial prefrontal and precuneus regions. In these two particular regions, HoM also displayed thicker cerebral cortex than the other groups, whereas HoF did not differ from HeF. In addition, in HoM the parietal Cth showed "sex-reversed" values, not observed in HoF. Homosexual orientation seems associated with a less pronounced sexual differentiation of white matter tracts and a less pronounced functional connectivity of the self-referential networks compared to heterosexual orientation. Analyses of Cth suggest that male and female homosexuality are not simple analogues of each other and that differences from heterosexual controls are more pronounced in HoM. © 2017 Wiley Periodicals, Inc.

  3. Implementation of a High Explosive Equation of State into an Eulerian Hydrocode

    NASA Astrophysics Data System (ADS)

    Littlefield, David L.; Baker, Ernest L.

    2004-07-01

    The implementation of a high explosive equation of state into the Eulerian hydrocode CTH is described. The equation of state is an extension of JWL referred to as JWLB, intended to model the thermodynamic state of the detonation products of a high explosive reaction. The EOS was originally cast in the form p = p(ρ, e), where p is the pressure, ρ is the density, and e is the internal energy. However, the target application code requires an EOS of the form p = p(ρ, T), where T is the temperature, so it was necessary to reformulate the EOS in a thermodynamically consistent manner. A Helmholtz potential, developed from the original EOS, ensures this consistency. Example calculations are shown that illustrate the veracity of the implementation.
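
For reference, the base JWL form p = p(ρ, e) that JWLB extends can be sketched as below. The JWLB extension adds further exponential terms, which are omitted here; this is an illustrative sketch, not the CTH implementation, and the TNT-like parameter values are commonly published figures used only as an example.

```python
import math

def jwl_pressure(rho, e_per_v0, rho0, A, B, R1, R2, omega):
    """Standard JWL product EOS:
    p = A(1 - w/(R1 V)) exp(-R1 V) + B(1 - w/(R2 V)) exp(-R2 V) + w E / V,
    with V = rho0/rho and E the internal energy per unit initial volume."""
    V = rho0 / rho  # relative volume of the detonation products
    return (A * (1.0 - omega / (R1 * V)) * math.exp(-R1 * V)
            + B * (1.0 - omega / (R2 * V)) * math.exp(-R2 * V)
            + omega * e_per_v0 / V)

# TNT-like parameters (Pa, J/m^3) at the initial volume, for illustration:
p0 = jwl_pressure(rho=1630.0, e_per_v0=7.0e9, rho0=1630.0,
                  A=3.712e11, B=3.231e9, R1=4.15, R2=0.95, omega=0.30)
```

The reformulation described in the abstract keeps this p(ρ, e) behavior but derives it from a Helmholtz potential A(ρ, T), so that p and e follow from thermodynamically consistent derivatives.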

  4. T174. STRUCTURAL ABNORMALITIES IN THE CINGULATE CORTEX IN ADOLESCENTS AT ULTRA-HIGH RISK WHO LATER DEVELOP PSYCHOSIS

    PubMed Central

    Fortea, Adriana; van Eindhjoven, Phillip; Pariente, Jose; Calvo, Anna; Batalla, Albert; de la Serna, Elena; Ilzarbe, Daniel; Tor, Jordina; Dolz, Montserrat; Baeza, Inmaculada; Sugranyes, Gisela

    2018-01-01

    Abstract Background Identification of biomarkers of transition to psychosis in individuals at ultra-high risk (UHR) has the potential to improve future outcomes (McGorry, 2008). Structural MRI studies with UHR samples have revealed steeper rates of cortical thinning in temporal, prefrontal and cingulate cortices in individuals who later develop psychosis, in both baseline and longitudinal comparisons (Fusar-Poli, 2011; Cannon, 2014). However, little is known about how the onset of prodromal symptoms during adolescence impacts changes in cortical thickness (CTH) (Ziermans, 2012). Methods Multicentre cross-sectional case-control study, including youth aged 10–17 years, recruited from two child and adolescent mental health centres. UHR individuals were identified using the Structured Interview for Prodromal Syndromes criteria with some modifications. Healthy controls (HC) were recruited from the same geographical area. Exclusion criteria comprised personal history of psychotic symptoms, IQ<70, autism spectrum disorder, presence of neurological disorder, or antecedents of head trauma with loss of consciousness. The study was approved by the local Ethical Review Boards. All participants underwent a comprehensive socio-demographic and clinical evaluation at baseline and after 6, 12 and 18 months of follow-up to identify which individuals converted to psychosis (UHR-P) and which did not (UHR-NP). High-resolution magnetic resonance structural images were acquired at baseline on 3 Tesla and 1.5 Tesla scanners. An inter-site compatibility study conducted with healthy controls revealed high inter-site correlation coefficients (r>.6) for CTH measures. Images were pre-processed employing automated procedures implemented in FreeSurfer 5.3.0; cortical parcellation employed the Desikan-Killiany brain atlas. Analyses: First, mean global and lobar (frontal, parietal, temporal, occipital, insula and cingulate) CTH measurements were computed. Then, within lobes showing group effects, CTH was measured for each parcellation. ANCOVA was performed to test differences between groups in SPSS 22.0, including gender, age, total intracranial volume and site as covariates. Significance was set at p<.05, corrected using the false discovery rate (FDR). Results 122 subjects were included (59 UHR-NP vs. 18 UHR-P vs. 45 HC; mean ages: 15.2 ± 1.5 vs. 15.0 ± 1.8 vs. 15.8 ± 1.5, F=1.9, p=.15; gender (%female): 61.0% vs. 61.1% vs. 68.9%, χ2=.76, p=.68). There were no significant differences in case-control proportion between centres: χ2=1.3, p=.25. No significant differences in global CTH were found in UHR-P (2.57 ± 0.13mm) relative to UHR-NP (2.56 ± 0.11mm) and HC (2.58 ± 0.09mm). There was a significant group effect on the right cingulate cortex (F=6.6, pFDR=.024): UHR-P showed lower CTH in this area relative to controls (p=.007 uncorrected). Within the right cingulate cortex, a significant group effect was found in the posterior cingulate (F=5.7, pFDR=.016) and isthmus (F=4.6, pFDR=.024), and a trend-level effect in the caudal anterior cingulate (F=2.9, p=.057), with smaller CTH in UHR-P relative to HC in the isthmus cingulate (p=.025) and the posterior cingulate (p=.066). No significant differences were observed between the UHR-P and UHR-NP groups. Discussion UHR-P showed significant cortical thinning in several regions of the right cingulate cortex in comparison to HC, supporting the notion that structural alterations in the cingulate cortex may be present in children and adolescents prior to the onset of psychosis. Longitudinal changes in CTH have the potential to increase understanding of changes related to transition to clinical illness.

  5. Comparing CTH Simulations and Experiments on Explosively Loaded Rings

    NASA Astrophysics Data System (ADS)

    Braithwaite, C. H.; Aydelotte, B.; Thadhani, N. N.; Williamson, D. M.

    2011-06-01

    A series of experiments were conducted on explosively loaded rings for the purpose of studying fragmentation. In addition to the collection of fragments for analysis, the radial velocity of the expanding ring was measured with PDV and the arrangement was imaged using a high speed camera. Both the ring material and the material used as the explosive container were altered and the results compared with simulations performed in CTH. Good agreement was found between the simulations and the experiments. The maximum radial velocity attained was approximately 450 m/s, which was achieved through loading with a 5g PETN based charge.

  6. Explosive response model evaluation using the explosive H6

    NASA Astrophysics Data System (ADS)

    Sutherland, Gerrit T.; Burns, Joseph

    2000-04-01

    Reactive rate model parameters for a two-term Lee-Tarver [simplified ignition and growth (SIG)] model were obtained for the explosive H6 from modified gap test data. This model was used to perform simulations of the underwater sensitivity test (UST) using the CTH hydrocode. Reaction was predicted in the simulations for the same water gaps for which reaction was observed in the UST. The expansions observed for the UST samples were not simulated correctly; this is attributed to the density equilibrium conditions imposed between unreacted and reacted components in CTH for the Lee-Tarver model.
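
    A two-term (ignition plus single growth) rate law of this family is often written as dλ/dt = I(1−λ)^b(ρ/ρ0 − 1 − a)^x + G(1−λ)^c λ^d p^y, where λ is the reacted mass fraction. A minimal sketch integrating that form at fixed pressure and compression; every parameter value below is hypothetical, not a calibrated H6 fit:

```python
def sig_rate(lam, p, mu, I=100.0, b=0.667, a=0.0, x=4.0,
             G=50.0, c=0.667, d=0.333, y=2.0):
    """Two-term ignition-and-growth reaction rate (hypothetical parameters).
    lam: reacted fraction, p: pressure (GPa), mu: compression rho/rho0 - 1."""
    ignition = I * (1.0 - lam) ** b * max(mu - a, 0.0) ** x
    growth = G * (1.0 - lam) ** c * lam ** d * p ** y
    return ignition + growth

# Forward-Euler integration of lambda(t) at constant p and mu
lam, dt = 1e-6, 1e-5
history = [lam]
for _ in range(2000):
    lam = min(1.0, lam + dt * sig_rate(lam, p=5.0, mu=0.3))
    history.append(lam)
```

    The qualitative behaviour is the point: a slow compression-driven ignition phase followed by pressure-driven growth that drives λ toward complete reaction.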

  7. Characterization of ambient particles size in workplace of manufacturing physical fitness equipments

    PubMed Central

    LIN, Chih-Chung; CHEN, Mei-Ru; CHANG, Sheng-Lang; LIAO, Wei-Heng; CHEN, Hsiu-Ling

    2014-01-01

    The manufacturing of fitness equipment involves several processes, including the cutting and punching of iron tubes followed by welding. Welding operations produce hazardous gases and particulate matter, which can penetrate to the alveoli, resulting in adverse health effects. This study sought to verify the particle size distribution and exposure concentrations of atmospheric air samples in various work areas of a fitness equipment manufacturing facility. Observed particle concentrations are presented by area and in terms of relative magnitude: painting (15.58 mg/m3) > automatic welding (0.66 mg/m3) > manual welding (0.53 mg/m3) > punching (0.18 mg/m3) > cutting (0.16 mg/m3). The concentrations in each of the five work areas followed the order Cinh>Cthor>Cresp. In all areas except the painting area, extra-fine particles were produced by welding at high temperatures, and these subsequently coagulated to form larger particles. This study observed a bimodal size distribution of welding fumes, with modes in the ranges of 0.7–1 µm and 15–21 µm. Meanwhile, the mass concentrations of particles of different sizes were not consistent across work areas. In the painting area, the mass concentrations followed the order Chead>Cth>Calv, but in the welding areas, Calv>Chead>Cth. Particles smaller than 1 µm were primarily produced by welding. PMID:25327301
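
    The inhalable/thoracic/respirable ordering reported above follows from the health-related sampling conventions, which weight an aerosol by penetration depth. A sketch of those conventions as standardized in ISO 7708 (the lognormal medians and GSDs below, 11.64 µm and 4.25 µm with GSD 1.5, are quoted from memory and should be treated as assumptions):

```python
import math

def _lognorm_cdf(d, median, gsd):
    """Cumulative lognormal evaluated at aerodynamic diameter d (um)."""
    return 0.5 * (1.0 + math.erf(math.log(d / median) /
                                 (math.sqrt(2.0) * math.log(gsd))))

def inhalable(d):
    """Inhalable convention SI(d) = 0.5*(1 + exp(-0.06 d)), d in um."""
    return 0.5 * (1.0 + math.exp(-0.06 * d))

def thoracic(d):
    return inhalable(d) * (1.0 - _lognorm_cdf(d, 11.64, 1.5))

def respirable(d):
    return inhalable(d) * (1.0 - _lognorm_cdf(d, 4.25, 1.5))

# Diameters spanning the two observed welding-fume modes (um)
fractions = {d: (inhalable(d), thoracic(d), respirable(d))
             for d in (0.85, 5.0, 18.0)}
```

    Sub-micron welding fume is almost entirely respirable, while the 15–21 µm mode barely penetrates past the thoracic region, which is consistent with the area-dependent Chead/Cth/Calv orderings reported.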

  8. The Voyager Anomaly and the GEM Theory

    NASA Astrophysics Data System (ADS)

    Brandenburg, J. E.

    For over a decade, the Pioneer Anomaly (PA) was an object of study and remains unresolved. Basically, it is a sunward constant acceleration of the spacecraft that appeared unambiguously after the satellites' passage beyond Saturn. It now appears possible that the PA acceleration is the appearance of a second, string-like, solution to the Einstein Equations, first discussed in the context of finite-mass charged particle potentials as part of the GEM theory. The exact solution to the metric equations is similar in form to the Schwarzschild solution but with a positive sign: g_rr = (1 + r_G/r)^-1, where r_G is a characteristic radius corresponding to the Schwarzschild radius. Adopting the approximation that for weak fields the metric becomes a Newtonian gravity potential, g_rr ≅ 1 - 2ϕ, a string potential form is obtained in the limit r << r_G: g_rr ≅ r/r_G (1 - r/r_G ...). For the choice r_G = cT_H, this produces an effective gravity acceleration a ≅ c/T_H = 8 x 10^-10 m/s^2, in agreement with observations. The "turn on" for this potential apparently occurs with the encounter with Jupiter, which raised the spacecraft to above escape velocity. The possible physical meaning of this second metric appearance is found to be a gravitational form of Lenz's law, where objects departing from gravity potentials experience a resistance that keeps them bound at long distances.
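
    The quoted acceleration scale can be checked by dividing the speed of light by a Hubble time; a back-of-envelope sketch (the Hubble time adopted here, ~13.8 Gyr, is an assumption):

```python
c = 2.998e8                          # speed of light, m/s
T_H = 13.8e9 * 365.25 * 24 * 3600    # assumed Hubble time (~13.8 Gyr), in s
a = c / T_H                          # characteristic acceleration, m/s^2
```

    This gives roughly 7 x 10^-10 m/s^2, the same order as the 8 x 10^-10 m/s^2 quoted in the abstract; the exact value depends on the Hubble time adopted.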

  9. Genetic engineering of Clostridium thermocellum DSM1313 for enhanced ethanol production.

    PubMed

    Kannuchamy, Saranyah; Mukund, Nisha; Saleena, Lilly M

    2016-05-11

    The twin problems of shortage of fossil fuel and increase in environmental pollution can be partly addressed by blending ethanol with transport fuel. Increasing ethanol production for this purpose without affecting the food security of the countries would require the use of cellulosic plant materials as substrate. Clostridium thermocellum is an anaerobic thermophilic bacterium with cellulolytic properties and the ability to produce ethanol. However, its application as a biocatalyst for ethanol production is limited because pyruvate ferredoxin oxidoreductase, which diverts pyruvate to the ethanol production pathway, has low affinity for its substrate. Therefore, the present study was undertaken to genetically modify C. thermocellum to enhance its ethanol production capacity by transferring the pyruvate decarboxylase (pdc) and alcohol dehydrogenase (adh) genes of the homoethanol pathway from Zymomonas mobilis. The pdc and adh genes from Z. mobilis were cloned in pNW33N and transformed into Clostridium thermocellum DSM 1313 by electroporation to generate recombinant CTH-pdc, CTH-adh and CTH-pdc-adh strains that carried heterologous pdc, adh, and both genes, respectively. The plasmids were stably maintained in the recombinant strains. Though both pdc and adh were functional in C. thermocellum, the presence of adh severely limited the growth of the recombinant strains, irrespective of the presence or absence of the pdc gene. The recombinant CTH-pdc strain showed a two-fold increase in pyruvate decarboxylase activity and ethanol production when compared with the wild-type strain. The pyruvate decarboxylase gene of the homoethanol pathway from Z. mobilis was functional in the recombinant C. thermocellum strain and enhanced its ability to produce ethanol. Strain improvement and bioprocess optimization may further increase ethanol production from this recombinant strain.

  10. The Effects of Capillary Transit Time Heterogeneity (CTH) on the Cerebral Uptake of Glucose and Glucose Analogs: Application to FDG and Comparison to Oxygen Uptake

    PubMed Central

    Angleys, Hugo; Jespersen, Sune N.; Østergaard, Leif

    2016-01-01

    Glucose is the brain's principal source of ATP, but the extent to which cerebral glucose consumption (CMRglc) is coupled with its oxygen consumption (CMRO2) remains unclear. Measurements of the brain's oxygen-glucose index OGI = CMRO2/CMRglc suggest that its oxygen uptake largely suffices for oxidative phosphorylation. Nevertheless, during functional activation and in some disease states, brain tissue seemingly produces lactate although cerebral blood flow (CBF) delivers sufficient oxygen, so-called aerobic glycolysis. OGI measurements, in turn, are method-dependent in that estimates based on glucose analog uptake depend on the so-called lumped constant (LC) to arrive at CMRglc. Capillary transit time heterogeneity (CTH), which is believed to change during functional activation and in some disease states, affects the extraction efficacy of oxygen from blood. We developed a three-compartment model of glucose extraction to examine whether CTH also affects glucose extraction into brain tissue. We then combined this model with our previous model of oxygen extraction to examine whether differential glucose and oxygen extraction might favor non-oxidative glucose metabolism under certain conditions. Our model predicts that glucose uptake is largely unaffected by changes in its plasma concentration, while changes in CBF and CTH affect glucose and oxygen uptake to different extents. Accordingly, functional hyperemia facilitates glucose uptake more than oxygen uptake, favoring aerobic glycolysis during enhanced energy demands. Applying our model to glucose analogs, we observe that LC depends on physiological state, with a risk of overestimating relative increases in CMRglc during functional activation by as much as 50%. PMID:27790110

  11. Transverse isotropic modeling of the ballistic response of glass reinforced plastic composites

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Taylor, P.A.

    1997-12-31

    The use of glass reinforced plastic (GRP) composites is gaining significant attention in the DoD community for armor applications. These materials typically possess a laminate structure consisting of up to 100 plies, each of which is constructed of a glass woven roving fabric that reinforces a plastic matrix material. Current DoD attention is focused on a high-strength, S-2 glass cross-weave (0/90) fabric reinforcing a polyester matrix material that forms each ply of a laminate structure consisting of anywhere from 20 to 70 plies. The resulting structure displays a material anisotropy that is, to a reasonable approximation, transversely isotropic. When subjected to impact and penetration by a metal fragment projectile, the GRP displays damage and failure in an anisotropic manner due to various mechanisms such as matrix cracking, fiber fracture and pull-out, and fiber-matrix debonding. In this presentation, the author describes the modeling effort to simulate the ballistic response of the GRP material described above using the transversely isotropic (TI) constitutive model implemented in the shock physics code CTH. The results of this effort suggest that the model is able to describe the delamination behavior of the material, but has some difficulty capturing the in-plane (i.e., transverse) response of the laminate due to its cross-weave fabric reinforcement pattern, which causes a departure from transverse isotropy.

  12. Stress and temperature distributions of individual particles in a shock wave propagating through dry and wet sand mixtures

    NASA Astrophysics Data System (ADS)

    Schumaker, Merit G.; Kennedy, Gregory; Thadhani, Naresh; Hankin, Markos; Stewart, Sarah T.; Borg, John P.

    2017-01-01

    Determining the stress and temperature distributions of dynamically compacted particles is of interest to the geophysical and astrophysical research communities. However, the researcher cannot easily observe particle interactions during a planar shock experiment. By using mesoscale simulations, we can unravel granular particle interactions. Unlike for homogeneous materials, the averaged Hugoniot state for heterogeneous granular materials differs from the individual stress and temperature states of particles during a shock event. From planar shock experiments on dry and water-saturated Oklahoma sand, we constructed simulations using the Sandia National Laboratories code CTH and then compared the simulated results to the experimental results. This document compares and presents stress and temperature distributions from the simulations, with a discussion of the difference between Hugoniot measurements and distribution peaks for dry and water-saturated sand.

  13. PSI-Center Validation Studies

    NASA Astrophysics Data System (ADS)

    Nelson, B. A.; Akcay, C.; Glasser, A. H.; Hansen, C. J.; Jarboe, T. R.; Marklin, G. J.; Milroy, R. D.; Morgan, K. D.; Norgaard, P. C.; Shumlak, U.; Sutherland, D. A.; Victor, B. S.; Sovinec, C. R.; O'Bryan, J. B.; Held, E. D.; Ji, J.-Y.; Lukin, V. S.

    2014-10-01

    The Plasma Science and Innovation Center (PSI-Center - http://www.psicenter.org) supports collaborating validation platform experiments with 3D extended MHD simulations using the NIMROD, HiFi, and PSI-TET codes. Collaborators include the Bellan Plasma Group (Caltech), CTH (Auburn U), HBT-EP (Columbia), HIT-SI (U Wash-UW), LTX (PPPL), MAST (Culham), Pegasus (U Wisc-Madison), SSX (Swarthmore College), TCSU (UW), and ZaP/ZaP-HD (UW). The PSI-Center is exploring the application of validation metrics between experimental data and simulation results. Biorthogonal decomposition (BOD) is used to compare experiments with simulations. BOD separates data sets into spatial and temporal structures, giving greater weight to dominant structures. Several BOD metrics are being formulated with the goal of quantitative validation. Results from these simulation and validation studies, as well as an overview of the PSI-Center status, will be presented.
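
    Biorthogonal decomposition of a space-time data matrix is, in practice, an SVD: the singular vectors give the spatial ("topos") and temporal ("chronos") structures, and the singular values weight the dominant ones. A minimal numpy sketch on synthetic data (not PSI-Center data):

```python
import numpy as np

rng = np.random.default_rng(0)
nx, nt = 64, 200
x = np.linspace(0, 2 * np.pi, nx)
t = np.linspace(0, 1, nt)
# Synthetic measurement: two space-time coherent structures plus weak noise
data = (2.0 * np.outer(np.sin(x), np.cos(2 * np.pi * 3 * t))
        + 0.5 * np.outer(np.sin(2 * x), np.sin(2 * np.pi * 7 * t))
        + 0.01 * rng.standard_normal((nx, nt)))

# BOD: spatial modes in U, temporal modes in Vt, weights in s
U, s, Vt = np.linalg.svd(data, full_matrices=False)

# Fraction of signal energy captured by the leading modes
energy = np.cumsum(s**2) / np.sum(s**2)
```

    Comparing experiment and simulation then reduces to comparing the leading topos/chronos pairs and their relative weights, rather than raw time traces.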

  14. Comparing CTH simulations and experiments on explosively loaded rings

    NASA Astrophysics Data System (ADS)

    Braithwaite, C. H.; Aydelotte, Brady; Collins, Adam; Thadhani, Naresh; Williamson, David Martin

    2012-03-01

    A series of experiments were conducted on explosively loaded metallic rings for the purpose of studying fragmentation. In addition to the collection of fragments for analysis, the radial velocity of the expanding ring was measured with photon Doppler velocimetry (PDV) and the arrangement was imaged using high speed photography. Both the ring material and the material used as the explosive container were altered and the results compared with simulations performed in CTH. Good agreement was found between the simulations and the experiments. The maximum radial velocity attained was approximately 380 m/s, which was achieved through loading with a 5g PETN based charge.
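
    The terminal velocity of an explosively driven ring is commonly estimated with the Gurney cylinder formula, v = sqrt(2E) * (M/C + 1/2)^(-1/2). A sketch using a typical handbook Gurney velocity for PETN-based explosives (~2.93 km/s); the metal-to-charge mass ratios are purely illustrative, since the experimental M/C is not reported in the abstract:

```python
import math

def gurney_cylinder(v_gurney, m_over_c):
    """Gurney estimate of case/ring velocity for a cylindrical charge.
    v_gurney: Gurney velocity sqrt(2E) in m/s; m_over_c: metal/charge mass ratio."""
    return v_gurney / math.sqrt(m_over_c + 0.5)

SQRT_2E_PETN = 2930.0  # m/s, typical handbook value for PETN-based explosives

# Illustrative mass ratios (hypothetical)
velocities = {mc: gurney_cylinder(SQRT_2E_PETN, mc) for mc in (10.0, 30.0, 60.0)}
```

    For a heavy ring with M/C around 60 this gives roughly 380 m/s, the order of the maximum radial velocity measured here, illustrating why a small 5 g charge drives the ring to only a few hundred m/s.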

  15. Gaming as a Therapeutic Tool in Adolescence. Experience of Institutional Therapy of CThA, UCL, Brussels, Belgium.

    PubMed

    Descamps, Guillaume; d'Alcantara, Ann

    2016-09-01

    This work presents the experience of emancipatory action research led at the Therapeutic Center for Adolescents (CThA) at Saint Luc's Clinics (UCL). The research focuses on the effects, in practice, of the "Pixels" and "Passerelle" workshops at the CThA, concerning the use of video games as a therapeutic tool that mobilizes the adolescent's symptomatology. "Pixels" workshops use play in three specific forms: the paper role-playing game, the video game, and the card game. Their specificity is that the participating adult shows a regressive ability strong enough to play with teenagers, while remaining very careful not to interpret what takes place within the game. "Passerelle" workshops demonstrate the link between the teenager's mind and the use of his or her own virtual avatar. They allow the teenager to evolve from "playing together" to "talking together", a moment of symbolization and of standing back from his or her own recreational activities. The discussion is illustrated by the clinical case of Karl, recovering from depression and dependency: this setting for speech allowed him to recover momentum and to reconnect emotionally.

  16. Power transfer systems for future navy helicopters. Final report 25 Jun 70--28 Jun 72

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bossler, R.B. Jr.

    1972-11-01

    The purpose of this program was to conduct an analysis of helicopter power transfer systems (pts), both conventional and advanced-concept types, with the objective of reducing specific weights and improving reliability beyond present values. The analysis satisfied requirements specified for a 200,000-pound cargo transport helicopter (CTH), a 70,000-pound heavy assault helicopter, and a 15,000-pound non-combat search and rescue helicopter. Four selected gearing systems (out of seven studied), optimized for lightest weight and equal reliability for the CTH using component proportioning via stress and stiffness equations, showed no significant difference between their aircraft payloads. All optimized pts were approximately 70% of the statistically predicted weight. Reliability increase is predicted via gearbox derating using Weibull relationships. Among advanced concepts, the Turbine Integrated Geared Rotor was competitive for weight, technology availability and reliability increase, but handicapped by a special engine requirement. The warm cycle system was found not competitive. Helicopter parametric weight analysis is shown. Advanced development plans are presented for the pts for the CTH, including the total pts system, selected pts components, and scale model flight testing in a Kaman HH2 helicopter.

  17. The E. coli thioredoxin folding mechanism: the key role of the C-terminal helix.

    PubMed

    Vazquez, Diego S; Sánchez, Ignacio E; Garrote, Ana; Sica, Mauricio P; Santos, Javier

    2015-02-01

    In this work, the unfolding mechanism of oxidized Escherichia coli thioredoxin (EcTRX) was investigated experimentally and computationally. We characterized seven point mutants distributed along the C-terminal α-helix (CTH) and the preceding loop. The mutations destabilized the protein against global unfolding while leaving the native structure unchanged. Global analysis of the unfolding kinetics of all variants revealed a linear unfolding route with a high-energy on-pathway intermediate state flanked by two transition state ensembles TSE1 and TSE2. The experiments show that CTH is mainly unfolded in TSE1 and the intermediate and becomes structured in TSE2. Structure-based molecular dynamics are in agreement with these experiments and provide protein-wide structural information on transient states. In our model, EcTRX folding starts with structure formation in the β-sheet, while the protein helices coalesce later. As a whole, our results indicate that the CTH is a critical module in the folding process, restraining a heterogeneous intermediate ensemble into a biologically active native state and providing the native protein with thermodynamic and kinetic stability. Copyright © 2014 Elsevier B.V. All rights reserved.

  18. High resolution simulations of energy absorption in dynamically loaded cellular structures

    NASA Astrophysics Data System (ADS)

    Winter, R. E.; Cotton, M.; Harris, E. J.; Eakins, D. E.; McShane, G.

    2017-03-01

    Cellular materials have potential application as absorbers of energy generated by high velocity impact. CTH, a Sandia National Laboratories Code which allows very severe strains to be simulated, has been used to perform very high resolution simulations showing the dynamic crushing of a series of two-dimensional, stainless steel metal structures with varying architectures. The structures are positioned to provide a cushion between a solid stainless steel flyer plate with velocities ranging from 300 to 900 m/s, and an initially stationary stainless steel target. Each of the alternative architectures under consideration was formed by an array of identical cells each of which had a constant volume and a constant density. The resolution of the simulations was maximised by choosing a configuration in which one-dimensional conditions persisted for the full period over which the specimen densified, a condition which is most readily met by impacting high density specimens at high velocity. It was found that the total plastic flow and, therefore, the irreversible energy dissipated in the fully densified energy absorbing cell, increase (a) as the structure becomes more rodlike and less platelike and (b) as the impact velocity increases. Sequential CTH images of the deformation processes show that the flow of the cell material may be broadly divided into macroscopic flow perpendicular to the compression direction and jetting-type processes (microkinetic flow) which tend to predominate in rod and rodlike configurations and also tend to play an increasing role at increased strain rates. A very simple analysis of a configuration in which a solid flyer impacts a solid target provides a baseline against which to compare and explain features seen in the simulations. The work provides a basis for the development of energy absorbing structures for application in the 200-1000 m/s impact regime.
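
    The baseline solid-flyer-on-solid-target analysis mentioned above reduces, for a symmetric impact, to the interface moving at half the flyer velocity, with the shock stress following the momentum jump condition sigma = rho0 * u_p * (c0 + s * u_p). A sketch using nominal 304 stainless steel Hugoniot constants; treat the parameter values as assumptions, not the paper's inputs:

```python
RHO0, C0, S = 7900.0, 4570.0, 1.49   # nominal 304 stainless Hugoniot (assumed)

def symmetric_impact_stress(v_flyer):
    """Stress (Pa) behind the shock for a symmetric steel-on-steel impact.
    Symmetry puts the interface (particle) velocity at half the flyer velocity."""
    u_p = 0.5 * v_flyer
    u_s = C0 + S * u_p                # linear Us-up Hugoniot
    return RHO0 * u_s * u_p           # momentum jump condition

stresses = {v: symmetric_impact_stress(v) for v in (300.0, 600.0, 900.0)}
```

    Across the 300-900 m/s range studied, this baseline stress rises from roughly 6 GPa to roughly 19 GPa, which is the reference against which the cellular-structure simulations can be compared.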

  19. Simulation of blast-induced early-time intracranial wave physics leading to traumatic brain injury.

    PubMed

    Taylor, Paul A; Ford, Corey C

    2009-06-01

    The objective of this modeling and simulation study was to establish the role of stress wave interactions in the genesis of traumatic brain injury (TBI) from exposure to explosive blast. A high resolution (1 mm³ voxels) five-material model of the human head was created by segmentation of color cryosections from the Visible Human Female data set. Tissue material properties were assigned from literature values. The model was inserted into the shock physics wave code, CTH, and subjected to a simulated blast wave of 1.3 MPa (13 bars) peak pressure from anterior, posterior, and lateral directions. Three-dimensional plots of maximum pressure, volumetric tension, and deviatoric (shear) stress demonstrated significant differences related to the incident blast geometry. In particular, the calculations revealed focal brain regions of elevated pressure and deviatoric stress within the first 2 ms of blast exposure. Calculated maximum levels of 15 kPa deviatoric stress, 3.3 MPa pressure, and 0.8 MPa volumetric tension were observed before the onset of significant head accelerations. Over a 2 ms time course, the head model moved only 1 mm in response to the blast loading. Doubling the blast strength changed the resulting intracranial stress magnitudes but not their distribution. We conclude that stress localization, due to early-time wave interactions, may contribute to the development of multifocal axonal injury underlying TBI. We propose that a contribution to traumatic brain injury from blast exposure, and most likely blunt impact, can occur on a time scale shorter than previous model predictions and before the onset of linear or rotational accelerations traditionally associated with the development of TBI.

  20. Modeling and simulation of blast-induced, early-time intracranial wave physics leading to traumatic brain injury.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ford, Corey C.; Taylor, Paul Allen

    The objective of this modeling and simulation study was to establish the role of stress wave interactions in the genesis of traumatic brain injury (TBI) from exposure to explosive blast. A high resolution (1 mm³ voxels), 5-material model of the human head was created by segmentation of color cryosections from the Visible Human Female dataset. Tissue material properties were assigned from literature values. The model was inserted into the shock physics wave code, CTH, and subjected to a simulated blast wave of 1.3 MPa (13 bars) peak pressure from anterior, posterior and lateral directions. Three dimensional plots of maximum pressure, volumetric tension, and deviatoric (shear) stress demonstrated significant differences related to the incident blast geometry. In particular, the calculations revealed focal brain regions of elevated pressure and deviatoric (shear) stress within the first 2 milliseconds of blast exposure. Calculated maximum levels of 15 kPa deviatoric stress, 3.3 MPa pressure, and 0.8 MPa volumetric tension were observed before the onset of significant head accelerations. Over a 2 msec time course, the head model moved only 1 mm in response to the blast loading. Doubling the blast strength changed the resulting intracranial stress magnitudes but not their distribution. We conclude that stress localization, due to early time wave interactions, may contribute to the development of multifocal axonal injury underlying TBI. We propose that a contribution to traumatic brain injury from blast exposure, and most likely blunt impact, can occur on a time scale shorter than previous model predictions and before the onset of linear or rotational accelerations traditionally associated with the development of TBI.

  1. Coupling Between CTH and LS-DYNA for Thermal Postprocessing: Application to Propellant Cookoff From a Residual Penetrator

    DTIC Science & Technology

    2006-09-01

    ∂θ1/∂t = α1∇²θ1; x ∈ Ω1, t > 0 (7) and ∂θ2/∂t = α2∇²θ2; x ∈ Ω2, t > 0, (8) in which α1 and α2 are the thermal diffusivities of steel and M30A1, respectively. These are defined by α1 = κ1/(ρ1 c_p1) (9) and α2 = κ2/(ρ2 c_p2) (10). Here, κ1 and κ2 are the thermal conductivities... Coupling Between CTH and LS-DYNA for Thermal Postprocessing: Application to Propellant Cookoff From a Residual Penetrator by Martin N
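
    Equations (7) and (8) are standard heat equations; a minimal 1-D explicit (FTCS) sketch for a single material, with an illustrative diffusivity rather than the report's steel/M30A1 values:

```python
import numpy as np

alpha = 1.2e-5               # thermal diffusivity, m^2/s (illustrative, ~steel)
L, nx = 0.01, 51             # 1 cm bar, grid points
dx = L / (nx - 1)
dt = 0.4 * dx * dx / alpha   # satisfies FTCS stability: alpha*dt/dx^2 <= 1/2

theta = np.zeros(nx)
theta[nx // 2] = 1000.0      # hot spot in the middle, deg C
theta0_max = theta.max()

for _ in range(500):
    lap = (theta[:-2] - 2 * theta[1:-1] + theta[2:]) / (dx * dx)
    theta[1:-1] += dt * alpha * lap   # d(theta)/dt = alpha * laplacian(theta)
    theta[0] = theta[-1] = 0.0        # fixed-temperature boundaries
```

    The same time-marching idea, with one diffusivity per material domain and continuity conditions at the interface, underlies the two-material postprocessing described in the report.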

  2. Diversity of layer 5 projection neurons in the mouse motor cortex

    PubMed Central

    Oswald, Manfred J.; Tantirigama, Malinda L. S.; Sonntag, Ivo; Hughes, Stephanie M.; Empson, Ruth M.

    2013-01-01

    In the primary motor cortex (M1), layer 5 projection neurons signal directly to distant motor structures to drive movement. Despite their pivotal position and acknowledged diversity these neurons are traditionally separated into broad commissural and corticofugal types, and until now no attempt has been made at resolving the basis for their diversity. We therefore probed the electrophysiological and morphological properties of retrogradely labeled M1 corticospinal (CSp), corticothalamic (CTh), and commissural projecting corticostriatal (CStr) and corticocortical (CC) neurons. An unsupervised cluster analysis established at least four phenotypes with additional differences between lumbar and cervical projecting CSp neurons. Distinguishing parameters included the action potential (AP) waveform, firing behavior, the hyperpolarisation-activated sag potential, sublayer position, and soma and dendrite size. CTh neurons differed from CSp neurons in showing spike frequency acceleration and a greater sag potential. CStr neurons had the lowest AP amplitude and maximum rise rate of all neurons. Temperature influenced spike train behavior in corticofugal neurons. At 26°C CTh neurons fired bursts of APs more often than CSp neurons, but at 36°C both groups fired regular APs. Our findings provide reliable phenotypic fingerprints to identify distinct M1 projection neuron classes as a tool to understand their unique contributions to motor function. PMID:24137110

  3. Balancing public health, trade and intellectual monopoly privileges: recent Australian IP legislation and the TPPA.

    PubMed

    Vines, Tim; Crow, Kim; Faunce, Thomas

    2012-12-01

    Over the past year, several significant reforms to Australia's intellectual property regime have been proposed and passed by Parliament. The Intellectual Property Laws Amendment (Raising the Bar) Act 2012 (Cth) made various improvements to Australian patent law, including an improved threshold for patentability, greater clarity around "usefulness" requirements, and the introduction of an experimental use exemption from infringement. Another Bill, the Intellectual Property Laws Amendment Bill 2012 (Cth), currently out for public consultation, would implement a 2003 decision of the World Trade Organisation (WTO) General Council and the 2005 Doha Declaration on the TRIPS Agreement and Public Health (Doha Declaration). If enacted, this Bill would facilitate equitable access to essential medicines by amending the compulsory licensing regime set out in the Patents Act 1990 (Cth). The underlying intention of this Bill--meeting public health goals outlined in the 2005 Doha Declaration--stands in juxtaposition to proposed reforms to intellectual property standards pursuant to the Trans-Pacific Partnership Trade and Investment Agreement (TPPA) that Australia is involved in. Although at a preliminary stage, leaked drafts of relevant intellectual property provisions in the TPPA suggest a privileging of patent monopoly privileges over public health goals. This column weighs the sentiments of the proposed Bill against those of the proposed provisions in the TPPA.

  4. Analysis of superimposed ultrasonic guided waves in long bones by the joint approximate diagonalization of eigen-matrices algorithm.

    PubMed

    Song, Xiaojun; Ta, Dean; Wang, Weiqi

    2011-10-01

    The parameters of ultrasonic guided waves (GWs) are very sensitive to mechanical and structural changes in long cortical bones. However, it is a challenge to obtain the group velocity and other parameters of GWs because of the presence of mixed multiple modes. This paper proposes a blind identification algorithm using the joint approximate diagonalization of eigen-matrices (JADE) and applies it to the separation of superimposed GWs in long bones. For the simulation case, the velocity of the single mode was calculated after separation. A strong agreement was obtained between the estimated velocity and the theoretical expectation. For the experiments in bovine long bones, by using the calculated velocity and a theoretical model, the cortical thickness (CTh) was obtained. For comparison with the JADE approach, an adaptive Gaussian chirplet time-frequency (ACGTF) method was also used to estimate the CTh. The results showed that the mean error of the CTh acquired by the JADE approach was 4.3%, which was smaller than that of the ACGTF method (13.6%). This suggested that the JADE algorithm may be used to separate the superimposed GWs and that the JADE algorithm could potentially be used to evaluate long bones. Copyright © 2011 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved.
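
    JADE itself jointly diagonalizes fourth-order cumulant matrices. As a compact stand-in, the sketch below separates two synthetic superimposed signals by whitening followed by eigendecomposition of a time-lagged covariance (the AMUSE algorithm), which illustrates the same diagonalization idea using second-order statistics; the signals are synthetic, not bone guided-wave data:

```python
import numpy as np

n = 4000
t = np.arange(n)
# Two synthetic "modes" with different temporal structure
s1 = np.sin(2 * np.pi * 0.013 * t)
s2 = np.sign(np.sin(2 * np.pi * 0.031 * t))
S = np.vstack([s1, s2])
A = np.array([[1.0, 0.7], [0.6, 1.0]])    # mixing matrix
X = A @ S                                  # two superimposed observations

# Whiten the observations
X = X - X.mean(axis=1, keepdims=True)
cov = X @ X.T / n
d, E = np.linalg.eigh(cov)
Z = (E @ np.diag(d ** -0.5) @ E.T) @ X

# AMUSE: eigendecomposition of a symmetrized time-lagged covariance
tau = 5
C_tau = Z[:, :-tau] @ Z[:, tau:].T / (n - tau)
_, V = np.linalg.eigh(0.5 * (C_tau + C_tau.T))
Y = V.T @ Z                                # recovered sources (up to sign/order)

def _abs_corr(a, b):
    a = a - a.mean(); b = b - b.mean()
    return abs(float(a @ b)) / (np.linalg.norm(a) * np.linalg.norm(b))

# Match each recovered component to its best source by |correlation|
match = [max(_abs_corr(y, s) for s in S) for y in Y]
```

    Separation succeeds here because the two sources have distinct lagged autocorrelations; JADE relaxes that requirement by exploiting non-Gaussianity instead.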

  5. Impact Cratering Calculations

    NASA Technical Reports Server (NTRS)

    Ahrens, Thomas J.

    2001-01-01

    We examined the von Mises and Mohr-Coulomb strength models, with and without damage effects, and developed a model for dilatancy. The models and results are given in O'Keefe et al. We found that by incorporating damage into the models we could, in a single integrated impact calculation starting with the bolide in the atmosphere, produce final crater profiles having the major features found in the field measurements. These features included a central uplift, an inner ring, circular terracing and faulting. This was accomplished with undamaged surface strengths of approximately 0.1 GPa and at-depth strengths of approximately 1.0 GPa. We modeled the damage in geologic materials using a phenomenological approach, which coupled the Johnson-Cook damage model with the CTH code geologic strength model. The objective here was not to determine the distribution of fragment sizes, but rather to determine the effect of brecciated and comminuted material on the crater evolution, fault production, ejecta distribution, and final crater morphology.
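
    The Johnson-Cook damage model referenced above accumulates damage as D = sum(d_eps_p / eps_f), with failure strain eps_f = [D1 + D2*exp(D3*sigma*)] * [1 + D4*ln(eps_rate*)] * [1 + D5*T*]. A sketch with constants of the order of published 4340-steel fits; treat them as illustrative, not the study's calibration:

```python
import math

def failure_strain(sigma_star, eps_rate_star=1.0, T_star=0.0,
                   D1=0.05, D2=3.44, D3=-2.12, D4=0.002, D5=0.61):
    """Johnson-Cook failure strain vs. stress triaxiality sigma_star.
    Constants are illustrative (order of published 4340-steel values)."""
    return ((D1 + D2 * math.exp(D3 * sigma_star))
            * (1.0 + D4 * math.log(max(eps_rate_star, 1e-12)))
            * (1.0 + D5 * T_star))

# Accumulate damage over plastic-strain increments along a load history
damage, d_eps = 0.0, 0.01
triaxialities = [0.3, 0.6, 1.0, 1.5] * 25   # illustrative history
for sigma_star in triaxialities:
    damage += d_eps / failure_strain(sigma_star)
failed = damage >= 1.0
```

    The key behaviour for cratering is visible in the failure-strain function itself: ductility drops sharply as triaxial tension rises, so damaged (brecciated) zones localize where the stress state is most tensile.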

  6. Shock wave interaction with L-shaped structures

    NASA Astrophysics Data System (ADS)

    Miller, Richard C.

    1993-12-01

    This study investigated the interaction of shock waves with L-shaped structures using the CTH hydrodynamics code developed by Sandia National Laboratories. Computer models of shock waves traveling through air were developed using techniques similar to shock tube experiments. Models of L-shaped buildings were used to determine overpressures achieved by the reflecting shock versus angle of incidence of the shock front. An L-shaped building model rotated 45 degrees to the planar shock front produced the highest reflected overpressure of 9.73 atmospheres in the corner joining the two wings, a value 9.5 times the incident overpressure of 1.02 atmospheres. The same L-shaped building was modeled with the two wings separated by 4.24 meters to simulate an open courtyard. This open area provided a relief path for the incident shock wave, creating a peak overpressure of only 4.86 atmospheres on the building's wall surfaces from the same 1.02 atmosphere overpressure incident shock wave.
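
    The corner overpressure can be put in context with the ideal-gas (gamma = 1.4) normal-reflection formula, delta_p_r = 2*delta_p_s*(7*p0 + 4*delta_p_s)/(7*p0 + delta_p_s); a quick sketch applied to the incident overpressure above:

```python
def reflected_overpressure(dp_s, p0=1.0):
    """Normally reflected overpressure for an ideal gas with gamma = 1.4.
    dp_s: incident (side-on) overpressure, same units as p0 (here atm)."""
    return 2.0 * dp_s * (7.0 * p0 + 4.0 * dp_s) / (7.0 * p0 + dp_s)

dp_r = reflected_overpressure(1.02)   # atm, for the 1.02 atm incident shock
```

    Normal reflection off a flat wall alone gives roughly 2.8 atm for a 1.02 atm incident overpressure; the 9.73 atm computed in the corner shows the large additional amplification from the re-entrant geometry's confinement.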

  7. PSI-Center Simulations of Validation Platform Experiments

    NASA Astrophysics Data System (ADS)

    Nelson, B. A.; Akcay, C.; Glasser, A. H.; Hansen, C. J.; Jarboe, T. R.; Marklin, G. J.; Milroy, R. D.; Morgan, K. D.; Norgaard, P. C.; Shumlak, U.; Victor, B. S.; Sovinec, C. R.; O'Bryan, J. B.; Held, E. D.; Ji, J.-Y.; Lukin, V. S.

    2013-10-01

    The Plasma Science and Innovation Center (PSI-Center - http://www.psicenter.org) supports collaborating validation platform experiments with extended MHD simulations. Collaborators include the Bellan Plasma Group (Caltech), CTH (Auburn U), FRX-L (Los Alamos National Laboratory), HIT-SI (U Wash - UW), LTX (PPPL), MAST (Culham), Pegasus (U Wisc-Madison), PHD/ELF (UW/MSNW), SSX (Swarthmore College), TCSU (UW), and ZaP/ZaP-HD (UW). Modifications have been made to the NIMROD, HiFi, and PSI-Tet codes to specifically model these experiments, including mesh generation/refinement, non-local closures, appropriate boundary conditions (external fields, insulating BCs, etc.), and kinetic and neutral particle interactions. The PSI-Center is exploring application of validation metrics between experimental data and simulation results. Biorthogonal decomposition is proving to be a powerful method to compare global temporal and spatial structures for validation. Results from these simulation and validation studies, as well as an overview of the PSI-Center status, will be presented.

  8. Freedom of information applications as an "evergreening" tactic: Secretary, Department of Health and Ageing v iNOVA Pharmaceuticals (Australia) Pty Ltd (2010) 191 FCR 573; [2010] FCA 1442.

    PubMed

    Vines, Tim; Faunce, Thomas

    2011-09-01

    A recent decision of the Federal Court of Australia illustrates how patent-holding pharmaceutical companies are attempting to use Australia's Freedom of Information Act 1982 (Cth) to force Australian safety, quality and efficacy regulators to disclose whether generic competitors are attempting to enter the market. In Secretary, Department of Health and Ageing v iNova Pharmaceuticals (Australia) Pty Ltd (2010) 191 FCR 573; [2010] FCA 1442 a single judge of the Federal Court overturned a decision of the Administrative Appeals Tribunal (AAT) that would have compelled the Australian Therapeutic Goods Administration (TGA) to reveal whether they were in possession of an application to register generic versions of two iNova products: imiquimod and phentermine. In its justification to the AAT for refusing to confirm or deny the existence of any application, the TGA argued that to reveal the existence of such a document would prejudice the proper administration of the National Health Act 1953 (Cth) as it could compromise the listing of a generic on the Pharmaceutical Benefits Scheme. The AAT failed to appreciate the extent to which this revelation to a competitor would have undercut 2004 amendments to the Therapeutic Goods Act 1989 (Cth) that provided penalties for evergreening tactics involving TGA notifications to drug patent-holders and 2006 amendments to the Patents Act 1990 (Cth) which protected the right of generic manufacturers to "springboard". The decision of the Federal Court is one of the first to explore the use of freedom of information legislation by patent-holders as a potential "evergreening" technique to prolong royalties by marginalising generic competition. Because of the significant amounts of money involved in ensuring rapid market entry of low-cost generic products, the issue has considerable public health significance.

  9. Fatigue in people with localized colorectal cancer who do and do not receive chemotherapy: a longitudinal prospective study

    PubMed Central

    Vardy, J. L.; Dhillon, H. M.; Pond, G. R.; Renton, C.; Dodd, A.; Zhang, H.; Clarke, S. J.; Tannock, I. F.

    2016-01-01

    Background Fatigue is associated with cancer and chemotherapy and may be sustained. Here, we describe a prospective longitudinal study evaluating fatigue and putative mechanisms in people with colorectal cancer (CRC). Patients and methods People with localized CRC completed the Functional Assessment of Cancer Treatment-Fatigue (FACT-F) questionnaire at baseline (before chemotherapy, if given), 6, 12, and 24 months. Healthy controls (HCs) were assessed at the first three time points. Fatigue was defined by standardized FACT-F scores ≤68/100. Quality-of-life (QoL, assessed by the FACT-G questionnaire), affective, and cognitive symptoms were evaluated. Associations were sought between fatigue, baseline factors, and blood tests (including hemoglobin, cytokines, and sex hormones). Regression analyses, Fisher's exact tests, and Wilcoxon rank-sum tests assessed levels of fatigue at each time point and change in fatigue from baseline. A repeated-measures analysis investigated prognostic factors of fatigue across all time points. Results A total of 289 subjects with localized CRC (173 received chemotherapy) and 72 HCs were assessed. More CRC patients had fatigue than HCs at baseline (52% versus 26%, P < 0.001). Fatigue was increased in the chemotherapy (CTh) group at 6 months [CTh+ 70% versus CTh− 31% (P < 0.001), HCs 22%] and remained more common at 12 [CTh+ 44% versus CTh− 31% (P = 0.079)] and 24 months [CTh+ 39% versus CTh− 24% (P = 0.047)]. There was no significant difference between those not receiving chemotherapy and HCs at follow-up assessments. Fatigue was associated with poor QoL, affective and cognitive symptoms, but not consistently with cytokine levels. Predictors for sustained fatigue were baseline fatigue, treatment group, cognitive and affective symptoms, poorer QoL, and comorbidities. Conclusions CRC patients have more fatigue than HCs at baseline. 
Fatigue peaks immediately after adjuvant chemotherapy, but remains common for 2 years in those who receive chemotherapy. Cognitive and affective symptoms, QoL, comorbidities, chemotherapy, and baseline fatigue predict for longer term fatigue. PMID:27443634

  10. The ethics of clinical research and the conduct of clinical drug trials: international comparisons and codes of conduct.

    PubMed

    Beran, R G

    2000-01-01

    Human research must respect the most rigorous ethical standards to protect both the investigators and the subjects. Codes of ethical practice relevant to such research are subject to review around the world, including in the European Union (EU), the Canadian Tri-Council Policy Statement (covering the Medical Research Council, the Natural Sciences and Engineering Research Council and the Social Sciences and Humanities Research Council), the Finnish Parliament Research Act (April 1999) and the National Statement on Ethical Conduct in Research Involving Humans in accordance with the NHMRC Act 1992 (Cth) from the National Health and Medical Research Council of Australia. The Australian Statement was endorsed by the Australian Vice-Chancellors' Committee, the Australian Research Council, the Australian Academy of the Humanities, the Australian Academy of Science and the Academy of Social Sciences in Australia and supported by the Academy of Technological Sciences and Engineering. This reflects the extensive ramifications of human experimentation and the range of stakeholders. Private organisations have also produced interpretations of minimum standards of good clinical practice. The paper that follows analyses approaches to human experimentation and the minimal ethical expectations in the conduct of such research.

  11. Science verification of operational aerosol and cloud products for TROPOMI on Sentinel-5 precursor

    NASA Astrophysics Data System (ADS)

    Lelli, Luca; Gimeno-Garcia, Sebastian; Sanders, Abram; Sneep, Maarten; Rozanov, Vladimir V.; Kokhanvosky, Alexander A.; Loyola, Diego; Burrows, John P.

    2016-04-01

    With the approaching launch of the Sentinel-5 precursor (S-5P) satellite, scheduled for mid 2016, one preparatory task of the L2 working group (composed of the Institute of Environmental Physics IUP Bremen, the Royal Netherlands Meteorological Institute KNMI De Bilt, and the German Aerospace Center DLR Oberpfaffenhofen) has been the assessment of biases among aerosol and cloud products that are to be inferred by the respective algorithms from measurements of the platform's payload, the TROPOspheric Monitoring Instrument (TROPOMI). The instrument will measure terrestrial radiance at varying moderate spectral resolutions from the ultraviolet through the shortwave infrared. Specifically, all the operational and verification algorithms involved in this comparison exploit the sensitivity of molecular oxygen absorption (the A-band, 755-775 nm, with a resolution of 0.54 nm) to changes in optical and geometrical parameters of tropospheric scattering layers. Therefore, aerosol layer height (ALH) and thickness (AOT), cloud top height (CTH), thickness (COT) and albedo (CA) are the targeted properties. First, the verification of these properties was accomplished upon synchronisation of the respective forward radiative transfer models for a variety of atmospheric scenarios. Then, biases against independent techniques were evaluated with real measurements of selected GOME-2 orbits. Global seasonal bias assessment has been carried out for CTH, CA and COT, whereas the verification of ALH and AOT is based on the analysis of the ash plume emitted by the Icelandic volcanic eruption Eyjafjallajökull in May 2010 and selected dust scenes off the Saharan west coast sensed by SCIAMACHY in 2009.

  12. Problems of Providing Joint Operation of Radio-Electronic Equipment (Selected Portions) (Problemy Obespecheniya Sovmestnoy Raboty Radioelektronnoy Apparatury)

    DTIC Science & Technology

    1989-03-17

    [The Cyrillic transliteration table in this scanned record is too garbled to reconstruct; only the trigonometric-function correspondences survive.]

    RUSSIAN AND ENGLISH TRIGONOMETRIC FUNCTIONS
    Russian  English    Russian  English    Russian    English
    sin      sin        sh       sinh       arc sh     sinh-1
    cos      cos        ch       cosh       arc ch     cosh-1
    tg       tan        th       tanh       arc th     tanh-1
    ctg      cot        cth      coth       arc cth    coth-1
    sec      sec        sch      sech       arc sch    sech-1
    cosec    csc        csch     csch       arc csch   csch-1

  13. Mesoscale Computational Investigation of Shocked Heterogeneous Materials with Application to Large Impact Craters

    NASA Technical Reports Server (NTRS)

    Crawford, D. A.; Barnouin-Jha, O. S.; Cintala, M. J.

    2003-01-01

    The propagation of shock waves through target materials is strongly influenced by the presence of small-scale structure, fractures, physical and chemical heterogeneities. Pre-existing fractures often create craters that appear square in outline (e.g. Meteor Crater). Reverberations behind the shock from the presence of physical heterogeneity have been proposed as a mechanism for transient weakening of target materials. Pre-existing fractures can also affect melt generation. In this study, we are attempting to bridge the gap in numerical modeling between the micro-scale and the continuum, the so-called meso-scale. To accomplish this, we are developing a methodology to be used in the shock physics hydrocode (CTH) using Monte-Carlo-type methods to investigate the shock properties of heterogeneous materials. By comparing the results of numerical experiments at the micro-scale with experimental results and by using statistical techniques to evaluate the performance of simple constitutive models, we hope to embed the effect of physical heterogeneity into the field variables (pressure, stress, density, velocity) allowing us to directly imprint the effects of micro-scale heterogeneity at the continuum level without incurring high computational cost.
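    The paper does not publish its mesoscale machinery, but the Monte-Carlo idea it describes can be illustrated with a toy sketch (all names hypothetical): generate many random two-material realizations of a grid and study statistics across the ensemble, as a stand-in for embedding heterogeneity statistics into continuum field variables.

```python
import random

def random_microstructure(nx, ny, fraction_b, seed=None):
    """One Monte-Carlo realization of a two-material grid: each cell is
    material B (1) with probability fraction_b, else material A (0)."""
    rng = random.Random(seed)
    return [[1 if rng.random() < fraction_b else 0 for _ in range(nx)]
            for _ in range(ny)]

def mean_fraction(realizations):
    """Ensemble-averaged material-B fraction over all cells."""
    cells = [c for grid in realizations for row in grid for c in row]
    return sum(cells) / len(cells)

# Fifty seeded realizations of a 32 x 32 patch with 30% inclusions;
# the ensemble mean should converge toward the nominal fraction.
grids = [random_microstructure(32, 32, 0.3, seed=k) for k in range(50)]
fraction = mean_fraction(grids)
```

    In an actual mesoscale study, each realization would be run through the hydrocode and the shock response statistics (pressure, velocity dispersion) would be compared across realizations, rather than just the volume fraction as here.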

  14. Numerical Simulation Of Cratering Effects In Adobe

    DTIC Science & Technology

    2013-07-01

    [Only table-of-contents fragments of this record survive: DEVELOPMENT OF MATERIAL PARAMETERS; PROBLEM SETUP; PARAMETER ADJUSTMENTS; GLOSSARY. Recoverable abstract fragment: "...dependent yield surface with the Geological Yield Surface (GEO) modeled in CTH using well characterized adobe. By identifying key parameters that..."]

  15. 2.097μ Cth:YAG flashlamp pumped high energy high efficiency laser operation (patent pending)

    NASA Astrophysics Data System (ADS)

    Bar-Joseph, Dan

    2018-02-01

    Flashlamp-pumped Cth:YAG lasers are mainly used in medical applications (urology). The main laser transition, at 2.13 μm, is quasi-three-level, with an emission cross-section of 7x10^-21 cm^2 and a ground-state absorption of approximately 5%/cm. Because of the relatively low absorption, combined with a modest emission cross-section, the laser requires high-reflectivity output coupling, and therefore high intra-cavity energy density, which limits the output to approximately 4 J/pulse for reliable operation. This paper describes a method of efficiently generating high output energy at low intra-cavity energy density by using an alternative 2.097 μm transition having an emission cross-section of 5x10^-21 cm^2 and a ground-level absorption of approximately 14%/cm.

  16. Structures of bacterial polynucleotide kinase in a Michaelis complex with GTP•Mg2+ and 5'-OH oligonucleotide and a product complex with GDP•Mg2+ and 5'-PO4 oligonucleotide reveal a mechanism of general acid-base catalysis and the determinants of phosphoacceptor recognition.

    PubMed

    Das, Ushati; Wang, Li Kai; Smith, Paul; Jacewicz, Agata; Shuman, Stewart

    2014-01-01

    Clostridium thermocellum polynucleotide kinase (CthPnk), the 5' end-healing module of a bacterial RNA repair system, catalyzes reversible phosphoryl transfer from an NTP donor to a 5'-OH polynucleotide acceptor. Here we report the crystal structures of CthPnk-D38N in a Michaelis complex with GTP•Mg(2+) and a 5'-OH oligonucleotide and a product complex with GDP•Mg(2+) and a 5'-PO4 oligonucleotide. The O5' nucleophile is situated 3.0 Å from the GTP γ phosphorus in the Michaelis complex, where it is coordinated by Asn38 and is apical to the bridging β phosphate oxygen of the GDP leaving group. In the product complex, the transferred phosphate has undergone stereochemical inversion and Asn38 coordinates the 5'-bridging phosphate oxygen of the oligonucleotide. The D38N enzyme is poised for catalysis, but cannot execute because it lacks Asp38-hereby implicated as the essential general base catalyst that abstracts a proton from the 5'-OH during the kinase reaction. Asp38 serves as a general acid catalyst during the 'reverse kinase' reaction by donating a proton to the O5' leaving group of the 5'-PO4 strand. The acceptor strand binding mode of CthPnk is distinct from that of bacteriophage T4 Pnk.

  17. Cloud Photogrammetry from Space

    NASA Astrophysics Data System (ADS)

    Zaksek, K.; Gerst, A.; von der Lieth, J.; Ganci, G.; Hort, M.

    2015-04-01

    The most commonly used method for satellite cloud top height (CTH) compares the brightness temperature of the cloud with the atmospheric temperature profile. Because of the uncertainties of this method, we propose a photogrammetric approach. As clouds can move with high velocities, even instruments with multiple cameras are not appropriate for accurate CTH estimation. Here we present two solutions. The first is based on the parallax between data retrieved from geostationary (SEVIRI, HRV band; 1000 m spatial resolution) and polar-orbiting satellites (MODIS, band 1; 250 m spatial resolution). The procedure works well if the data from both satellites are retrieved nearly simultaneously. However, MODIS does not retrieve the data at exactly the same time as SEVIRI. To compensate for advection in the atmosphere we use two sequential SEVIRI images (one before and one after the MODIS retrieval) and interpolate the cloud position from SEVIRI data to the time of the MODIS retrieval. CTH is then estimated by intersection of corresponding lines-of-view from MODIS and the interpolated SEVIRI data. The second method is based on NASA-program Crew Earth Observations from the International Space Station (ISS). The ISS has a lower orbit than most operational satellites, resulting in a shorter minimal time between two images, which is needed to produce a suitable parallax. In addition, images made by the ISS crew are taken by a full-frame sensor rather than the push-broom scanner that most operational satellites use. Such data also make it possible to observe the short-term evolution of clouds.
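    The geometry behind the parallax method can be sketched as follows. This is a flat-Earth, coplanar-view simplification with hypothetical function names, not the operational algorithm; the advection step is the linear time interpolation described above.

```python
import math

def interpolate_position(pos_before, pos_after, t_before, t_after, t):
    """Linearly interpolate a cloud's apparent position between two
    SEVIRI acquisition times to the MODIS acquisition time t."""
    w = (t - t_before) / (t_after - t_before)
    return tuple(a + w * (b - a) for a, b in zip(pos_before, pos_after))

def cloud_top_height(parallax_m, zenith1_deg, zenith2_deg):
    """Toy parallax-to-height conversion. Assumes the two lines of sight
    displace the cloud's ground-projected image in opposite directions,
    so the line-of-sight projections add."""
    return parallax_m / (math.tan(math.radians(zenith1_deg))
                         + math.tan(math.radians(zenith2_deg)))

# Two views at 45 deg zenith with a 20 km apparent displacement
# place the cloud top at 10 km in this idealized geometry.
h = cloud_top_height(20000.0, 45.0, 45.0)
```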

  18. Constitutive Modeling of the Dynamic-Tensile-Extrusion Test of PTFE

    NASA Astrophysics Data System (ADS)

    Resnyansky, Anatoly; Brown, Eric; Trujillo, Carl; Gray, George

    2015-06-01

    Use of polymers in defence, aerospace, and industrial applications at extreme conditions makes prediction of the behaviour of these materials very important. Crucial to this is knowledge of the physical damage response in association with phase transformations during loading, and the ability to predict this via multi-phase simulation taking thermodynamic non-equilibrium and strain-rate sensitivity into account. The current work analyses Dynamic-Tensile-Extrusion (DTE) experiments on polytetrafluoroethylene (PTFE). In particular, the phase transition during loading with subsequent tension is analysed using a two-phase rate-sensitive material model implemented in the CTH hydrocode, and the calculations are compared with experimental high-speed photography. The damage patterns and their link with the change of loading modes are analysed numerically and correlated to the test observations.

  19. Comet Shoemaker-Levy 9 Fragment Size Estimates: How Big was the Parent Body?

    NASA Technical Reports Server (NTRS)

    Crawford, David A.

    1997-01-01

    The impact of Comet Shoemaker-Levy 9 on Jupiter in July, 1994 was the largest, most energetic impact event on a planet ever witnessed. Because it broke up during a close encounter with Jupiter in 1992, it was bright enough to be discovered more than a year prior to impact, allowing the scientific community an unprecedented opportunity to assess the effects such an event would have. Many excellent observations were made from Earth-based telescopes, the Hubble Space Telescope (HST), and the Galileo spacecraft en route to Jupiter. In this paper, these observations are used in conjunction with computational simulations performed with the CTH shock-physics hydrocode to determine the sizes of the fifteen fragments that made discernible impact features on the planet. To do this, CTH was equipped with a radiative ablation model and a postprocessing radiative ray-trace capability that enabled light-flux predictions (often called the impact flash) for the viewing geometries of Galileo and ground-based observers. The five events recorded by Galileo were calibrated to give fragment size estimates. Compared against ground-based and HST observations, these estimates were extended using a least-squares analysis to assess the impacts of the remaining ten fragments. Some of the largest impactors (L, G, and K) were greater than 1 km in diameter, but the density of the fragments was low, about 0.25 g/cm(exp 3). The volume of the combined fifteen fragments would make a sphere 1.8 km in diameter. Assuming a prebreakup density of 0.5 g/cm(exp 3), the parent body of Shoemaker-Levy 9 had a probable diameter of 1.4 km. The total kinetic energy of all the impacts was equivalent to the explosive yield of 300 Gigatons of TNT.
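    The closing size estimate is simple mass conservation between the low-density fragment material and the denser pre-breakup body; a quick check using the values quoted in the abstract:

```python
def parent_diameter_km(d_combined_km, rho_fragments, rho_parent):
    """Diameter of a single sphere conserving the fragments' total mass.

    Mass scales as rho * d**3, so
        d_parent = d_combined * (rho_fragments / rho_parent) ** (1/3).
    """
    return d_combined_km * (rho_fragments / rho_parent) ** (1.0 / 3.0)

# 1.8 km combined-volume sphere at 0.25 g/cm^3, parent at 0.5 g/cm^3:
d_parent = parent_diameter_km(1.8, 0.25, 0.5)   # ~1.4 km, as quoted
```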

  20. Damage healing ability of a shape-memory-polymer-based particulate composite with small thermoplastic contents

    NASA Astrophysics Data System (ADS)

    Nji, Jones; Li, Guoqiang

    2012-02-01

    The purpose of this study is to investigate the potential of a shape-memory-polymer (SMP)-based particulate composite to heal structural-length scale damage with small thermoplastic additive contents through a close-then-heal (CTH) self-healing scheme that was introduced in a previous study (Li and Uppu 2010 Compos. Sci. Technol. 70 1419-27). The idea is to achieve reasonable healing efficiencies with minimal sacrifice in structural load capacity. By first closing cracks, the gap between two crack surfaces is narrowed and a lesser amount of thermoplastic particles is required to achieve healing. The particulate composite was fabricated by dispersing copolyester thermoplastic particles in a shape memory polymer matrix. It is found that, for small thermoplastic contents of less than 10%, the CTH scheme followed in this study heals structural-length scale damage in the SMP particulate composite to a meaningful extent and with little sacrifice of structural capacity.

  1. Gray matter trophism, cognitive impairment, and depression in patients with multiple sclerosis.

    PubMed

    Pravatà, Emanuele; Rocca, Maria A; Valsasina, Paola; Riccitelli, Gianna C; Gobbi, Claudio; Comi, Giancarlo; Falini, Andrea; Filippi, Massimo

    2017-12-01

    Cognitive impairment and depression frequently affect patients with multiple sclerosis (MS). However, the relationship between the occurrence of depression and cognitive impairment and the development of cortical atrophy has not been fully elucidated yet. To investigate the association of cortical and deep gray matter (GM) volume with depression and cognitive impairment in MS. Three-dimensional (3D) T1-weighted scans were obtained from 126 MS patients and 59 matched healthy controls. Cognitive impairment was assessed using the Brief Repeatable Battery of Neuropsychological Tests and depression with the Montgomery-Asberg Depression Rating Scale (MADRS). Using FreeSurfer and FIRST software, we assessed cortical thickness (CTh) and deep GM volumetry. Magnetic resonance imaging (MRI) variables explaining depression and cognitive impairment were investigated using factorial and classification analysis. Multivariate regression models correlated GM abnormalities with symptom severity. Compared with controls, MS patients exhibited widespread bilateral cortical thinning involving all brain lobes. Depressed MS patients showed a selective CTh decrease in fronto-temporal regions, whereas cognitively impaired MS patients exhibited widespread fronto-parietal cortical and subcortical GM atrophy. Frontal cortical thinning was the best predictor of depression (C-statistic = 0.7), whereas thinning of the right precuneus and high T2 lesion volume best predicted cognitive impairment (C-statistic = 0.8). MADRS severity correlated with right entorhinal cortex thinning, whereas cognitive impairment severity correlated with left entorhinal and thalamus atrophy. MS-related depression is linked to circumscribed CTh changes in areas involved in emotional behavior, whereas cognitive impairment is correlated with cortical and subcortical GM atrophy of circuits involved in cognition.

  2. Color centers inside crystallic active media

    NASA Astrophysics Data System (ADS)

    Mierczyk, Zygmunt; Kaczmarek, Slawomir M.; Kopczynski, Krzysztof

    1995-03-01

    This paper presents research results on color centers induced by radiation of a xenon lamp in non-doped crystals of yttrium aluminum garnet Y3Al5O12 (YAG), strontium-lanthanum aluminate SrLaAlO4 (SLAO), strontium-lanthanum gallate SrLaGa3O7 (SLGO), and in doped crystals: Nd:YAG, Cr, Tm, Ho:YAG (CTH:YAG), Nd:SLAO and Nd:SLGO. In all the investigated crystals, intense exposure to xenon lamp radiation produced additional bands near the short-wave absorption edge, connected with O-2 and O2 centers and F centers. In the case of doped crystals the observed processes are much more complicated. In CTH:YAG crystals the greatest perturbations relative to the basic state are present at the short-wave absorption edge, as well as in the absorption bands of the Cr+3 and Tm+3 ions that condition the sensitization of the Ho+3 ions. These spectral structure disturbances essentially influence the efficiency of this process, as proven during lasing investigations. In the case of SrLaGa3O7:Nd+3, exposure produced substantial changes of the absorption spectrum in the spectral ranges 346-368 nm, 429-441 nm and 450-490 nm. Those changes have an irreversible character; they disappear only after the plate is held in an oxidizing atmosphere. Investigations of Nd:SLGO, CTH:YAG, and Nd:YAG laser rods in free-running operation demonstrated that the color centers of these crystals are induced by pump radiation from the spectral region below 450 nm.

  3. Local search to improve coordinate-based task mapping

    DOE PAGES

    Balzuweit, Evan; Bunde, David P.; Leung, Vitus J.; ...

    2015-10-31

    We present a local search strategy to improve the coordinate-based mapping of a parallel job's tasks to the MPI ranks of its parallel allocation in order to reduce network congestion and the job's communication time. The goal is to reduce the number of network hops between communicating pairs of ranks. Our target is applications with a nearest-neighbor stencil communication pattern running on mesh systems with non-contiguous processor allocation, such as Cray XE and XK Systems. Utilizing the miniGhost mini-app, which models the shock physics application CTH, we demonstrate that our strategy reduces application running time while also reducing the runtime variability. Furthermore, we show that mapping quality can vary based on the selected allocation algorithm, even between allocation algorithms of similar apparent quality.
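    The hop-reduction objective can be illustrated with a minimal greedy pairwise-swap local search. This is a sketch of the general idea only, not the paper's algorithm; the names and the tiny 1-D example are made up.

```python
from itertools import combinations

def total_hops(pairs, coords):
    """Sum of Manhattan (mesh-hop) distances over communicating rank pairs."""
    return sum(sum(abs(a - b) for a, b in zip(coords[u], coords[v]))
               for u, v in pairs)

def local_search(pairs, coords):
    """Greedily swap pairs of ranks' node coordinates while the total
    hop count decreases; returns the improved mapping and its cost."""
    coords = dict(coords)
    best = total_hops(pairs, coords)
    improved = True
    while improved:
        improved = False
        for u, v in combinations(list(coords), 2):
            coords[u], coords[v] = coords[v], coords[u]
            cost = total_hops(pairs, coords)
            if cost < best:
                best, improved = cost, True
            else:
                coords[u], coords[v] = coords[v], coords[u]  # undo swap
    return coords, best

# A 1-D stencil chain 0-1-2-3 badly placed on a line of four nodes:
pairs = [(0, 1), (1, 2), (2, 3)]
mapping, hops = local_search(pairs, {0: (0,), 1: (3,), 2: (1,), 3: (2,)})
```

    On this toy instance the search drops the total from 6 hops to the optimal 3. Real mappings use 3-D torus coordinates and restrict swaps to nearby ranks to keep each improvement step cheap.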

  4. Global cloud top height retrieval using SCIAMACHY limb spectra: model studies and first results

    NASA Astrophysics Data System (ADS)

    Eichmann, Kai-Uwe; Lelli, Luca; von Savigny, Christian; Sembhi, Harjinder; Burrows, John P.

    2016-03-01

    Cloud top heights (CTHs) are retrieved for the period 1 January 2003 to 7 April 2012 using height-resolved limb spectra measured with the SCanning Imaging Absorption SpectroMeter for Atmospheric CHartographY (SCIAMACHY) on board ENVISAT (ENVIronmental SATellite). In this study, we present the retrieval code SCODA (SCIAMACHY cloud detection algorithm) based on a colour index method and test the accuracy of the retrieved CTHs in comparison to other methods. Sensitivity studies using the radiative transfer model SCIATRAN show that the method is capable of detecting cloud tops down to about 5 km and very thin cirrus clouds up to the tropopause. Volcanic particles can be detected that occasionally reach the lower stratosphere. Upper tropospheric ice clouds are observable for a nadir cloud optical thickness (COT) ≥ 0.01, which is in the subvisual range. This detection sensitivity decreases towards the lowermost troposphere. The COT detection limit for a water cloud top height of 5 km is roughly 0.1. This value is much lower than thresholds reported for passive cloud detection methods in nadir-viewing direction. Low clouds at 2 to 3 km can only be retrieved under very clean atmospheric conditions, as light scattering of aerosol particles interferes with the cloud particle scattering. We compare co-located SCIAMACHY limb and nadir cloud parameters that are retrieved with the Semi-Analytical CloUd Retrieval Algorithm (SACURA). Only opaque clouds (τN,c > 5) are detected with the nadir passive retrieval technique in the UV-visible and infrared wavelength ranges. Thus, due to the frequent occurrence of thin clouds and subvisual cirrus clouds in the tropics, larger CTH deviations are detected between both viewing geometries. Zonal mean CTH differences can be as high as 4 km in the tropics. The agreement in global cloud fields is sufficiently good. However, the land-sea contrast, as seen in nadir cloud occurrence frequency distributions, is not observed in limb geometry. 
Co-located cloud top height measurements of the limb-viewing Michelson Interferometer for Passive Atmospheric Sounding (MIPAS) on ENVISAT are compared for the period from January 2008 to March 2012. Global CTH agreement of about 1 km is observed, which is smaller than the vertical field of view of either instrument. Lower stratospheric aerosols from volcanic eruptions occasionally interfere with the cloud retrieval and inhibit the detection of tropospheric clouds. The aerosol impact on cloud retrievals was studied for the volcanoes Kasatochi (August 2008), Sarychev Peak (June 2009), and Nabro (June 2011). Long-lasting aerosol scattering is detected after these events in the Northern Hemisphere for heights above 12.5 km in tropical and polar latitudes. Aerosol top heights up to about 22 km are found in 2009 and the enhanced lower stratospheric aerosol layer persisted for about 7 months. In August 2009 about 82 % of the lower stratosphere between 30 and 70° N was filled with scattering particles, and nearly 50 % in October 2008.

  5. Millimeter Wave Atmospheric Radiometry Observations.

    DTIC Science & Technology

    1981-03-27

    [Scanned fragment, partially recoverable:] ...structure of the atmosphere would be very important. Bufton [20] combined thermal sensor technology for microthermal measurements with radiosonde... relationships with CT(h), at least for optical effects. Bufton obtained the mean-square temperature difference between two microthermal probes...

  6. A Testosterone-Related Structural Brain Phenotype Predicts Aggressive Behavior From Childhood to Adulthood

    PubMed Central

    Nguyen, Tuong-Vi; McCracken, James T; Albaugh, Matthew D; Botteron, Kelly N.; Hudziak, James J; Ducharme, Simon

    2015-01-01

    Structural covariance, the examination of anatomic correlations between brain regions, has emerged recently as a valid and useful measure of developmental brain changes. Yet the exact biological processes leading to changes in covariance, and the relation between such covariance and behavior, remain largely unexplored. The steroid hormone testosterone represents a compelling mechanism through which this structural covariance may be developmentally regulated in humans. Although steroid hormone receptors can be found throughout the central nervous system, the amygdala represents a key target for testosterone-specific effects, given its high density of androgen receptors. In addition, testosterone has been found to impact cortical thickness (CTh) across the whole brain, suggesting that it may also regulate the structural relationship, or covariance, between the amygdala and CTh. Here we examined testosterone-related covariance between amygdala volumes and whole-brain CTh, as well as its relationship to aggression levels, in a longitudinal sample of children, adolescents, and young adults 6 to 22 years old. We found: (1) testosterone-specific modulation of the covariance between the amygdala and medial prefrontal cortex (mPFC); (2) a significant relationship between amygdala-mPFC covariance and levels of aggression; and (3) mediation effects of amygdala-mPFC covariance on the relationship between testosterone and aggression. These effects were independent of sex, age, pubertal stage, estradiol levels and anxious-depressed symptoms. These findings are consistent with prior evidence that testosterone targets the neural circuits regulating affect and impulse regulation, and show, for the first time in humans, how androgen-dependent organizational effects may regulate a very specific, aggression-related structural brain phenotype from childhood to young adulthood. PMID:26431805

  7. A testosterone-related structural brain phenotype predicts aggressive behavior from childhood to adulthood.

    PubMed

    Nguyen, Tuong-Vi; McCracken, James T; Albaugh, Matthew D; Botteron, Kelly N; Hudziak, James J; Ducharme, Simon

    2016-01-01

    Structural covariance, the examination of anatomic correlations between brain regions, has emerged recently as a valid and useful measure of developmental brain changes. Yet the exact biological processes leading to changes in covariance, and the relation between such covariance and behavior, remain largely unexplored. The steroid hormone testosterone represents a compelling mechanism through which this structural covariance may be developmentally regulated in humans. Although steroid hormone receptors can be found throughout the central nervous system, the amygdala represents a key target for testosterone-specific effects, given its high density of androgen receptors. In addition, testosterone has been found to impact cortical thickness (CTh) across the whole brain, suggesting that it may also regulate the structural relationship, or covariance, between the amygdala and CTh. Here, we examined testosterone-related covariance between amygdala volumes and whole-brain CTh, as well as its relationship to aggression levels, in a longitudinal sample of children, adolescents, and young adults 6-22 years old. We found: (1) testosterone-specific modulation of the covariance between the amygdala and medial prefrontal cortex (mPFC); (2) a significant relationship between amygdala-mPFC covariance and levels of aggression; and (3) mediation effects of amygdala-mPFC covariance on the relationship between testosterone and aggression. These effects were independent of sex, age, pubertal stage, estradiol levels and anxious-depressed symptoms. These findings are consistent with prior evidence that testosterone targets the neural circuits regulating affect and impulse regulation, and show, for the first time in humans, how androgen-dependent organizational effects may regulate a very specific, aggression-related structural brain phenotype from childhood to young adulthood. Copyright © 2015 Elsevier Ltd. All rights reserved.

  8. Global aerosol effects on convective clouds

    NASA Astrophysics Data System (ADS)

    Wagner, Till; Stier, Philip

    2013-04-01

Atmospheric aerosols affect cloud properties, and thereby the radiation balance of the planet and the water cycle. The influence of aerosols on clouds is dominated by an increase in cloud droplet and ice crystal number concentrations (CDNC/ICNC) as more aerosols act as cloud condensation nuclei and ice nuclei. In deep convective clouds this increase in CDNC/ICNC is hypothesised to increase precipitation through cloud invigoration: delayed warm rain formation enhances freezing and the associated latent heat release. Satellite studies robustly show an increase of cloud top height (CTH) and precipitation with increasing aerosol optical depth (AOD, a proxy for aerosol amount). To represent aerosol effects and study their influence on convective clouds in the global climate aerosol model ECHAM-HAM, we substitute the standard convection parameterisation, which uses one mean convective cloud per grid column, with the convective cloud field model (CCFM), which simulates a spectrum of convective clouds, each with distinct values of radius, mixing ratios, vertical velocity, height and en-/detrainment. Aerosol activation and droplet nucleation in convective updrafts at cloud base is the primary driver of microphysical aerosol effects. To produce realistic estimates of vertical velocity at cloud base we use an entraining dry-parcel sub-cloud model triggered by perturbations of sensible and latent heat at the surface. Aerosol activation at cloud base is modelled with a mechanistic, Köhler-theory-based scheme, which couples the aerosols to the convective microphysics. Relationships between CTH and AOD, and between precipitation and AOD, produced by this novel model generally agree with satellite-based estimates. Through model experiments and analysis of the modelled cloud processes we are able to investigate the main drivers of the relationship between CTH/precipitation and AOD.
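The cloud-base activation step described in this abstract can be illustrated with a minimal κ-Köhler sketch (in the style of Petters and Kreidenweis, 2007), assuming a single hygroscopicity parameter and fixed surface tension and temperature; the function names and the simple size-cutoff activation rule are illustrative assumptions, not the ECHAM-HAM/CCFM implementation:

```python
import math

def critical_supersaturation(dry_diam_m, kappa, temp_k=285.0):
    """Critical (fractional) supersaturation of a particle with dry
    diameter dry_diam_m and hygroscopicity kappa, via kappa-Koehler
    theory. Sketch only -- not the ECHAM-HAM/CCFM activation code."""
    sigma_w = 0.072   # surface tension of water, N/m (approximate)
    M_w = 0.018       # molar mass of water, kg/mol
    R = 8.314         # gas constant, J/(mol K)
    rho_w = 1000.0    # density of water, kg/m^3
    A = 4.0 * sigma_w * M_w / (R * temp_k * rho_w)  # Kelvin term, m
    return math.exp(math.sqrt(4.0 * A**3 / (27.0 * kappa * dry_diam_m**3))) - 1.0

def activated_fraction(dry_diams_m, kappa, s_max):
    """Fraction of the aerosol population whose critical supersaturation
    is reached at the peak supersaturation s_max of the updraft."""
    activated = [d for d in dry_diams_m
                 if critical_supersaturation(d, kappa) <= s_max]
    return len(activated) / len(dry_diams_m)
```

Larger or more hygroscopic particles activate at lower supersaturation, which is what couples the aerosol size distribution to CDNC at cloud base.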

  9. Numerical Estimation of the Spent Fuel Ratio

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lindgren, Eric R.; Durbin, Samuel; Wilke, Jason

Sabotage of spent nuclear fuel casks remains a concern nearly forty years after attacks against shipment casks were first analyzed, and has a renewed relevance in the post-9/11 environment. A limited number of full-scale tests and supporting efforts using surrogate materials, typically depleted uranium dioxide (DUO2), have been conducted in the interim to more definitively determine the source term from these postulated events. However, the validity of these large-scale results remains in question due to the lack of a defensible spent fuel ratio (SFR), defined as the amount of respirable aerosol generated by an attack on a mass of spent fuel compared to that of an otherwise identical surrogate. Previous attempts to define the SFR in the 1980s resulted in estimates ranging from 0.42 to 12 and involved suboptimal experimental techniques and data comparisons. Because of the large uncertainty surrounding the SFR, estimates of releases from security-related events may be unnecessarily conservative. Credible arguments exist that the SFR does not exceed a value of unity. A defensible determination of the SFR in this lower range would greatly reduce the calculated risk associated with the transport and storage of spent nuclear fuel in dry cask systems. In the present work, the shock physics codes CTH and ALE3D were used to simulate spent nuclear fuel (SNF) and DUO2 targets impacted by a high-velocity jet at an ambient temperature condition. These preliminary results are used to illustrate an approach to estimate the respirable release fraction for each type of material and, ultimately, an estimate of the SFR.
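The SFR as defined in this abstract is a simple ratio of respirable release masses, which can be written directly; the numerical values below are purely illustrative placeholders, not results from the CTH/ALE3D simulations:

```python
def spent_fuel_ratio(resp_mass_snf_kg, resp_mass_surrogate_kg):
    """Spent fuel ratio (SFR) per the abstract's definition: respirable
    aerosol mass generated from spent fuel divided by that from an
    otherwise identical DUO2 surrogate under the same attack conditions."""
    if resp_mass_surrogate_kg <= 0:
        raise ValueError("surrogate release mass must be positive")
    return resp_mass_snf_kg / resp_mass_surrogate_kg

# Hypothetical release masses, for illustration only:
sfr = spent_fuel_ratio(2.0e-4, 5.0e-4)  # 0.4, i.e. below unity
```

An SFR below unity, as argued in the abstract, means the surrogate tests conservatively bound the spent-fuel release.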

  10. Which bank? A guardian model for regulation of embryonic stem cell research in Australia.

    PubMed

    McLennan, A

    2007-08-01

    In late 2005 the Legislation Review: Prohibition of Human Cloning Act 2002 (Cth) and the Research Involving Human Embryos Act 2002 (Cth) recommended the establishment of an Australian stem cell bank. This article aims to address a lack of discussion of issues surrounding stem cell banking by suggesting possible answers to the questions of whether Australia should establish a stem cell bank and what its underlying philosophy and functions should be. Answers are developed through an analysis of regulatory, scientific and intellectual property issues relating to embryonic stem cell research in the United Kingdom, United States and Australia. This includes a detailed analysis of the United Kingdom Stem Cell Bank. It is argued that a "guardian" model stem cell bank should be established in Australia. This bank would aim to promote the maximum public benefit from human embryonic stem cell research by providing careful regulatory oversight and addressing ethical issues, while also facilitating research by addressing practical scientific concerns and intellectual property issues.

  11. Zapotec Simulations of Momentum Transfer for Impacts into Thin Aluminum Targets

    NASA Astrophysics Data System (ADS)

    Helminiak, Nathaniel; Sable, Peter; Gullerud, Arne; Hollenshead, Jeromy; Hertel, Gene

    2017-06-01

The momentum transfer between small (3.2 mm) aluminum spheres and thin aluminum targets was characterized using the numerical solver Zapotec, which couples the CTH hydrocode with the transient finite element code Sierra/SM. The results are compared to experimental work conducted at the NASA Ames Research Center. Square 15 × 15 cm² aluminum targets, ranging in thickness from 5 to 48.2 mm, were impacted at velocities from 1 to 9 km/s. From these tests, the components of spray and ejecta momentum along the axis of impact, normal to the plate surface, were measured. Variations of hole diameter and target mass loss with initial projectile velocity were also recorded. The data presented cover a range of impact behavior, from inelastic collision through spallation to complete penetration. Sandia is a multiprogram laboratory, operated by Sandia Corporation, a Lockheed Martin Company, for the United States Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.

  12. Report of experiments and evidence for ASC L2 milestone 4467 : demonstration of a legacy application's path to exascale.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Curry, Matthew L.; Ferreira, Kurt Brian; Pedretti, Kevin Thomas Tauke

    2012-03-01

This report documents thirteen of Sandia's contributions to the Computational Systems and Software Environment (CSSE) within the Advanced Simulation and Computing (ASC) program between fiscal years 2009 and 2012, and describes their impact on ASC applications. Most contributions are implemented at lower software levels, allowing application improvement without source code changes. Improvements are identified in areas such as reduced run time, characterization of power usage, and Input/Output (I/O). Other experiments are more forward looking, demonstrating potential bottlenecks using mini-application versions of the legacy codes and simulating their network activity on exascale-class hardware. The purpose of this report is to demonstrate that the team has completed milestone 4467, Demonstration of a Legacy Application's Path to Exascale. Cielo is expected to be the last capability system on which existing ASC codes can run without significant modifications. This assertion will be tested to determine where the breaking point is for an existing highly scalable application. The goal is to stretch the performance boundaries of the application by applying recent CSSE R&D in areas such as resilience, power, I/O, visualization services, SMARTMAP, lightweight kernels (LWKs), virtualization, simulation, and feedback loops. Dedicated system time reservations and/or CCC allocations will be used to quantify the impact of system-level changes to extend the life and performance of the ASC code base. Finally, a simulation of anticipated exascale-class hardware will be performed using SST to supplement the calculations. Chapter 15 presents the CSSE work that sought to identify the breaking point in two ASC legacy applications, Charon and CTH; their mini-app versions were also employed to complete the task. There is no single breaking point, as more than one issue was found with the two codes. The results show that applications can expect to encounter performance issues related to the computing environment, system software, and algorithms. Careful profiling of runtime performance, combined with knowledge of system software and application source code, will be needed to identify the source of an issue.

  13. Constitutive modeling of the dynamic-tensile-extrusion test of PTFE

    NASA Astrophysics Data System (ADS)

    Resnyansky, A. D.; Brown, E. N.; Trujillo, C. P.; Gray, G. T.

    2017-01-01

Use of polymers in defense, aerospace and industrial applications under extreme loading conditions makes prediction of the behavior of these materials very important. Crucial to this is knowledge of the physical damage response in association with phase transformations during loading, and the ability to predict this via multi-phase simulation accounting for thermodynamic non-equilibrium and strain-rate sensitivity. The current work analyzes Dynamic-Tensile-Extrusion (Dyn-Ten-Ext) experiments on polytetrafluoroethylene (PTFE). In particular, the phase transition during loading and subsequent tension is analyzed using a two-phase rate-sensitive material model implemented in the CTH hydrocode. The calculations are compared with experimental high-speed photography. Deformation patterns and their link with changing loading modes are analyzed numerically and correlated to the test observations. It is concluded that the phase transformation is not as critical to the response of PTFE under Dyn-Ten-Ext loading as it is during Taylor rod impact testing.

  14. Enhancements and Analysis of CTH Software for Underbody Blast

    DTIC Science & Technology

    2013-02-01

Trembelay, J., "Validation of a Loading Model for Simulating Blast Mine Effects on Armoured Vehicles," 7th International LS-DYNA Users Conference.

  15. Measurements of W Erosion using UV Emission from DIII-D and CTH

    NASA Astrophysics Data System (ADS)

    Johnson, Curtis; Ennis, David; Loch, Stuart; Balance, Connor; Victor, Brian; Allen, Steve; Samuell, Cameron; Abrams, Tyler; Unterberg, Ezekial

    2017-10-01

Erosion of Plasma Facing Components (PFCs) will play a critical role in establishing the performance of reactor-relevant fusion devices, particularly for tungsten (W) divertor targets. Erosion can be diagnosed from spectral line emission together with atomic coefficients representing the `ionizations per photon' (S/XB). Emission from W I is most intense in the UV region. Thus, UV survey spectrometers (200-400 nm) are used to diagnose W PFC erosion in the DIII-D divertor and from a W-tipped probe in the CTH experiment. Nineteen W emission lines in the UV region are identified between the two experiments, allowing for multiple S/XB erosion measurements. Initial UV-based W erosion measurements are compared to erosion measured using the 400.9 nm W I line. Complete UV spectra will be presented and compared to synthetic spectra for varying plasma conditions. The impact of metastable states on the S/XB will be presented, as well as possible electron temperature and density diagnosis from W I line ratios. Work supported by USDOE Grants DE-SC0015877 & DE-FC02-04ER54698.
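The ionizations-per-photon method described above converts a measured line brightness into a gross influx of eroded atoms. A minimal sketch of that conversion, assuming an absolutely calibrated brightness integrated over the full 4π solid angle and a tabulated S/XB coefficient (the function name and the example values are hypothetical; real S/XB values come from collisional-radiative modeling and depend on local electron temperature and density):

```python
import math

def gross_erosion_flux(line_brightness, s_xb):
    """Gross eroded-atom influx (atoms m^-2 s^-1) from a measured
    spectral line brightness (photons m^-2 s^-1 sr^-1) using the
    standard relation: Gamma = 4*pi * brightness * (S/XB)."""
    return 4.0 * math.pi * line_brightness * s_xb

# Hypothetical numbers: a W I line brightness of 1e17 ph/(m^2 s sr)
# with S/XB = 20 ionizations per photon.
flux = gross_erosion_flux(1e17, 20.0)
```

Identifying many UV lines, as in the abstract, gives several independent applications of this relation that can be cross-checked against the 400.9 nm measurement.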

  16. Effect of hindpaw electrical stimulation on capillary flow heterogeneity and oxygen delivery (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Li, Yuandong; Wei, Wei; Li, Chenxi; Wang, Ruikang K.

    2017-02-01

We report a novel use of optical coherence tomography (OCT) based angiography to visualize and quantify the dynamic response of cerebral capillary flow patterns in mice upon hindpaw electrical stimulation, through measurement of the capillary transit-time heterogeneity (CTH) and capillary mean transit time (MTT) over a wide dynamic range in a large number of vessels in vivo. The OCT system had a central wavelength of 1310 nm, a spatial resolution of 8 µm and a system dynamic range of 105 dB at an imaging rate of 92 kHz. Mapping of dynamic cerebral microcirculation was enabled by an optical microangiography protocol. From the imaging results, spatial homogenization of capillary velocity (decreased CTH) was observed in the region of interest (ROI) corresponding to the stimulation, along with an increase in the MTT in the ROI to maintain sufficient oxygen exchange within the brain tissue during functional activation. We validated the oxygen consumption associated with the increased MTT by demonstrating an increase in deoxygenated hemoglobin (HbR) during the stimulation using laser speckle contrast imaging.
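The two summary statistics in this abstract have standard definitions: MTT is the (optionally flow-weighted) mean of the capillary transit-time distribution and CTH is its standard deviation. A minimal sketch of those definitions (not the authors' OCT processing pipeline):

```python
import numpy as np

def mtt_and_cth(transit_times_s, weights=None):
    """Capillary mean transit time (MTT) and capillary transit-time
    heterogeneity (CTH, the standard deviation of the transit-time
    distribution), optionally weighted, e.g. by per-capillary flow."""
    t = np.asarray(transit_times_s, dtype=float)
    mtt = np.average(t, weights=weights)
    cth = np.sqrt(np.average((t - mtt) ** 2, weights=weights))
    return mtt, cth
```

A "homogenization" of capillary flow during activation, as reported above, shows up as a drop in CTH even while MTT rises.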

  17. Coupled Retrieval of Liquid Water Cloud and Above-Cloud Aerosol Properties Using the Airborne Multiangle SpectroPolarimetric Imager (AirMSPI)

    NASA Astrophysics Data System (ADS)

    Xu, Feng; van Harten, Gerard; Diner, David J.; Davis, Anthony B.; Seidel, Felix C.; Rheingans, Brian; Tosca, Mika; Alexandrov, Mikhail D.; Cairns, Brian; Ferrare, Richard A.; Burton, Sharon P.; Fenn, Marta A.; Hostetler, Chris A.; Wood, Robert; Redemann, Jens

    2018-03-01

An optimization algorithm is developed to retrieve liquid water cloud properties including cloud optical depth (COD), droplet size distribution and cloud top height (CTH), and above-cloud aerosol properties including aerosol optical depth (AOD), single-scattering albedo, and microphysical properties from sweep-mode observations by Jet Propulsion Laboratory's Airborne Multiangle SpectroPolarimetric Imager (AirMSPI) instrument. The retrieval is composed of three major steps: (1) initial estimate of the mean droplet size distribution across the entire image of 80-100 km along track by 10-25 km across track from polarimetric cloudbow observations, (2) coupled retrieval of image-scale cloud and above-cloud aerosol properties by fitting the polarimetric data at all observation angles, and (3) iterative retrieval of 1-D radiative transfer-based COD and droplet size distribution at pixel scale (25 m) by establishing relationships between COD and droplet size and fitting the total radiance measurements. Our retrieval is tested using 134 AirMSPI data sets acquired during the National Aeronautics and Space Administration (NASA) field campaign ObseRvations of Aerosols above CLouds and their intEractionS (ORACLES). The retrieved above-cloud AOD and CTH are compared to coincident HSRL-2 (NASA Langley Research Center) data, and COD and droplet size distribution parameters (effective radius reff and effective variance veff) are compared to coincident Research Scanning Polarimeter (RSP, NASA Goddard Institute for Space Studies) data. Mean absolute differences between AirMSPI and HSRL-2 retrievals of above-cloud AOD at 532 nm and CTH are 0.03 and <0.5 km, respectively. At RSP's footprint scale (∼323 m), mean absolute differences between RSP and AirMSPI retrievals of COD, reff, and veff in the cloudbow area are 2.33, 0.69 μm, and 0.020, respectively. Neglect of smoke aerosols above cloud leads to an underestimate of image-averaged COD by 15%.

  18. Developmental effects of androgens in the human brain.

    PubMed

    Nguyen, T-V

    2018-02-01

    Neuroendocrine theories of brain development posit that androgens play a crucial role in sex-specific cortical growth, although little is known about the differential effects of testosterone and dehydroepiandrosterone (DHEA) on cortico-limbic development and cognition during adolescence. In this context, the National Institutes of Health Study of Normal Brain Development, a longitudinal study of typically developing children and adolescents aged 4-24 years (n=433), offers a unique opportunity to examine the developmental effects of androgens on cortico-limbic maturation and cognition. Using data from this sample, our group found that higher testosterone levels were associated with left-sided decreases in cortical thickness (CTh) in post-pubertal boys, particularly in the prefrontal cortex, compared to right-sided increases in CTh in somatosensory areas in pre-pubertal girls. Prefrontal-amygdala and prefrontal-hippocampal structural covariance (considered to reflect structural connectivity) also varied according to testosterone levels, with the testosterone-related brain phenotype predicting higher aggression levels and lower executive function, particularly in boys. By contrast, DHEA was associated with a pre-pubertal increase in CTh of several regions involved in cognitive control in both boys and girls. Covariance within several cortico-amygdalar structural networks also varied as a function of DHEA levels, with the DHEA-related brain phenotype predicting improvements in visual attention in both boys and girls. DHEA-related cortico-hippocampal structural covariance, on the other hand, predicted higher scores on a test of working memory. Interestingly, there were significant interactions between testosterone and DHEA, such that DHEA tended to mitigate the anti-proliferative effects of testosterone on brain structure. 
In sum, testosterone-related effects on the developing brain may lead to detrimental effects on cortical functions (ie, higher aggression and lower executive function), whereas DHEA-related effects may optimise cortical functions (ie, better attention and working memory), perhaps by decreasing the influence of amygdalar and hippocampal afferents on cortical functions. © 2017 British Society for Neuroendocrinology.

  19. Atmospheric ammonia measurements at low concentration sites in the northeastern USA: implications for total nitrogen deposition and comparison with CMAQ estimates

    EPA Science Inventory

    We evaluated the relative importance of dry deposition of ammonia (NH3) gas at several headwater areas of the Susquehanna River, the largest single source of nitrogen pollution to Chesapeake Bay, including three that are remote from major sources of NH3 emissions (CTH, ARN, and K...

  20. The Validation of Cloud Retrieval Algorithms Using Synthetic Datasets

    NASA Astrophysics Data System (ADS)

    Kokhanovsky, Alexander; Fischer, Jurgen; Linstrot, Rasmus; Meirink, Jan Fokke; Poulsen, Caroline; Preusker, Rene; Siddans, Richard; Thomas, Gareth; Arnold, Chris; Grainger, Roy; Lilli, Luca; Rozanov, Vladimir

    2012-11-01

We have performed an inter-comparison study of cloud property retrievals using algorithms initially developed for AATSR (ORAC, RAL-Oxford University), AVHRR and SEVIRI (CPP, KNMI), SCIAMACHY/GOME (SACURA, University of Bremen), and MERIS (ANNA, Free University of Berlin). The accuracy of retrievals of cloud optical thickness (COT), effective radius (ER) of droplets, and cloud top height (CTH) is discussed.

  1. Effect of Shockwave Curvature on Run Distance Observed with a Modified Wedge Test

    NASA Astrophysics Data System (ADS)

    Lee, Richard; Dorgan, Robert; Sutherland, Gerrit; Benedetta, Ashley; Milby, Christopher

    2011-06-01

The effect of wave curvature on shock initiation in PBXN-110 was investigated using a modified wedge test configuration. Various thicknesses of PBXN-110 donor slabs were used to define the shockwave curvature introduced to wedge samples of the same explosive. The donor slabs were initiated with line-wave generators so that the introduced shock would have the same shape, magnitude and duration across the entire input surface of the wedge. The shock parameters were varied for a given donor thickness via different widths of PMMA spacers placed between the donor and the wedge. A framing camera was used to observe where initiation occurred along the face of the wedge. Initiation always occurred at the center of the shock front rather than at the sides, as reported by others using a much smaller test format. Results were compared to CTH calculations to determine whether there were effects associated with highly curved shock fronts that could not be adequately predicted. The run distance predicted by CTH for a 50.8 mm thick donor slab (low curvature) compared favorably with experimental results. However, results from thinner donor slabs (higher curvature) indicate more sensitive behavior than the simulations predicted.

  2. Effect of shockwave curvature on run distance observed with a modified wedge test

    NASA Astrophysics Data System (ADS)

    Lee, Richard; Dorgan, Robert J.; Sutherland, Gerrit; Benedetta, Ashley; Milby, Christopher

    2012-03-01

The effect of wave curvature on shock initiation in PBXN-110 was investigated using a modified wedge test configuration. Various widths of PBXN-110 donor slabs were used to define the shockwave curvature introduced to wedge samples of the same explosive. The donor slabs were initiated with line-wave generators so that the shock from the donor would have the same shape, magnitude and duration across the entire input surface of the wedge. The shock parameters were varied for a given donor with PMMA spacers placed between the donor and the wedge sample. A high-speed electronic framing camera was used to observe where initiation occurred along the face of the wedge. Initiation always occurred at the center of the shock front rather than at the sides, as reported by others using a much smaller test format. Results were compared to CTH calculations to determine whether there were effects associated with highly curved shock fronts that could not be adequately predicted. The run distance predicted by CTH for a 50.8 mm wide donor slab (low curvature) compared favorably with experimental results. However, results from thinner donor slabs (higher curvature) indicate more sensitive behavior than the simulations predicted.

  3. Cortical thickness and surface area correlates with cognitive dysfunction among first-episode psychosis patients.

    PubMed

    Haring, L; Müürsepp, A; Mõttus, R; Ilves, P; Koch, K; Uppin, K; Tarnovskaja, J; Maron, E; Zharkovsky, A; Vasar, E; Vasar, V

    2016-07-01

    In studies using magnetic resonance imaging (MRI), some have reported specific brain structure-function relationships among first-episode psychosis (FEP) patients, but findings are inconsistent. We aimed to localize the brain regions where cortical thickness (CTh) and surface area (cortical area; CA) relate to neurocognition, by performing an MRI on participants and measuring their neurocognitive performance using the Cambridge Neuropsychological Test Automated Battery (CANTAB), in order to investigate any significant differences between FEP patients and control subjects (CS). Exploration of potential correlations between specific cognitive functions and brain structure was performed using CANTAB computer-based neurocognitive testing and a vertex-by-vertex whole-brain MRI analysis of 63 FEP patients and 30 CS. Significant correlations were found between cortical parameters in the frontal, temporal, cingular and occipital brain regions and performance in set-shifting, working memory manipulation, strategy usage and sustained attention tests. These correlations were significantly dissimilar between FEP patients and CS. Significant correlations between CTh and CA with neurocognitive performance were localized in brain areas known to be involved in cognition. The results also suggested a disrupted structure-function relationship in FEP patients compared with CS.

  4. A projected decrease in lightning under climate change

    NASA Astrophysics Data System (ADS)

    Finney, Declan L.; Doherty, Ruth M.; Wild, Oliver; Stevenson, David S.; MacKenzie, Ian A.; Blyth, Alan M.

    2018-03-01

Lightning strongly influences atmospheric chemistry [1-3], and impacts the frequency of natural wildfires [4]. Most previous studies project an increase in global lightning with climate change over the coming century [1,5-7], but these typically use parameterizations of lightning that neglect cloud ice fluxes, a component generally considered to be fundamental to thunderstorm charging [8]. As such, the response of lightning to climate change is uncertain. Here, we compare lightning projections for 2100 using two parameterizations: the widely used cloud-top height (CTH) approach [9], and a new upward cloud ice flux (IFLUX) approach [10] that overcomes previous limitations. In contrast to the previously reported global increase in lightning based on CTH, we find a 15% decrease in total lightning flash rate with IFLUX in 2100 under a strong global warming scenario. Differences are largest in the tropics, where most lightning occurs, with implications for the estimation of future changes in tropospheric ozone and methane, as well as differences in their radiative forcings. These results suggest that lightning schemes more closely related to cloud ice and microphysical processes are needed to robustly estimate future changes in lightning and atmospheric composition.
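The CTH approach referenced in this abstract is commonly implemented following Price and Rind (1992), in which the per-cell flash rate scales as a power of cloud-top height, with different exponents over land and ocean. A sketch with the commonly quoted constants (flashes per minute per convective cell; treat the exact coefficients as assumptions to be verified against the original paper):

```python
def cth_flash_rate(cloud_top_height_km, land=True):
    """Cloud-top-height lightning parameterization in the Price & Rind
    (1992) form. Returns a flash rate (flashes/min per convective cell);
    coefficients as commonly quoted, included here for illustration."""
    H = cloud_top_height_km
    if land:
        return 3.44e-5 * H ** 4.9   # continental clouds
    return 6.4e-4 * H ** 1.73       # marine clouds
```

The steep H^4.9 dependence over land is why CTH-based schemes project lightning increases as storms deepen in a warmer climate, whereas an ice-flux-based scheme can respond differently, as the abstract reports.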

  5. Identifying and Determining Halocarbons in Water Using Headspace Gas Chromatography.

    DTIC Science & Technology

    1981-10-01

Chlorination is a well-established method of disinfecting water for drinking and of disinfecting municipal wastewater prior to disposal. The recent discovery that persistent chloro-organic molecules are formed in this process...

  6. XML Technology Assessment

    DTIC Science & Technology

    2001-01-01

The CTH is a powerful model that will allow more than just message systems to exchange information. It could be used for object-oriented databases, as... ...of the Naval Integrated Tactical Environmental System I (NITES I) is used as a case study to demonstrate the utility of this distributed component...

  7. Effects of forest management on soil carbon: results of some long-term resampling studies

    Treesearch

    D.W. Johnson; Jennifer D. Knoepp; Wayne T. Swank; J. Shan; L.A. Morris; David H. D.H. van Lear; P.R. Kapeluck

    2002-01-01

The effects of harvest intensity (sawlog, SAW; whole tree, WTH; and complete tree, CTH) on biomass and soil carbon (C) were studied at four forested sites in the Southeastern United States (mixed deciduous forests at Oak Ridge, TN, and Coweeta, NC; Pinus taeda at Clemson, SC; and P. elliottii at Bradford, FL). In general, harvesting had no lasting...

  8. First-Principle Simulation of Blast Barrier Effectiveness for the Development of Simplified Design Tools

    DTIC Science & Technology

    2010-12-01

...several types of problems involving blast propagation. Mastin et al. (1995) compared CTH simulations to free-field incident pressure as predicted by... a measure of accuracy and efficiency. To provide this direct comparison, a series of 2D-axisymmetric free-field air blast simulations were...

  9. Validation of VIIRS Cloud Base Heights at Night Using Ground and Satellite Measurements over Alaska

    NASA Astrophysics Data System (ADS)

    NOH, Y. J.; Miller, S. D.; Seaman, C.; Forsythe, J. M.; Brummer, R.; Lindsey, D. T.; Walther, A.; Heidinger, A. K.; Li, Y.

    2016-12-01

Knowledge of Cloud Base Height (CBH) is critical to describing cloud radiative feedbacks in numerical models and is of practical significance to aviation communities. We have developed a new CBH algorithm constrained by Cloud Top Height (CTH) and Cloud Water Path (CWP) by performing a statistical analysis of A-Train satellite data; it includes an extinction-based method for thin cirrus. In the algorithm, cloud geometric thickness is derived from upstream CTH and CWP input and subtracted from CTH to generate the topmost-layer CBH. The CBH information is a key parameter for an improved Cloud Cover/Layers product. The algorithm has been applied to the Visible Infrared Imaging Radiometer Suite (VIIRS) onboard the Suomi NPP spacecraft. Nighttime cloud optical properties for CWP are retrieved with the nighttime lunar cloud optical and microphysical properties (NLCOMP) algorithm, based on a lunar reflectance model for the VIIRS Day/Night Band (DNB), which measures nighttime visible light such as moonlight. The DNB's capability to fill the polar-winter and nighttime gap in cloud observations addresses a longstanding shortfall of conventional radiometers. The CBH products have been evaluated extensively against CloudSat data, and the results show that the new algorithm yields significantly improved performance over the original VIIRS CBH algorithm. However, since CloudSat now operates during daytime only due to a battery anomaly, the nighttime performance has not been fully assessed. This presentation will show our approach to assessing the performance of the CBH algorithm at night. VIIRS CBHs are retrieved over the Alaska region from October 2015 to April 2016 using the Clouds from AVHRR Extended (CLAVR-x) processing system. Ground-based measurements from a ceilometer and a micropulse lidar at the Atmospheric Radiation Measurement (ARM) site on the North Slope of Alaska are used for the analysis. Local weather conditions are checked using temperature and precipitation observations at the site. CALIPSO data with near-simultaneous collocation are added for multi-layered cloud cases, which may have high clouds aloft beyond the reach of the ground measurements. Multi-month performance statistics and case studies will be shown, and additional efforts toward algorithm refinement will also be discussed.
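The structure of the algorithm described above — estimate a cloud geometric thickness from CWP, subtract it from CTH, and clamp at the surface — can be sketched as follows; the power-law thickness fit and its coefficients are placeholders, not the operational A-Train-derived regression:

```python
def cloud_base_height(cth_km, cwp_gm2, a=0.08, b=0.4):
    """Topmost-layer cloud base height (km): CTH minus an estimated
    cloud geometric thickness, clamped at the surface. The
    thickness(CWP) power law below is a hypothetical placeholder for
    the statistical fit derived from A-Train data."""
    thickness_km = a * cwp_gm2 ** b  # hypothetical fit, km
    return max(cth_km - thickness_km, 0.0)
```

For example, a cloud with CTH of 8 km and CWP of 100 g/m² gets a sub-kilometre thickness under these placeholder coefficients, so the retrieved CBH sits just below cloud top; physically thick clouds with very low tops clamp to the surface.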

  10. Impact erosion of planetary atmospheres

    NASA Astrophysics Data System (ADS)

    Shuvalov, Valery

    1999-06-01

The problem of the evolution of planetary atmospheres due to impacts of large cosmic bodies was investigated by Ahrens, O'Keefe, Cameron, Hunten and others. These studies focused mainly on atmosphere growth under impact devolatilization and atmosphere losses due to escape of high-velocity ejecta. Most of the results concerning atmosphere erosion were based on the assumption that the atmosphere itself does not significantly influence the ejecta evolution. However, more detailed investigations show that atmospheric drag is important at least for 1-10 km impactors. On the other hand, the theory of large explosions in an exponential atmosphere is not applicable in the case under consideration because of the influence of the trail created during the body's flight through the atmosphere. In the present study the problem of 1-10 km asteroid impacts against the Earth is investigated using the multi-material hydrocode SOVA. This code is similar to the widely used CTH system and allows modeling of all stages of the impact (penetration into the atmosphere, collision with the ground surface covered by a water basin, ejecta evolution). The air mass ejected from each altitude, depending on impactor size and velocity, is determined. Apart from impacts into the present-day atmosphere, erosion of a dense proto-atmosphere is also considered.

  11. Overview of Compact Toroidal Hybrid research program progress and plans

    NASA Astrophysics Data System (ADS)

    Maurer, David; Ennis, David; Hanson, James; Hartwell, Gregory; Herfindal, Jeffrey; Knowlton, Stephen; Ma, Xingxing; Pandya, Mihir; Roberds, Nicholas; Ross, Kevin; Traverso, Peter

    2016-10-01

    The goals of the Compact Toroidal Hybrid (CTH) research program are to: (1) investigate the dependence of plasma disruptive behavior on the level of applied 3D magnetic shaping; (2) test and advance the V3FIT reconstruction code and NIMROD modeling of CTH; and (3) study the implementation of an island divertor. Progress towards these goals and other developments are summarized. The disruptive density limit exceeds the Greenwald limit as the vacuum transform is increased, but a threshold for avoidance is not observed. Low-q disruptions, with 1.1 < q(a) < 2.0, cease to occur if the vacuum transform is raised above 0.07. Application of vacuum transform can reduce and eliminate the vertical drift of elongated discharges that would otherwise be vertically unstable. Reconstructions using external magnetics give accurate estimates for quantities near the plasma boundary, and internal diagnostics have been implemented to extend the range of accuracy into the plasma core. Sawtooth behavior has been reproducibly modified with external transform; NIMROD is used to model these observations and reproduces the experimental trends. An island divertor design has begun with connection length studies to model energy deposition on divertor plates located in an edge 1/3 island, as well as the study of a non-resonant divertor configuration. This work is supported by U.S. Department of Energy Grant No. DE-FG02-00ER54610.

  12. Air Blast Calculations

    DTIC Science & Technology

    2013-07-01

    composition C-4 (C4), a polymer-bonded explosive (PBXN-109), and nitromethane (NM). Each charge diameter (CD) is assumed to be 17.46 cm (equivalent to a 10-lb... explosive detonates, the rapid expansion of reaction gases generates a shock wave that propagates into the surrounding medium. The pressure history at a...spherical explosive charge suspended in air. A comparison of the results obtained using CTH is made to ones generated using the Friedlander
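The Friedlander waveform that the CTH results are compared against can be sketched as follows; the peak overpressure and positive-phase duration used here are arbitrary illustrative values, not parameters of the C4, PBXN-109, or NM charges in the report:

```python
import math

def friedlander(t, p_peak, t_pos, b=1.0):
    """Modified Friedlander overpressure history after shock arrival at t = 0:
    p(t) = p_peak * (1 - t/t_pos) * exp(-b * t/t_pos).
    For t > t_pos the expression goes negative, modeling the negative phase.
    Parameters here are illustrative, not fitted to any specific charge."""
    if t < 0.0:
        return 0.0  # shock has not yet arrived
    tau = t / t_pos
    return p_peak * (1.0 - tau) * math.exp(-b * tau)

# Peak overpressure at arrival, zero crossing at the end of the positive phase
print(friedlander(0.0, 500.0, 2.0e-3))    # 500.0 (e.g. kPa)
print(friedlander(2.0e-3, 500.0, 2.0e-3))  # 0.0
```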

  13. Overview of recent results and future plans on the Compact Toroidal Hybrid experiment

    NASA Astrophysics Data System (ADS)

    Maurer, D. A.; Archmiller, M. C.; Cianciosa, M. R.; Ennis, D. A.; Hanson, J. D.; Hartwell, G. J.; Hebert, J. D.; Herfindal, J. L.; Knowlton, S. F.; Ma, X.; Massidda, S.; Pandya, M. D.; Roberds, N. A.; Traverso, P. J.

    2015-11-01

    The goals of the Compact Toroidal Hybrid (CTH) experiment are to: (1) investigate the dependence of plasma disruptive behavior on the level of applied 3D magnetic shaping, (2) test and advance 3D computational modeling tools in strongly shaped plasmas, and (3) study the implementation of a new island divertor. Progress towards these goals and other developments are summarized. The disruptive density limit is observed to exceed the Greenwald limit as the vacuum transform is increased, but a threshold for disruption avoidance is not observed. Low-q operation is routine, with low-q disruptions avoided when the vacuum transform is raised to the value of 0.07 or above. Application of vacuum transform has been demonstrated to reduce and eliminate the vertical drift of elongated discharges that would otherwise be vertically unstable. Current efforts at improved equilibrium reconstruction and diagnostic development will be overviewed. NIMROD is used to model the current ramp phase of CTH and 3D shaped sawtooth behavior. An island divertor design has begun with connection length studies and initial EMC3-Eirene results to model energy deposition on divertor plates located in an edge 1/3 island. This work is supported by U.S. Department of Energy Grant No. DE-FG02-00ER54610.

  14. Investigation of the leading and subleading high-energy behavior of hadron-hadron total cross sections using a best-fit analysis of hadronic scattering data

    NASA Astrophysics Data System (ADS)

    Giordano, M.; Meggiolaro, E.; Silva, P. V. R. G.

    2017-08-01

    In the present investigation we study the leading and subleading high-energy behavior of hadron-hadron total cross sections using a best-fit analysis of hadronic scattering data. The parametrization used for the hadron-hadron total cross sections at high energy is inspired by recent results obtained by Giordano and Meggiolaro [J. High Energy Phys. 03 (2014) 002, 10.1007/JHEP03(2014)002] using a nonperturbative approach in the framework of QCD, and it reads σ_tot ~ B ln^2 s + C ln s ln ln s. We critically investigate if B and C can be obtained by means of best fits to data for proton-proton and antiproton-proton scattering, including recent data obtained at the LHC, and also to data for other meson-baryon and baryon-baryon scattering processes. In particular, following the above-mentioned nonperturbative QCD approach, we also consider fits where the parameters B and C are set to B = κ B_th and C = κ C_th, where B_th and C_th are universal quantities related to the QCD stable spectrum, while κ (treated as an extra free parameter) is related to the asymptotic value of the ratio σ_el/σ_tot. Different possible scenarios are then considered and compared.
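A minimal sketch of the high-energy parametrization above and of a linear least-squares determination of B and C, using synthetic data with arbitrary coefficient values; the constant and lower-energy (Regge-type) terms present in real total cross-section fits are omitted:

```python
import numpy as np

def sigma_tot(s, B, C):
    """Leading + subleading high-energy form sigma_tot ~ B ln^2 s + C ln s ln ln s.
    s is taken in units where the scale s0 = 1 (e.g. GeV^2); constant and
    lower-energy terms present in real fits are omitted in this sketch."""
    L = np.log(s)
    return B * L**2 + C * L * np.log(L)

# Illustrative two-parameter least-squares fit to synthetic 'data'
# (B = 0.25, C = 1.2 are arbitrary, not the paper's fitted values)
s = np.geomspace(1e2, 1e8, 40)
y = sigma_tot(s, 0.25, 1.2)
A = np.column_stack([np.log(s)**2, np.log(s) * np.log(np.log(s))])
B_fit, C_fit = np.linalg.lstsq(A, y, rcond=None)[0]
print(B_fit, C_fit)
```

Because the model is linear in B and C, an ordinary least-squares solve recovers the coefficients directly; the real analysis additionally compares fit variants with B and C tied to the theoretical quantities B_th and C_th through κ.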

  15. Development of an Expression Vector to Overexpress or Downregulate Genes in Curvularia protuberata.

    PubMed

    Liu, Chengke; Cleckler, Blake; Morsy, Mustafa

    2018-05-05

    Curvularia protuberata, an endophytic fungus in the Ascomycota, provides plants with thermotolerance only when it carries a mycovirus known as Curvularia thermotolerance virus (CThTV), and forms a three-way symbiotic relationship among these organisms. Under heat stress, several genes are expressed differently between virus-free C. protuberata (VF) and C. protuberata carrying CThTV (AN). We developed an expression vector, pM2Z-fun, carrying a zeocin resistance gene driven by the ToxA promoter, to study gene functions in C. protuberata and to better understand this three-way symbiosis. Using this new 3.7-kb vector, five genes that are differentially expressed in C. protuberata, including genes involved in the trehalose, melanin, and catalase biosynthesis pathways, were successfully overexpressed or downregulated in the VF or AN C. protuberata strains, respectively. The VF overexpression lines showed higher metabolite and enzyme activity than the control VF strain. Furthermore, downregulation of expression of the same genes in the AN strain resulted in lower metabolite and enzyme activity than in the control AN strain. The newly generated expression vector, pM2Z-fun, has been successfully used to express target genes in C. protuberata and will be useful in further functional expression studies in other Ascomycota fungi.

  16. Effect of High Intensity Interval and Continuous Swimming Training on Body Mass Adiposity Level and Serum Parameters in High-Fat Diet Fed Rats.

    PubMed

    da Rocha, Guilherme L; Crisp, Alex H; de Oliveira, Maria R M; da Silva, Carlos A; Silva, Jadson O; Duarte, Ana C G O; Sene-Fiorese, Marcela; Verlengia, Rozangela

    2016-01-01

    This study aimed to investigate the effects of interval and continuous training on the body mass gain and adiposity levels of rats fed a high-fat diet. Forty-eight male Sprague-Dawley rats were randomly divided into two groups, standard diet and high-fat diet, and received their respective diets for a period of four weeks without exercise stimuli. After this period, the animals were randomly divided into six groups (n = 8): control standard diet (CS), control high-fat diet (CH), continuous training standard diet (CTS), continuous training high-fat diet (CTH), interval training standard diet (ITS), and interval training high-fat diet (ITH). The interval and continuous training consisted of a swimming exercise performed over eight weeks. CH rats had greater body mass gain, sum of adipose tissues mass, and lower serum high density lipoprotein values than CS. The trained groups showed lower values of feed intake, caloric intake, body mass gain, and adiposity levels compared with the CH group. No significant differences were observed between the trained groups (CTS versus ITS and CTH versus ITH) on body mass gains and adiposity levels. In conclusion, both training methodologies were shown to be effective in controlling body mass gain and adiposity levels in high-fat diet fed rats.

  17. Three-dimensional characterisation and simulation of deformation and damage during Taylor impact in PTFE

    NASA Astrophysics Data System (ADS)

    Resnyansky, A.; McDonald, S.; Withers, P.; Bourne, N.; Millett, J.; Brown, E.; Rae, P.

    2013-06-01

    Aerospace, defence and automotive applications of polymers and polymer matrix composites have placed these materials under increasingly extreme conditions. It is therefore important to understand the mechanical response of these multi-phase materials under high pressures and strain rates. Crucial to this is knowledge of the physical damage response in association with the phase transformations that occur during loading, and the ability to predict this via multi-phase simulation that takes thermodynamic non-equilibrium and strain-rate sensitivity into account. The current work presents Taylor impact experiments interrogating the effect of dynamic, high-pressure loading on polytetrafluoroethylene (PTFE). In particular, X-ray microtomography has been used to characterise the damage imparted to cylindrical samples by impact at different velocities. Distinct regions of deformation are present, controlled by fracture within the polymer, with the extent of the deformed region and the increasing propagation of fractures from the impact face showing a clear trend with increasing impact velocity. The experimental observations are discussed with respect to parallel multi-phase model predictions of the shock response, obtained with the CTH hydrocode from Taylor impact simulations.

  18. Time-Resolved Electronic Relaxation Processes in Self-Organized Quantum Dots

    DTIC Science & Technology

    2005-05-16

    in a quantum dot infrared photodetector ,” paper CthM11, presented at CLEO, Baltimore, 2003. K. Kim, T. Norris, J. Singh, P. Bhattacharya...nanostructures have been equally spectacular. Following the development of quantum-well infrared photodetectors in the late 1980’s and early 90’s...4]. The quantum cascade laser is of course the best known of the new devices, as it constitutes an entirely new concept in semiconductor laser

  19. Seeing Off the Bear: Anglo-American Air Power Cooperation During the Cold War,

    DTIC Science & Technology

    1995-01-01

    Force History and Museums Program United States Air Force Washington, D.C. 1995. Approved for public release; distribution... Air Power History Symposium began in late 1992 under the direction of General Bryce Poe II, President of the Air Force Historical Foundation, Air... Introduction and Welcome: General Bryce Poe II; Air Marshal Sir Frederick Sowrey... Opening

  20. Evaluation of the Material Point Method within CTH to Model 2-Dimensional Plate Impact Problems

    DTIC Science & Technology

    2014-09-01

    Howard University . 14. ABSTRACT The material point method (MPM) is a mixed Eulerian and Lagrangian computational method that allows for the... University in Washington, DC, as a second-year graduate student within mechanical engineering. I also attended Howard University for my undergraduate...Kevin Rugirello, Dr Andrew Tonge, Dr Jeffrey Lloyd, Dr Mary Jane Graham, and Dr Gbadebo Owolabi. vi Student Bio I am currently attending Howard

  1. 29 CFR 1910.144 - Safety color code for marking physical hazards.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 29 Labor 5 2010-07-01 2010-07-01 false Safety color code for marking physical hazards. 1910.144... § 1910.144 Safety color code for marking physical hazards. (a) Color identification—(1) Red. Red shall be... basic color for designating caution and for marking physical hazards such as: Striking against...

  2. FEMA Asteroid Impact Tabletop Exercise Simulations

    DOE PAGES

    Boslough, Mark; Jennings, Barbara; Carvey, Brad; ...

    2015-05-19

    We describe the computational simulations and damage assessments that we provided in support of a tabletop exercise (TTX) at the request of NASA's Near-Earth Objects Program Office. The overall purpose of the exercise was to assess leadership reactions, information requirements, and emergency management responses to a hypothetical asteroid impact with Earth. The scripted exercise consisted of discovery, tracking, and characterization of a hypothetical asteroid; inclusive of mission planning, mitigation, response, impact to population, infrastructure and GDP, and explicit quantification of uncertainty. Participants at the meeting included representatives of NASA, Department of Defense, Department of State, Department of Homeland Security/Federal Emergency Management Agency (FEMA), and the White House. The exercise took place at FEMA headquarters. Sandia's role was to assist the Jet Propulsion Laboratory (JPL) in developing the impact scenario, to predict the physical effects of the impact, and to forecast the infrastructure and economic losses. We ran simulations using Sandia's CTH hydrocode to estimate physical effects on the ground, and to produce contour maps indicating damage assessments that could be used as input for the infrastructure and economic models. We used the FASTMap tool to provide estimates of infrastructure damage over the affected area, and the REAcct tool to estimate the potential economic severity expressed as changes to GDP (by nation, region, or sector) due to damage and short-term business interruptions.

  3. FEMA Asteroid Impact Tabletop Exercise Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boslough, Mark; Jennings, Barbara; Carvey, Brad

    We describe the computational simulations and damage assessments that we provided in support of a tabletop exercise (TTX) at the request of NASA's Near-Earth Objects Program Office. The overall purpose of the exercise was to assess leadership reactions, information requirements, and emergency management responses to a hypothetical asteroid impact with Earth. The scripted exercise consisted of discovery, tracking, and characterization of a hypothetical asteroid; inclusive of mission planning, mitigation, response, impact to population, infrastructure and GDP, and explicit quantification of uncertainty. Participants at the meeting included representatives of NASA, Department of Defense, Department of State, Department of Homeland Security/Federal Emergency Management Agency (FEMA), and the White House. The exercise took place at FEMA headquarters. Sandia's role was to assist the Jet Propulsion Laboratory (JPL) in developing the impact scenario, to predict the physical effects of the impact, and to forecast the infrastructure and economic losses. We ran simulations using Sandia's CTH hydrocode to estimate physical effects on the ground, and to produce contour maps indicating damage assessments that could be used as input for the infrastructure and economic models. We used the FASTMap tool to provide estimates of infrastructure damage over the affected area, and the REAcct tool to estimate the potential economic severity expressed as changes to GDP (by nation, region, or sector) due to damage and short-term business interruptions.

  4. The Mystery Behind the Code: Differentiated Instruction with Quick Response Codes in Secondary Physical Education

    ERIC Educational Resources Information Center

    Adkins, Megan; Wajciechowski, Misti R.; Scantling, Ed

    2013-01-01

    Quick response codes, better known as QR codes, are small barcodes scanned to receive information about a specific topic. This article explains QR code technology and the utility of QR codes in the delivery of physical education instruction. Consideration is given to how QR codes can be used to accommodate learners of varying ability levels as…

  5. CORESAFE: A Formal Approach against Code Replacement Attacks on Cyber Physical Systems

    DTIC Science & Technology

    2018-04-19

    AFRL-AFOSR-JP-TR-2018-0035 CORESAFE:A Formal Approach against Code Replacement Attacks on Cyber Physical Systems Sandeep Shukla INDIAN INSTITUTE OF...Formal Approach against Code Replacement Attacks on Cyber Physical Systems 5a.  CONTRACT NUMBER 5b.  GRANT NUMBER FA2386-16-1-4099 5c.  PROGRAM ELEMENT...Institute of Technology Kanpur India Final Report for AOARD Grant “CORESAFE: A Formal Approach against Code Replacement Attacks on Cyber Physical

  6. A cross-sectional survey to evaluate knowledge, attitudes and practices (KAP) regarding seasonal influenza vaccination among European travellers to resource-limited destinations.

    PubMed

    Pfeil, Alena; Mütsch, Margot; Hatz, Christoph; Szucs, Thomas D

    2010-07-07

    Influenza is one of the most common vaccine-preventable diseases in travellers. By performing two cross-sectional questionnaire surveys during winter 2009 and winter 2010 among European travellers to resource-limited destinations, we aimed to investigate knowledge, attitudes and practices (KAP) regarding seasonal influenza vaccination. Questionnaires were distributed in the waiting room to the visitors of the University of Zurich Centre for Travel Health (CTH) in January and February 2009 and January 2010 prior to travel health counselling (CTH09 and CTH10). Questions included demographic data, travel-related characteristics and KAP regarding influenza vaccination. Data were analysed using SPSS version 14.0 for Windows. Differences in proportions were compared using the Chi-square test, and the significance level was set at p < 0.05. Age > 64 yrs (25, 21%) and recommendations of the family physician (27, 22.7%) were the most often reported reasons for being vaccinated. In the multiple logistic regression analyses of the pooled data, increasing age (OR = 1.03, 95% CI 1.01 - 1.04), a business trip (OR = 0.39, 95% CI 0.17 - 0.92) and seasonal influenza vaccination in the previous winter seasons (OR = 12.91, 95% CI 8.09 - 20.58) were independent predictors for seasonal influenza vaccination in 2009 or 2010. Influenza vaccination recommended by the family doctor (327, 37.7%), travel to regions with known high risk of influenza (305, 35.1%), and influenza vaccination required for job purposes (233, 26.8%) were the most frequently mentioned reasons to consider influenza vaccination. Risk perception and vaccination coverage concerning seasonal and pandemic influenza were very poor among travellers to resource-limited destinations when compared to traditional at-risk groups. Previous access to influenza vaccination substantially facilitated vaccination in the subsequent year. Information strategies about influenza should be intensified and include health professionals, e.g. family physicians, travel medicine practitioners and business enterprises.

  7. Structural variability of E. coli thioredoxin captured in the crystal structures of single-point mutants

    PubMed Central

    Noguera, Martín E.; Vazquez, Diego S.; Ferrer-Sueta, Gerardo; Agudelo, William A.; Howard, Eduardo; Rasia, Rodolfo M.; Manta, Bruno; Cousido-Siah, Alexandra; Mitschler, André; Podjarny, Alberto; Santos, Javier

    2017-01-01

    Thioredoxin is a ubiquitous small protein that catalyzes redox reactions of protein thiols. Additionally, thioredoxin from E. coli (EcTRX) is a widely-used model for structure-function studies. In a previous paper, we characterized several single-point mutants of the C-terminal helix (CTH) that alter the global stability of EcTRX. However, spectroscopic signatures and enzymatic activity for some of these mutants were found to be essentially unaffected. A comprehensive structural characterization at the atomic level of these near-invariant mutants can provide detailed information about the structural variability of EcTRX. We address this point through the determination of the crystal structures of four point mutants whose mutations occur within or near the CTH, namely L94A, E101G, N106A and L107A. These structures are mostly unaffected compared with the wild-type variant. Notably, the E101G mutant presents a large region with two alternative traces for the backbone of the same chain, representing a significant shift in backbone positions. Enzymatic activity measurements and conformational dynamics studies monitored by NMR and molecular dynamics simulations show that the E101G mutation results in only a small effect on the structural features of the protein. We hypothesize that these alternative conformations represent samples of the native-state ensemble of EcTRX, specifically the magnitude and location of conformational heterogeneity. PMID:28181556

  8. Chronotype differences in cortical thickness: grey matter reflects when you go to bed.

    PubMed

    Rosenberg, Jessica; Jacobs, Heidi I L; Maximov, Ivan I; Reske, Martina; Shah, N J

    2018-06-15

    Based on individual circadian cycles and associated cognitive rhythms, humans can be classified via standardised self-reports as being early (EC), late (LC) and intermediate (IC) chronotypes. Alterations in neural cortical structure underlying these chronotype differences have rarely been investigated and are the scope of this study. 16 healthy male ECs, 16 ICs and 16 LCs were measured with a 3 T MAGNETOM TIM TRIO (Siemens, Erlangen) scanner using a magnetization prepared rapid gradient echo sequence. Data were analysed by applying voxel-based morphometry (VBM) and vertex-wise cortical thickness (CTh) analysis. VBM analysis revealed that ECs showed significantly lower grey matter volumes bilateral in the lateral occipital cortex and the precuneus as compared to LCs, and in the right lingual gyrus, occipital fusiform gyrus and the occipital pole as compared to ICs. CTh findings showed lower grey matter volumes for ECs in the left anterior insula, precuneus, inferior parietal cortex, and right pars triangularis than for LCs, and in the right superior parietal gyrus than for ICs. These findings reveal that chronotype differences are associated with specific neural substrates of cortical thickness, surface areas, and folding. We conclude that this might be the basis for chronotype differences in behaviour and brain function. Furthermore, our results speak for the necessity of considering "chronotype" as a potentially modulating factor in all kinds of structural brain-imaging experiments.

  9. How theories became knowledge: Morgan's chromosome theory of heredity in America and Britain.

    PubMed

    Brush, Stephen G

    2002-01-01

    T. H. Morgan, A. H. Sturtevant, H. J. Muller and C. B. Bridges published their comprehensive treatise The Mechanism of Mendelian Heredity in 1915. By 1920 Morgan's "Chromosome Theory of Heredity" was generally accepted by geneticists in the United States, and by British geneticists by 1925. By 1930 it had been incorporated into most general biology, botany, and zoology textbooks as established knowledge. In this paper, I examine the reasons why it was accepted as part of a series of comparative studies of theory-acceptance in the sciences. In this context it is of interest to look at the persuasiveness of confirmed novel predictions, a factor often regarded by philosophers of science as the most important way to justify a theory. Here it turns out to play a role in the decision of some geneticists to accept the theory, but is generally less important than the CTH's ability to explain Mendelian inheritance, sex-linked inheritance, non-disjunction, and the connection between linkage groups and the number of chromosome pairs; in other words, to establish a firm connection between genetics and cytology. It is remarkable that geneticists were willing to accept the CTH as applicable to all organisms at a time when it had been confirmed only for Drosophila. The construction of maps showing the location on the chromosomes of genes for specific characters was especially convincing for non-geneticists.

  10. Effect of High Intensity Interval and Continuous Swimming Training on Body Mass Adiposity Level and Serum Parameters in High-Fat Diet Fed Rats

    PubMed Central

    da Rocha, Guilherme L.; Crisp, Alex H.; de Oliveira, Maria R. M.; da Silva, Carlos A.; Silva, Jadson O.; Duarte, Ana C. G. O.; Sene-Fiorese, Marcela; Verlengia, Rozangela

    2016-01-01

    This study aimed to investigate the effects of interval and continuous training on the body mass gain and adiposity levels of rats fed a high-fat diet. Forty-eight male Sprague-Dawley rats were randomly divided into two groups, standard diet and high-fat diet, and received their respective diets for a period of four weeks without exercise stimuli. After this period, the animals were randomly divided into six groups (n = 8): control standard diet (CS), control high-fat diet (CH), continuous training standard diet (CTS), continuous training high-fat diet (CTH), interval training standard diet (ITS), and interval training high-fat diet (ITH). The interval and continuous training consisted of a swimming exercise performed over eight weeks. CH rats had greater body mass gain, sum of adipose tissues mass, and lower serum high density lipoprotein values than CS. The trained groups showed lower values of feed intake, caloric intake, body mass gain, and adiposity levels compared with the CH group. No significant differences were observed between the trained groups (CTS versus ITS and CTH versus ITH) on body mass gains and adiposity levels. In conclusion, both training methodologies were shown to be effective in controlling body mass gain and adiposity levels in high-fat diet fed rats. PMID:26904718

  11. Structural variability of E. coli thioredoxin captured in the crystal structures of single-point mutants

    NASA Astrophysics Data System (ADS)

    Noguera, Martín E.; Vazquez, Diego S.; Ferrer-Sueta, Gerardo; Agudelo, William A.; Howard, Eduardo; Rasia, Rodolfo M.; Manta, Bruno; Cousido-Siah, Alexandra; Mitschler, André; Podjarny, Alberto; Santos, Javier

    2017-02-01

    Thioredoxin is a ubiquitous small protein that catalyzes redox reactions of protein thiols. Additionally, thioredoxin from E. coli (EcTRX) is a widely-used model for structure-function studies. In a previous paper, we characterized several single-point mutants of the C-terminal helix (CTH) that alter the global stability of EcTRX. However, spectroscopic signatures and enzymatic activity for some of these mutants were found to be essentially unaffected. A comprehensive structural characterization at the atomic level of these near-invariant mutants can provide detailed information about the structural variability of EcTRX. We address this point through the determination of the crystal structures of four point mutants whose mutations occur within or near the CTH, namely L94A, E101G, N106A and L107A. These structures are mostly unaffected compared with the wild-type variant. Notably, the E101G mutant presents a large region with two alternative traces for the backbone of the same chain, representing a significant shift in backbone positions. Enzymatic activity measurements and conformational dynamics studies monitored by NMR and molecular dynamics simulations show that the E101G mutation results in only a small effect on the structural features of the protein. We hypothesize that these alternative conformations represent samples of the native-state ensemble of EcTRX, specifically the magnitude and location of conformational heterogeneity.

  12. Structural changes of the brain in relation to occupational stress.

    PubMed

    Savic, Ivanka

    2015-06-01

    Despite mounting reports about the negative effects of chronic occupational stress on cognitive functions, it is still uncertain whether and how this type of stress is associated with cerebral changes. This issue was addressed in the present MRI study, in which cortical thickness (Cth) and subcortical volumes were compared between 40 subjects reporting symptoms of chronic occupational stress (38 ± 6 years) and 40 matched controls (36 ± 6 years). The degree of perceived stress was measured with the Maslach Burnout Inventory. In stressed subjects, there was a significant thinning of the mesial frontal cortex. When investigating the correlation between age and Cth, the thinning effect of age was more pronounced in the stressed group in the frontal cortex. Furthermore, their amygdala volumes were bilaterally increased (P = 0.020 and P = 0.003), whereas their caudate volumes were reduced (P = 0.040), accompanied by impaired fine motor function. The perceived stress correlated positively with the amygdala volumes (r = 0.44, P = 0.04; r = 0.43, P = 0.04). Occupational stress was found to be associated with cortical thinning as well as with selective changes of subcortical volumes, with behavioral correlates. The findings support the hypothesis that stress-related excitotoxicity might be an underlying mechanism, and that the described condition is a stress-related illness. © The Author 2013. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  13. Thomson scattering diagnostic system design for the Compact Toroidal Hybrid experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Traverso, P. J., E-mail: pjt0002@auburn.edu; Maurer, D. A.; Ennis, D. A.

    2014-11-15

    A new Thomson scattering system using standard commercially available components has been designed for the non-axisymmetric plasmas of the Compact Toroidal Hybrid (CTH). The beam, generated by a frequency-doubled Continuum PL DLS 2 J Nd:YAG laser, is passed vertically through an entrance Brewster window and an aperturing baffle system to minimize the stray laser light that could enter the collection optics. The beam line has been designed with an 8 m propagation distance to the mid-plane of the CTH device with the beam diameter kept less than 3 mm inside the plasma volume. The beam exits the vacuum system through another Brewster window and enters a beam dump, again to minimize the stray light in the vacuum chamber. Light collection, spectral processing, and signal detection are accomplished with an f/# ~ 1 aspheric lens, a commercially available Holospec f/1.8 spectrometer, and an Andor iStar DH740-18U-C3 image intensified camera. Spectral rejection of stray laser light, if needed, can be performed with the use of an optional interference filter at the spectrometer input. The system has been developed for initial single point measurements of plasmas with core electron temperatures of approximately 20–300 eV and densities of 5 × 10^18 to 5 × 10^19 m^-3, dependent upon operational scenario.

  14. Evaluation of multi-layer cloud detection based on MODIS CO2-slicing algorithm with CALIPSO-CloudSat measurements.

    NASA Astrophysics Data System (ADS)

    Viudez-Mora, A.; Kato, S.; Smith, W. L., Jr.; Chang, F. L.

    2016-12-01

    Knowledge of the vertical cloud distribution is important for a variety of climate and weather applications. Variations in cloud overlap greatly influence atmospheric heating/cooling rates, with implications for the surface-troposphere radiative balance, global circulation and precipitation. Additionally, accurate knowledge of the multi-layer cloud distribution in real time can be used in applications such as maintaining safe conditions for aviation through storms and adverse weather. In this study, we evaluate a multi-layered cloud algorithm (Chang et al. 2005) based on MODIS measurements aboard the Aqua satellite (MCF). This algorithm uses the CO2-slicing technique combined with cloud properties determined from VIS, IR and NIR channels to locate high thin clouds over low-level clouds and retrieve the optical thickness (τ) of each layer. We use cloud vertical profiles derived from CALIPSO (Winker et al., 2010) and CloudSat (Stephens et al., 2002) (CLCS), included in the C3M data product (Kato et al. 2010), to evaluate MCF-derived multi-layer cloud properties. We focus on two-layer overlapping and one-layer clouds identified by the active sensors and investigate how well these systems are identified by the MODIS multi-layer technique. The results show that for multi-layered clouds identified by CLCS, the MCF correctly identifies about 83% of the cases as multi-layer. However, the upper CTH is underestimated by about 2.6±0.4 km, because the CO2-slicing technique is not as sensitive to the physical cloud top as the CLCS. The lower CTHs agree better, with differences of about 1.2±0.5 km. Another outstanding issue for the MCF approach is the large number of multi-layer false alarms that occur in single-layer conditions. References: Chang, F.-L., and Z. Li, 2005: A new method for detection of cirrus overlapping water clouds and determination of their optical properties. J. Atmos. Sci., 62. Kato, S., et al. (2010), Relationships among cloud occurrence frequency, overlap, and effective thickness derived from CALIPSO and CloudSat merged cloud vertical profiles, J. Geophys. Res., 115. Stephens, G. L., et al. (2002), The CloudSat mission and A-Train, Bull. Am. Meteorol. Soc., 83. Winker, D. M., et al., 2010: The CALIPSO Mission: A global 3D view of aerosols and clouds. Bull. Amer. Meteor. Soc., 91.

  15. 29 CFR 1915.90 - Safety color code for marking physical hazards.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 29 Labor 7 2013-07-01 2013-07-01 false Safety color code for marking physical hazards. 1915.90 Section 1915.90 Labor Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH... General Working Conditions § 1915.90 Safety color code for marking physical hazards. The requirements...

  16. 29 CFR 1915.90 - Safety color code for marking physical hazards.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 29 Labor 7 2014-07-01 2014-07-01 false Safety color code for marking physical hazards. 1915.90 Section 1915.90 Labor Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH... General Working Conditions § 1915.90 Safety color code for marking physical hazards. The requirements...

  17. 29 CFR 1915.90 - Safety color code for marking physical hazards.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 29 Labor 7 2012-07-01 2012-07-01 false Safety color code for marking physical hazards. 1915.90 Section 1915.90 Labor Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH... General Working Conditions § 1915.90 Safety color code for marking physical hazards. The requirements...

  18. Thomson scattering diagnostic on the Compact Toroidal Hybrid Experiment

    NASA Astrophysics Data System (ADS)

    Traverso, P. J.; Ennis, D. A.; Hartwell, G. J.; Kring, J. D.; Maurer, D. A.

    2017-10-01

A Thomson scattering system is being commissioned for the non-axisymmetric plasmas of the Compact Toroidal Hybrid (CTH), a five-field period current-carrying torsatron. The system takes a single point measurement at the magnetic axis to both calibrate the two-color soft x-ray Te system and serve as an additional diagnostic for the V3FIT 3D equilibrium reconstruction code. A single point measurement will reduce the uncertainty in the reconstructed peak pressure by an order of magnitude for both current-carrying plasmas and future gyrotron-heated stellarator plasmas. The beam, generated by a frequency-doubled 2 J Continuum Nd:YAG laser, is passed vertically through an entrance Brewster window and a two-aperture optical baffle system to minimize stray light. Thomson scattered light is collected by two adjacent f/2 plano-convex condenser lenses and routed via a fiber bundle through a Holospec f/1.8 spectrograph. The red-shifted scattered light from 533-563 nm will be collected by an array of Hamamatsu H11706-40 PMTs. The system has been designed to measure plasmas with core Te of 100 to 200 eV and densities of 5×10¹⁸ to 5×10¹⁹ m⁻³. Stray light and calibration data for a single wavelength channel will be presented. This work is supported by U.S. Department of Energy Grant No. DE-FG02-00ER54610.

  19. 29 CFR 1910.144 - Safety color code for marking physical hazards.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 29 Labor 5 2013-07-01 2013-07-01 false Safety color code for marking physical hazards. 1910.144... § 1910.144 Safety color code for marking physical hazards. (a) Color identification—(1) Red. Red shall be the basic color for the identification of: (i) Fire protection equipment and apparatus. [Reserved] (ii...

  20. 29 CFR 1910.144 - Safety color code for marking physical hazards.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 29 Labor 5 2014-07-01 2014-07-01 false Safety color code for marking physical hazards. 1910.144... § 1910.144 Safety color code for marking physical hazards. (a) Color identification—(1) Red. Red shall be the basic color for the identification of: (i) Fire protection equipment and apparatus. [Reserved] (ii...

  1. 29 CFR 1910.144 - Safety color code for marking physical hazards.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 29 Labor 5 2012-07-01 2012-07-01 false Safety color code for marking physical hazards. 1910.144... § 1910.144 Safety color code for marking physical hazards. (a) Color identification—(1) Red. Red shall be the basic color for the identification of: (i) Fire protection equipment and apparatus. [Reserved] (ii...

  2. Assessing the Effects of Data Compression in Simulations Using Physically Motivated Metrics

    DOE PAGES

    Laney, Daniel; Langer, Steven; Weber, Christopher; ...

    2014-01-01

This paper examines whether lossy compression can be used effectively in physics simulations as a possible strategy to combat the expected data-movement bottleneck in future high performance computing architectures. We show that, for the codes and simulations we tested, compression levels of 3–5X can be applied without causing significant changes to important physical quantities. Rather than applying signal-processing error metrics, we utilize physics-based metrics appropriate for each code to assess the impact of compression. We evaluate three different simulation codes: a Lagrangian shock-hydrodynamics code, an Eulerian higher-order hydrodynamics turbulence modeling code, and an Eulerian coupled laser-plasma interaction code. We compress relevant quantities after each time-step to approximate the effects of tightly coupled compression and study the compression rates to estimate memory and disk-bandwidth reduction. We find that the error characteristics of compression algorithms must be carefully considered in the context of the underlying physics being modeled.
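As a hedged illustration of this idea (the paper's actual codes and compressors are not shown here), the sketch below uses uniform quantization as a crude stand-in compressor and a physics-motivated metric: the relative drift in a field's total "energy" (sum of squares). All names and the mock density field are assumptions for the example.

```python
import numpy as np

def quantize(field, bits):
    """Crude lossy 'compression': uniform quantization to 2**bits levels.
    Stands in for a real compressor purely to illustrate the metric."""
    lo, hi = field.min(), field.max()
    levels = 2 ** bits - 1
    q = np.round((field - lo) / (hi - lo) * levels)
    return lo + q / levels * (hi - lo)

def energy_drift(field, bits):
    """Physics-based metric: relative change in the field's total
    'energy' (sum of squares) introduced by lossy compression."""
    e0 = np.sum(field ** 2)
    e1 = np.sum(quantize(field, bits) ** 2)
    return abs(e1 - e0) / e0

# Mock density field standing in for a simulation quantity
rng = np.random.default_rng(0)
rho = 1.0 + 0.1 * rng.standard_normal((64, 64))
drift_8bit = energy_drift(rho, 8)
```

A real study would apply such a metric after every time step of a running simulation, as the abstract describes, rather than to a static snapshot.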

  3. Software Tools for Stochastic Simulations of Turbulence

    DTIC Science & Technology

    2015-08-28

A client interface to FTI. Specific client programs using this interface include the weather forecasting code WRF; the high energy physics code, FLASH; and two locally constructed fluid ...

  4. Processing Motion: Using Code to Teach Newtonian Physics

    NASA Astrophysics Data System (ADS)

    Massey, M. Ryan

    Prior to instruction, students often possess a common-sense view of motion, which is inconsistent with Newtonian physics. Effective physics lessons therefore involve conceptual change. To provide a theoretical explanation for concepts and how they change, the triangulation model brings together key attributes of prototypes, exemplars, theories, Bayesian learning, ontological categories, and the causal model theory. The triangulation model provides a theoretical rationale for why coding is a viable method for physics instruction. As an experiment, thirty-two adolescent students participated in summer coding academies to learn how to design Newtonian simulations. Conceptual and attitudinal data was collected using the Force Concept Inventory and the Colorado Learning Attitudes about Science Survey. Results suggest that coding is an effective means for teaching Newtonian physics.

  5. Advanced Multi-Physics (AMP)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Philip, Bobby

    2012-06-01

    The Advanced Multi-Physics (AMP) code, in its present form, will allow a user to build a multi-physics application code for existing mechanics and diffusion operators and extend them with user-defined material models and new physics operators. There are examples that demonstrate mechanics, thermo-mechanics, coupled diffusion, and mechanical contact. The AMP code is designed to leverage a variety of mathematical solvers (PETSc, Trilinos, SUNDIALS, and AMP solvers) and mesh databases (LibMesh and AMP) in a consistent interchangeable approach.

  6. A methodology for the rigorous verification of plasma simulation codes

    NASA Astrophysics Data System (ADS)

    Riva, Fabio

    2016-10-01

The methodology used to assess the reliability of numerical simulation codes constitutes the Verification and Validation (V&V) procedure. V&V comprises two separate tasks: verification, a mathematical exercise that assesses whether the physical model is correctly solved, and validation, which determines the consistency of the code results, and therefore of the physical model, with experimental data. In the present talk we focus on verification, which in turn comprises code verification, targeted at assessing that a physical model is correctly implemented in a simulation code, and solution verification, which quantifies the numerical error affecting a simulation. Bridging the gap between plasma physics and other scientific domains, we introduced for the first time in our domain a rigorous methodology for code verification, based on the method of manufactured solutions, as well as a solution verification based on Richardson extrapolation. This methodology was applied to GBS, a three-dimensional fluid code based on a finite difference scheme, used to investigate plasma turbulence in basic plasma physics experiments and in the tokamak scrape-off layer. Overcoming the difficulty of dealing with a numerical method intrinsically affected by statistical noise, we have now generalized the rigorous verification methodology to simulation codes based on the particle-in-cell algorithm, which are employed to solve the Vlasov equation in the investigation of a number of plasma physics phenomena.
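The solution-verification step based on Richardson extrapolation can be sketched as follows. The refinement ratio, the manufactured second-order data, and the function names are illustrative assumptions, not the GBS implementation:

```python
import math

def observed_order(f_coarse, f_medium, f_fine, r):
    """Observed order of convergence p from solutions on three grids
    with constant refinement ratio r."""
    return math.log(abs(f_coarse - f_medium) / abs(f_medium - f_fine)) / math.log(r)

def richardson_extrapolate(f_medium, f_fine, r, p):
    """Richardson estimate of the grid-converged value from the two
    finest solutions and the observed order p."""
    return f_fine + (f_fine - f_medium) / (r ** p - 1)

# Manufactured data: exact value 1.0 approached with error C*h^2
exact, C = 1.0, 0.3
f = [exact + C * h ** 2 for h in (0.4, 0.2, 0.1)]  # h halved each time

p = observed_order(*f, r=2)                        # recovers p ≈ 2
f_ext = richardson_extrapolate(f[1], f[2], r=2, p=p)
```

With the method of manufactured solutions, `exact` is known by construction, so the discretization error and observed order can be checked directly against the formal order of the scheme.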

  7. Psychopathy Moderates the Relationship between Orbitofrontal and Striatal Alterations and Violence: The Investigation of Individuals Accused of Homicide

    PubMed Central

    Lam, Bess Y. H.; Yang, Yaling; Schug, Robert A.; Han, Chenbo; Liu, Jianghong; Lee, Tatia M. C.

    2017-01-01

Brain structural abnormalities in the orbitofrontal cortex (OFC) and striatum (caudate and putamen) have been observed in violent individuals. However, prior work has taken a uni-modal neuroimaging perspective, and findings have been mixed. The present study takes a multimodal structural brain imaging approach to investigate differences in gray matter volume (GMV) and cortical thickness (CTh) in the OFC and striatum between violent (accused of homicide) and non-violent (not accused of any violent crimes) individuals with different levels of psychopathic traits (factor 1 psychopathy: interpersonal and unemotional qualities; factor 2 psychopathy: antisocial disposition and impulsivity). Structural magnetic resonance imaging data, psychopathy scores, and demographic information were assessed in sixty-seven non-violent or violent adults. The results showed that the relationship between violence and GMV in the right lateral OFC varied across levels of factor 1 psychopathy. At the subcortical level, factor 1 psychopathy moderated the positive relationship of violence with both left and right putamen GMV as well as left caudate GMV. Although the CTh findings were not significant, the overall findings suggest that psychopathic traits moderate the relationship between violence and brain structural morphology in the OFC and striatum. In conclusion, psychopathy plays a significant role in moderating violent behavior, which can inform the design and implementation of prevention measures targeting violent acts and thereby possibly mitigate their occurrence in society. PMID:29249948

  8. Differential diagnosis of breast masses in South Korean premenopausal women using diffuse optical spectroscopic imaging

    NASA Astrophysics Data System (ADS)

    Leproux, Anaïs; Kim, You Me; Min, Jun Won; McLaren, Christine E.; Chen, Wen-Pin; O'Sullivan, Thomas D.; Lee, Seung-ha; Chung, Phil-Sang; Tromberg, Bruce J.

    2016-07-01

Young patients with dense breasts have a relatively low positive biopsy rate for breast cancer (~1 in 7). South Korean women have higher breast density than Westerners. We investigated the benefit of using a functional and metabolic imaging technique, diffuse optical spectroscopic imaging (DOSI), to complement standard-of-care imaging tools in distinguishing benign from malignant lesions in premenopausal Korean women. DOSI uses near-infrared light to measure breast tissue composition by quantifying tissue concentrations of water (ctH2O), bulk lipid (ctLipid), deoxygenated (ctHHb), and oxygenated (ctHbO2) hemoglobin. DOSI spectral signatures specific to abnormal tissue and absent in healthy tissue were also used to form a malignancy index. This study included 19 premenopausal subjects (average age 41±9), corresponding to 11 benign and 10 malignant lesions. Elevated lesion-to-normal ratios of ctH2O, ctHHb, ctHbO2, total hemoglobin (THb=ctHHb+ctHbO2), and tissue optical index (ctHHb×ctH2O/ctLipid) were observed in the malignant lesions compared to the benign lesions (p<0.02). THb and the malignancy index were the two best single predictors of malignancy, with >90% sensitivity and specificity. Malignant lesions showed significantly higher metabolism and perfusion than benign lesions. DOSI spectral features showed high discriminatory power for distinguishing malignant and benign lesions in dense breasts of the Korean population.
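The composite indices named in the abstract are simple arithmetic on the retrieved chromophore concentrations. A minimal sketch, with made-up concentration values chosen purely for illustration (not the study's data):

```python
def total_hemoglobin(ctHHb, ctHbO2):
    """THb = ctHHb + ctHbO2, as defined in the abstract (micromolar)."""
    return ctHHb + ctHbO2

def tissue_optical_index(ctHHb, ctH2O, ctLipid):
    """Tissue optical index TOI = ctHHb * ctH2O / ctLipid."""
    return ctHHb * ctH2O / ctLipid

def lesion_to_normal(lesion, normal):
    """Ratio comparing a lesion value to the matching normal-tissue value."""
    return lesion / normal

# Illustrative values: malignant lesions tend to show elevated THb and TOI
thb_lesion = total_hemoglobin(12.0, 28.0)   # hypothetical lesion, in uM
thb_normal = total_hemoglobin(6.0, 18.0)    # hypothetical normal tissue
ratio = lesion_to_normal(thb_lesion, thb_normal)
```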

  9. The FLUKA Code: An Overview

    NASA Technical Reports Server (NTRS)

    Ballarini, F.; Battistoni, G.; Campanella, M.; Carboni, M.; Cerutti, F.; Empl, A.; Fasso, A.; Ferrari, A.; Gadioli, E.; Garzelli, M. V.; hide

    2006-01-01

    FLUKA is a multipurpose Monte Carlo code which can transport a variety of particles over a wide energy range in complex geometries. The code is a joint project of INFN and CERN: part of its development is also supported by the University of Houston and NASA. FLUKA is successfully applied in several fields, including but not only, particle physics, cosmic ray physics, dosimetry, radioprotection, hadron therapy, space radiation, accelerator design and neutronics. The code is the standard tool used at CERN for dosimetry, radioprotection and beam-machine interaction studies. Here we give a glimpse into the code physics models with a particular emphasis to the hadronic and nuclear sector.

  10. Standard interface files and procedures for reactor physics codes, version III

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carmichael, B.M.

    Standards and procedures for promoting the exchange of reactor physics codes are updated to Version-III status. Standards covering program structure, interface files, file handling subroutines, and card input format are included. The implementation status of the standards in codes and the extension of the standards to new code areas are summarized. (15 references) (auth)

  11. ALE3D: An Arbitrary Lagrangian-Eulerian Multi-Physics Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Noble, Charles R.; Anderson, Andrew T.; Barton, Nathan R.

    ALE3D is a multi-physics numerical simulation software tool utilizing arbitrary-Lagrangian- Eulerian (ALE) techniques. The code is written to address both two-dimensional (2D plane and axisymmetric) and three-dimensional (3D) physics and engineering problems using a hybrid finite element and finite volume formulation to model fluid and elastic-plastic response of materials on an unstructured grid. As shown in Figure 1, ALE3D is a single code that integrates many physical phenomena.

  12. Physics Verification Overview

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Doebling, Scott William

    The purpose of the verification project is to establish, through rigorous convergence analysis, that each ASC computational physics code correctly implements a set of physics models and algorithms (code verification); Evaluate and analyze the uncertainties of code outputs associated with the choice of temporal and spatial discretization (solution or calculation verification); and Develop and maintain the capability to expand and update these analyses on demand. This presentation describes project milestones.

  13. A cross-sectional survey to evaluate knowledge, attitudes and practices (KAP) regarding seasonal influenza vaccination among European travellers to resource-limited destinations

    PubMed Central

    2010-01-01

Background Influenza is one of the most common vaccine-preventable diseases in travellers. By performing two cross-sectional questionnaire surveys during winter 2009 and winter 2010 among European travellers to resource-limited destinations, we aimed to investigate knowledge, attitudes and practices (KAP) regarding seasonal influenza vaccination. Methods Questionnaires were distributed in the waiting room to visitors of the University of Zurich Centre for Travel Health (CTH) in January and February 2009 and January 2010, prior to travel health counselling (CTH09 and CTH10). Questions covered demographic data, travel-related characteristics and KAP regarding influenza vaccination. Data were analysed using SPSS® version 14.0 for Windows. Differences in proportions were compared using the Chi-square test, with the significance level set at p ≤ 0.05. Predictors for seasonal and pandemic influenza vaccination were determined by multiple logistic regression analyses. Results With a response rate of 96.6%, 906 individuals were enrolled and 868 (92.5%) provided complete data. Seasonal influenza vaccination coverage was 13.7% (n = 119). Only 43 (14.2%) participants were vaccinated against pandemic influenza A/H1N1, mostly having received both vaccines, seasonal and pandemic, simultaneously. Job-related purposes (44, 37%), age > 64 yrs (25, 21%) and recommendations of the family physician (27, 22.7%) were the most often reported reasons for being vaccinated. In the multiple logistic regression analyses of the pooled data, increasing age (OR = 1.03, 95% CI 1.01 - 1.04), a business trip (OR = 0.39, 95% CI 0.17 - 0.92) and seasonal influenza vaccination in the previous winter seasons (OR = 12.91, 95% CI 8.09 - 20.58) were independent predictors for seasonal influenza vaccination in 2009 or 2010. 
Influenza vaccination recommended by the family doctor (327, 37.7%), travel to regions with known high risk of influenza (305, 35.1%), and influenza vaccination required for job purposes (233, 26.8%) were the reasons most frequently mentioned for considering influenza vaccination. Conclusions Risk perception and vaccination coverage concerning seasonal and pandemic influenza were very poor among travellers to resource-limited destinations when compared to traditional at-risk groups. Previous access to influenza vaccination substantially facilitated vaccination in the subsequent year. Information strategies about influenza should be intensified and should include health professionals, e.g. family physicians, travel medicine practitioners and business enterprises. PMID:20609230

  14. Shock compression response of highly reactive Ni + Al multilayered thin foils

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kelly, Sean C.; Thadhani, Naresh N., E-mail: naresh.thadhani@mse.gatech.edu

    2016-03-07

The shock-compression response of Ni + Al multilayered thin foils is investigated using laser-accelerated thin-foil plate-impact experiments over the pressure range of 2 to 11 GPa. The foils contain alternating Ni and Al layers (parallel but not flat) of nominally 50 nm bilayer spacing. The goal is to determine the equation of state and shock-induced reactivity of these highly reactive, fully dense thin-foil materials. The laser-accelerated thin-foil impact set-up combined photon Doppler velocimetry, to monitor the acceleration and impact velocity of an aluminum flyer, with VISAR interferometry, to monitor the back free-surface velocity of the impacted Ni + Al multilayered target. The shock-compression response of the Ni + Al target foils was determined from the experimentally measured parameters using an impedance-matching approach, with error bars identified considering systematic and experimental errors. Meso-scale CTH shock simulations were performed using real imported microstructures of the cross-sections of the multilayered Ni + Al foils to compute the Hugoniot response (assuming no reaction) for correlation with their experimentally determined equation of state. It was observed that at particle velocities below ∼150 m/s, the experimentally determined equation-of-state trend matches the CTH-predicted inert response and is consistent with the observed unreacted state of the recovered Ni + Al target foils from this velocity regime. At higher particle velocities, the experimentally determined equation of state deviates from the CTH-predicted inert response. A complete and self-sustained reaction is also seen in targets recovered from experiments performed at these higher particle velocities. 
The deviation in the measured equation of state, toward higher shock speeds and expanded volumes, combined with the observation of complete reaction in the recovered multilayered foils, confirmed via microstructure characterization, is indicative of shock-induced chemical reaction occurring on the time-scale of the high-pressure state. TEM characterization of recovered shock-compressed (unreacted) Ni + Al multilayered foils exhibits distinct features of constituent mixing, revealing jetted layers and inter-mixed regions. These features were primarily observed in the proximity of the undulations present in the alternating layers of the Ni + Al starting foils, suggesting the important role of such instabilities in promoting shock-induced intermetallic-forming reactions in the fully dense, highly exothermic multilayered thin foils.
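The impedance-matching construction used to reduce plate-impact data can be sketched as follows, assuming linear Us-up Hugoniots (Us = c0 + s·up) for flyer and target. The material parameters below are nominal handbook-style values used purely for illustration, not the paper's fitted data:

```python
def hugoniot_pressure(rho0, c0, s, up):
    """Shock pressure P = rho0 * Us * up with linear Us = c0 + s*up.
    Units of (g/cm^3, km/s) give P in GPa."""
    return rho0 * (c0 + s * up) * up

def impedance_match(flyer, target, v_impact, tol=1e-10):
    """Interface particle velocity up where the flyer's release pressure
    at (v_impact - up) balances the target's shock pressure at up.
    Solved by bisection; flyer/target are (rho0, c0, s) tuples."""
    lo, hi = 0.0, v_impact
    while hi - lo > tol:
        up = 0.5 * (lo + hi)
        p_flyer = hugoniot_pressure(*flyer, v_impact - up)
        p_target = hugoniot_pressure(*target, up)
        if p_flyer > p_target:
            lo = up
        else:
            hi = up
    return up

# Illustrative aluminum-like parameters (rho0 g/cm^3, c0 km/s, s)
al = (2.70, 5.35, 1.34)
up = impedance_match(al, al, 0.5)   # symmetric impact: up = v_impact / 2
```

A symmetric impact is a convenient sanity check, since the interface particle velocity must then be exactly half the impact velocity.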

  15. Breaking the Code: The Creative Use of QR Codes to Market Extension Events

    ERIC Educational Resources Information Center

    Hill, Paul; Mills, Rebecca; Peterson, GaeLynn; Smith, Janet

    2013-01-01

    The use of smartphones has drastically increased in recent years, heralding an explosion in the use of QR codes. The black and white square barcodes that link the physical and digital world are everywhere. These simple codes can provide many opportunities to connect people in the physical world with many of Extension online resources. The…

  16. Noncoherent Physical-Layer Network Coding with FSK Modulation: Relay Receiver Design Issues

    DTIC Science & Technology

    2011-03-01

    222 IEEE TRANSACTIONS ON COMMUNICATIONS, VOL. 59, NO. 9, SEPTEMBER 2011 2595 Noncoherent Physical-Layer Network Coding with FSK Modulation: Relay... noncoherent reception, channel estima- tion. I. INTRODUCTION IN the two-way relay channel (TWRC), a pair of sourceterminals exchange information...2011 4. TITLE AND SUBTITLE Noncoherent Physical-Layer Network Coding with FSK Modulation:Relay Receiver Design Issues 5a. CONTRACT NUMBER 5b

  17. Assessment of the prevailing physics codes: LEOPARD, LASER, and EPRI-CELL

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lan, J.S.

    1981-01-01

To analyze core performance and fuel management, it is necessary to verify reactor physics codes in great detail. Such work not only builds understanding and control of the characteristics of each code, but also ensures continued reliability as codes change through ongoing modifications and machine transfers. This paper presents the results of a comprehensive verification of three code packages: LEOPARD, LASER, and EPRI-CELL.

  18. Climate and Man (Selected Articles),

    DTIC Science & Technology

    1985-08-09

[OCR fragments of a station table for the Caucasus region: Large Caucasus; true altitude, m; stations Alibek, Sheki, Zakataly, Yevlakh, Kirovabad, Zurnabad, Shusha, Kedabek. Unrecoverable OCR of Russian bibliographic text omitted.] MACHINE TRANSLATION FTD-ID(RS)T-1542-84, 9 August 1985. CLIMATE AND MAN (Selected Articles). English pages: 171. Source: Klimat

  19. Some Mechanical and Ballistic Properties of Titanium and Titanium Alloys

    DTIC Science & Technology

    1950-03-07

... treated alloy steel armor justifies high expectations that titanium alloys may make excellent armor materials. The corrosion resistant properties of ... [Remaining OCR fragments: a table of tensile strengths (psi) and specific gravities for pure metals and heat-treated alloys, and etchant recipes based on HCl solutions.]

  20. Diet and Physical Activity Intervention Strategies for College Students

    PubMed Central

    Martinez, Yannica Theda S.; Harmon, Brook E.; Bantum, Erin O.; Strayhorn, Shaila

    2016-01-01

    Objectives To understand perceived barriers of a diverse sample of college students and their suggestions for interventions aimed at healthy eating, cooking, and physical activity. Methods Forty students (33% Asian American, 30% mixed ethnicity) were recruited. Six focus groups were audio-recorded, transcribed, and coded. Coding began with a priori codes, but allowed for additional codes to emerge. Analysis of questionnaires on participants’ dietary and physical activity practices and behaviors provided context for qualitative findings. Results Barriers included time, cost, facility quality, and intimidation. Tailoring towards a college student’s lifestyle, inclusion of hands-on skill building, and online support and resources were suggested strategies. Conclusions Findings provide direction for diet and physical activity interventions and policies aimed at college students. PMID:28480225

  1. ALICE: A non-LTE plasma atomic physics, kinetics and lineshape package

    NASA Astrophysics Data System (ADS)

    Hill, E. G.; Pérez-Callejo, G.; Rose, S. J.

    2018-03-01

    All three parts of an atomic physics, atomic kinetics and lineshape code, ALICE, are described. Examples of the code being used to model the emissivity and opacity of plasmas are discussed and interesting features of the code which build on the existing corpus of models are shown throughout.

  2. High-speed multi-frame laser Schlieren for visualization of explosive events

    NASA Astrophysics Data System (ADS)

    Clarke, S. A.; Murphy, M. J.; Landon, C. D.; Mason, T. A.; Adrian, R. J.; Akinci, A. A.; Martinez, M. E.; Thomas, K. A.

    2007-09-01

High-speed multi-frame laser Schlieren is used for visualization of a range of explosive and non-explosive events. Schlieren is a well-known technique for visualizing shock phenomena in transparent media. Laser backlighting and a framing camera allow for Schlieren images with very short (down to 5 ns) exposure times, band-pass filtering to block out explosive self-light, and 14 frames of a single explosive event. This diagnostic has been applied to several explosive initiation events, such as exploding bridgewires (EBW), Exploding Foil Initiators (EFI) (or slappers), Direct Optical Initiation (DOI), and ElectroStatic Discharge (ESD). Additionally, a series of tests has been performed on "cut-back" detonators with varying initial pressing (IP) heights. We have also used this diagnostic to visualize a range of EBW, EFI, and DOI full-up detonators, as well as other explosive events such as explosively driven metal shock experiments and explosively driven microjets. Future applications to other explosive events such as boosters and IHE booster evaluation will be discussed. Simulation codes (EPIC, CTH) have been used to analyze the Schlieren images, inferring likely boundary or initial conditions to determine the temporal-spatial pressure profile across the output face of the detonator. These experiments are part of a phased plan to understand the evolution of detonation in a detonator from initiation shock, through run to detonation, to full detonation, to transition to booster and booster detonation.

  3. Terminal ballistics of a reduced-mass penetrator. Final report, January 1990--December 1995

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Silsby, G.F.

    1996-07-01

This report presents the results of an experimental program to examine the performance of a reduced-mass concept penetrator impacting semi-infinite rolled homogeneous armor (RHA) at normal incidence. The reduced-mass penetrator used in this program is a solid tungsten alloy rod with eight holes drilled parallel to its axis, equally spaced on a circle, with axes parallel to the rod axis. Its performance was contrasted with baseline data for length-to-diameter ratio (L/D) 4 and 5 solid tungsten alloy penetrators. Striking velocity was nominally 1.6 km/s. A determined effort to reduce the scatter in the data by analysis of collateral data from the US Army Research Laboratory (ARL) and literature sources suggested only a rather weak influence of L/D on penetration even at L/Ds approaching 1 and provided a tentative relationship to remove the influence of target lateral edge effects. It tightened up the holed-out rod data enough to be able to conclude with a moderate degree of certainty that there was no improvement in penetration as suggested by a simplistic density law model. A companion work by Kimsey of ARL examines the performance of this novel penetrator concept computationally, using the Eulerian code CTH. His work explains the possible causes of reduced performance suggested by analysis by Zook and Frank of ARL, though with some relative improvement in performance at higher velocities.

  4. Dependency graph for code analysis on emerging architectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shashkov, Mikhail Jurievich; Lipnikov, Konstantin

The directed acyclic graph (DAG) of dependencies is becoming the standard for modern multi-physics codes. The ideal DAG is the true block scheme of a multi-physics code and is therefore a convenient object for in situ analysis of the cost of computations and of algorithmic bottlenecks related to statistically frequent data motion and the dynamical machine state.
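A minimal sketch of the idea, using a hypothetical set of physics operators and per-operator costs (the operator names and numbers are invented for illustration); Python's standard `graphlib` provides the topological ordering of such a DAG:

```python
from graphlib import TopologicalSorter

# Hypothetical block scheme: operator -> set of prerequisite operators
dag = {
    "mesh_update": set(),
    "diffusion":   {"mesh_update"},
    "mechanics":   {"mesh_update"},
    "contact":     {"mechanics"},
    "energy_sum":  {"diffusion", "contact"},
}
# Hypothetical per-operator costs (e.g., seconds per step)
cost = {"mesh_update": 1.0, "diffusion": 4.0, "mechanics": 3.0,
        "contact": 2.0, "energy_sum": 0.5}

# Valid execution order (dependencies always come first)
order = list(TopologicalSorter(dag).static_order())

def critical_path(dag, cost):
    """Cost of the most expensive dependency chain: a simple
    bottleneck metric for the block scheme."""
    finish = {}
    for node in TopologicalSorter(dag).static_order():
        finish[node] = cost[node] + max((finish[d] for d in dag[node]), default=0.0)
    return max(finish.values())
```

In this toy graph the mechanics-contact chain dominates the critical path, identifying where an in situ analysis would look first for bottlenecks.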

  5. Development of the US3D Code for Advanced Compressible and Reacting Flow Simulations

    NASA Technical Reports Server (NTRS)

    Candler, Graham V.; Johnson, Heath B.; Nompelis, Ioannis; Subbareddy, Pramod K.; Drayna, Travis W.; Gidzak, Vladimyr; Barnhardt, Michael D.

    2015-01-01

Aerothermodynamics and hypersonic flows involve complex multi-disciplinary physics, including finite-rate gas-phase kinetics, finite-rate internal energy relaxation, gas-surface interactions with finite-rate oxidation and sublimation, transition to turbulence, large-scale unsteadiness, shock-boundary layer interactions, fluid-structure interactions, and thermal protection system ablation and thermal response. Many of these flows span a large range of length and time scales, requiring large computational grids, implicit time integration, and long solution run times. The University of Minnesota NASA US3D code was designed for the simulation of these complex, highly coupled flows. It has many of the features of the well-established DPLR code, but uses unstructured grids and has many advanced numerical capabilities and physical models for multi-physics problems. The main capabilities of the code are described, the physical modeling approaches are discussed, the different types of numerical flux functions and time integration approaches are outlined, and the parallelization strategy is overviewed. Comparisons between US3D and the NASA DPLR code are presented, and several advanced simulations are presented to illustrate some of the novel features of the code.

  6. Porous Zirconium-Furandicarboxylate Microspheres for Efficient Redox Conversion of Biofuranics.

    PubMed

    Li, Hu; Liu, Xiaofang; Yang, Tingting; Zhao, Wenfeng; Saravanamurugan, Shunmugavel; Yang, Song

    2017-04-22

    Biofuranic compounds, typically derived from C5 and C6 carbohydrates, have been extensively studied as promising alternatives to chemicals based on fossil resources. The present work reports the simple assembly of biobased 2,5-furandicarboxylic acid (FDCA) with different metal ions to prepare a range of metal-FDCA hybrids under hydrothermal conditions. The hybrid materials were demonstrated to have porous structure and acid-base bifunctionality. Zr-FDCA-T, in particular, showed a microspheric structure, high thermostability (ca. 400 °C), average pore diameters of approximately 4.7 nm, large density, moderate strength of Lewis-base/acid centers (ca. 1.4 mmol g⁻¹), and a small number of Brønsted-acid sites. This material afforded almost quantitative yields of biofuranic alcohols from the corresponding aldehydes under mild conditions through catalytic transfer hydrogenation (CTH). Isotopic ¹H NMR spectroscopy and kinetic studies verified that direct hydride transfer was the dominant pathway and rate-determining step of the CTH. Importantly, the Zr-FDCA-T microspheres could be recycled with no decrease in catalytic performance and little leaching of active sites. Moreover, good yields of C5 (i.e., furfural) or C4 products [i.e., maleic acid and 2(5H)-furanone] could be obtained from furfuryl alcohol without oxidation of the furan ring over these metal-FDCA hybrids. The content and ratio of Lewis-acid/base sites were demonstrated to dominantly affect the catalytic performance of these redox reactions. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  7. Clouds over the summertime Sahara: an evaluation of Met Office retrievals from Meteosat Second Generation using airborne remote sensing

    NASA Astrophysics Data System (ADS)

    Kealy, John C.; Marenco, Franco; Marsham, John H.; Garcia-Carreras, Luis; Francis, Pete N.; Cooke, Michael C.; Hocking, James

    2017-05-01

    Novel methods of cloud detection are applied to airborne remote sensing observations from the unique Fennec aircraft dataset, to evaluate the Met Office-derived products on cloud properties over the Sahara based on the Spinning Enhanced Visible and InfraRed Imager (SEVIRI) on-board the Meteosat Second Generation (MSG) satellite. Two cloud mask configurations are considered, as well as the retrievals of cloud-top height (CTH), and these products are compared to airborne cloud remote sensing products acquired during the Fennec campaign in June 2011 and June 2012. Most detected clouds (67 % of the total) have a horizontal extent that is smaller than a SEVIRI pixel (3 km × 3 km). We show that, when partially cloud-contaminated pixels are included, a match between the SEVIRI and aircraft datasets is found in 80 ± 8 % of the pixels. Moreover, under clear skies the datasets are shown to agree for more than 90 % of the pixels. The mean cloud field, derived from the satellite cloud mask acquired during the Fennec flights, shows that areas of high surface albedo and orography are preferred sites for Saharan cloud cover, consistent with published theories. Cloud-top height retrievals however show large discrepancies over the region, which are ascribed to limiting factors such as the cloud horizontal extent, the derived effective cloud amount, and the absorption by mineral dust. The results of the CTH analysis presented here may also have further-reaching implications for the techniques employed by other satellite applications facilities across the world.
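
    The pixel-match statistic quoted above (80 ± 8 %) can be sketched on synthetic cloud masks; the values below are illustrative, not the SEVIRI or aircraft data.

```python
# Two hypothetical cloud masks over the same ten pixels (1 = cloudy).
seviri = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]
aircraft = [1, 0, 0, 1, 0, 0, 1, 1, 1, 0]

# A pixel "matches" when both datasets agree on its cloud state.
matches = sum(a == b for a, b in zip(seviri, aircraft))
agreement_pct = 100.0 * matches / len(seviri)
```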

  8. Cloud base and top heights in the Hawaiian region determined with satellite and ground-based measurements

    NASA Astrophysics Data System (ADS)

    Zhang, Chunxi; Wang, Yuqing; Lauer, Axel; Hamilton, Kevin; Xie, Feiqin

    2012-08-01

    We present a multi-year climatology of cloud-base-height (CBH), cloud-top-height (CTH), and trade wind inversion base height (TWIBH) for the Hawaiian region (18°N-22.5°N, 153.7°W-160.7°W). The new climatology is based on data from the Cloud-Aerosol Lidar and Infrared Pathfinder Satellite (CALIPSO), the Constellation Observing System for Meteorology Ionosphere and Climate (COSMIC), ceilometer observations and radiosondes. The climatology reported here is well suited to evaluate climate model simulations and can serve as a reference state for studies of the impact of climate change on Hawaiian ecosystems. The averaged CBH from CALIPSO in the Hawaiian Region is 890 m. The mean CTH from CALIPSO is 2110 m, which is close to the mean TWIBH from COSMIC. For non-precipitating cases, the mean TWIBH at both Lihue and Hilo is close to 2000 m. For precipitating cases, the mean TWIBH is 2450 m and 2280 m at Hilo and Lihue, respectively. The potential cloud thickness (PCT) is defined as the difference between TWIBH and CBH and the mean PCT is several hundred meters thicker for precipitating than for the non-precipitating cases at both stations. We find that the PCT is more strongly correlated to the TWIBH than the CBH and that precipitation is unlikely to occur if the TWIBH is below 1500 m. The observed rainfall intensity is correlated to the PCT, i.e., thicker clouds are more likely to produce heavy rain.
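
    The potential cloud thickness defined above is a simple difference; a minimal sketch with hypothetical heights (not the CALIPSO/COSMIC climatology values):

```python
# Hypothetical inversion-base and cloud-base heights in metres.
twibh = [2450.0, 2280.0, 2000.0, 1400.0]   # trade wind inversion base height
cbh = [890.0, 900.0, 850.0, 800.0]         # cloud base height

# PCT is defined in the abstract as TWIBH minus CBH.
pct = [t - b for t, b in zip(twibh, cbh)]

# The abstract notes precipitation is unlikely when TWIBH is below 1500 m.
likely_precip = [t >= 1500.0 for t in twibh]
```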

  9. Recent improvements of reactor physics codes in MHI

    NASA Astrophysics Data System (ADS)

    Kosaka, Shinya; Yamaji, Kazuya; Kirimura, Kazuki; Kamiyama, Yohei; Matsumoto, Hideki

    2015-12-01

    This paper introduces recent improvements to reactor physics codes at Mitsubishi Heavy Industries, Ltd. (MHI). MHI has developed a new neutronics design code system, Galaxy/Cosmo-S (GCS), for PWR core analysis. After TEPCO's Fukushima Daiichi accident, it became necessary to consider design extension conditions that had not been covered explicitly by the former safety licensing analyses. Under these circumstances, MHI made several improvements to the GCS code system. A new resonance calculation model for the lattice physics code and a homogeneous cross section representative model for the core simulator have been developed to cover a wider range of core conditions corresponding to severe accident states, such as anticipated transient without scram (ATWS) analysis and criticality evaluation of a dried-up spent fuel pit. As a result of these improvements, the GCS code system is applicable, with good accuracy, to a very wide range of core conditions as long as the fuel is not damaged. In this paper, the outline of the GCS code system is described briefly and recent relevant development activities are presented.

  10. Recent improvements of reactor physics codes in MHI

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kosaka, Shinya, E-mail: shinya-kosaka@mhi.co.jp; Yamaji, Kazuya; Kirimura, Kazuki

    2015-12-31

    This paper introduces recent improvements to reactor physics codes at Mitsubishi Heavy Industries, Ltd. (MHI). MHI has developed a new neutronics design code system, Galaxy/Cosmo-S (GCS), for PWR core analysis. After TEPCO’s Fukushima Daiichi accident, it became necessary to consider design extension conditions that had not been covered explicitly by the former safety licensing analyses. Under these circumstances, MHI made several improvements to the GCS code system. A new resonance calculation model for the lattice physics code and a homogeneous cross section representative model for the core simulator have been developed to cover a wider range of core conditions corresponding to severe accident states, such as anticipated transient without scram (ATWS) analysis and criticality evaluation of a dried-up spent fuel pit. As a result of these improvements, the GCS code system is applicable, with good accuracy, to a very wide range of core conditions as long as the fuel is not damaged. In this paper, the outline of the GCS code system is described briefly and recent relevant development activities are presented.

  11. SHARP User Manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu, Y. Q.; Shemon, E. R.; Thomas, J. W.

    SHARP is an advanced modeling and simulation toolkit for the analysis of nuclear reactors. It comprises several components, including physical modeling tools, tools to integrate the physics codes for multi-physics analyses, and a set of tools to couple the codes within the MOAB framework. Physics modules currently include the neutronics code PROTEUS, the thermal-hydraulics code Nek5000, and the structural mechanics code Diablo. This manual focuses on performing multi-physics calculations with the SHARP ToolKit. Manuals for the three individual physics modules are available with the SHARP distribution to help the user either carry out the primary multi-physics calculation with basic knowledge or perform further advanced development with in-depth knowledge of these codes. This manual provides step-by-step instructions on employing SHARP, including how to download and install the code, how to build the drivers for a test case, how to perform a calculation, and how to visualize the results. Since SHARP has some specific library and environment dependencies, it is highly recommended that the user read this manual prior to installing SHARP. Verification test cases are included to check proper installation of each module. It is suggested that the new user first follow the step-by-step instructions provided for a test problem in this manual to understand the basic procedure of using SHARP before using SHARP for his/her own analysis. Both reference output and scripts are provided along with the test cases in order to verify correct installation and execution of the SHARP package. At the end of this manual, detailed instructions are provided on how to create a new test case so that the user can perform novel multi-physics calculations with SHARP. Frequently asked questions are listed at the end of this manual to help the user troubleshoot issues.

  12. Physical Activity and Influenza-Coded Outpatient Visits, a Population-Based Cohort Study

    PubMed Central

    Siu, Eric; Campitelli, Michael A.; Kwong, Jeffrey C.

    2012-01-01

    Background Although the benefits of physical activity in preventing chronic medical conditions are well established, its impacts on infectious diseases, and seasonal influenza in particular, are less clearly defined. We examined the association between physical activity and influenza-coded outpatient visits, as a proxy for influenza infection. Methodology/Principal Findings We conducted a cohort study of Ontario respondents to Statistics Canada’s population health surveys over 12 influenza seasons. We assessed physical activity levels through survey responses, and influenza-coded physician office and emergency department visits through physician billing claims. We used logistic regression to estimate the risk of influenza-coded outpatient visits during influenza seasons. The cohort comprised 114,364 survey respondents who contributed 357,466 person-influenza seasons of observation. Compared to inactive individuals, moderately active (OR 0.83; 95% CI 0.74–0.94) and active (OR 0.87; 95% CI 0.77–0.98) individuals were less likely to experience an influenza-coded visit. Stratifying by age, the protective effect of physical activity remained significant for individuals <65 years (active OR 0.86; 95% CI 0.75–0.98, moderately active: OR 0.85; 95% CI 0.74–0.97) but not for individuals ≥65 years. The main limitations of this study were the use of influenza-coded outpatient visits rather than laboratory-confirmed influenza as the outcome measure, the reliance on self-report for assessing physical activity and various covariates, and the observational study design. Conclusion/Significance Moderate to high amounts of physical activity may be associated with reduced risk of influenza for individuals <65 years. Future research should use laboratory-confirmed influenza outcomes to confirm the association between physical activity and influenza. PMID:22737242
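
    For readers unfamiliar with how the quoted odds ratios and confidence intervals arise from a logistic regression, a minimal sketch (the coefficient and standard error below are invented for illustration, not the study's estimates): for a fitted coefficient beta with standard error se, OR = exp(beta) and the 95 % CI is exp(beta ± 1.96·se).

```python
import math

# Invented coefficient and standard error for illustration only.
beta, se = -0.186, 0.061

or_point = math.exp(beta)             # odds ratio
ci_low = math.exp(beta - 1.96 * se)   # lower 95% bound
ci_high = math.exp(beta + 1.96 * se)  # upper 95% bound
```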

  13. Status of LANL Efforts to Effectively Use Sequoia

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nystrom, William David

    2015-05-14

    Los Alamos National Laboratory (LANL) is currently working on three new production applications: VPIC, xRage, and Pagosa. VPIC was designed to be a 3D relativistic, electromagnetic particle-in-cell code for plasma simulation. xRage is a 3D AMR-mesh, multi-physics hydro code. Pagosa is a 3D structured-mesh, multi-physics hydro code.

  14. Braiding by Majorana tracking and long-range CNOT gates with color codes

    NASA Astrophysics Data System (ADS)

    Litinski, Daniel; von Oppen, Felix

    2017-11-01

    Color-code quantum computation seamlessly combines Majorana-based hardware with topological error correction. Specifically, as Clifford gates are transversal in two-dimensional color codes, they enable the use of the Majoranas' non-Abelian statistics for gate operations at the code level. Here, we discuss the implementation of color codes in arrays of Majorana nanowires that avoid branched networks such as T junctions, thereby simplifying their realization. We show that, in such implementations, non-Abelian statistics can be exploited without ever performing physical braiding operations. Physical braiding operations are replaced by Majorana tracking, an entirely software-based protocol which appropriately updates the Majoranas involved in the color-code stabilizer measurements. This approach minimizes the required hardware operations for single-qubit Clifford gates. For Clifford completeness, we combine color codes with surface codes, and use color-to-surface-code lattice surgery for long-range multitarget CNOT gates which have a time overhead that grows only logarithmically with the physical distance separating control and target qubits. With the addition of magic state distillation, our architecture describes a fault-tolerant universal quantum computer in systems such as networks of tetrons, hexons, or Majorana box qubits, but can also be applied to nontopological qubit platforms.

  15. Symptomatic arrhythmias due to syringomyelia-induced severe autonomic dysfunction.

    PubMed

    Riedlbauchová, Lucie; Nedělka, Tomáš; Schlenker, Jakub

    2014-10-01

    Syringomyelia is characterized by cavity formation in the spinal cord, most often at the C2-Th9 level. The clinical manifestation reflects the extent and localization of the spinal cord injury. A 20-year-old woman was admitted for recurrent rest-related presyncopes of sudden onset. Paroxysms of sinus bradycardia with SA and AV blocks were repeatedly documented during symptoms. The echocardiographic findings were normal, and a (para)infectious etiology was not proven. The character of the ECG findings raised suspicion of a neurogenic cause. Autonomic nervous system testing demonstrated abnormalities reflecting predominant sympathetic dysfunction. The suspicion of incipient myelopathy was subsequently confirmed by MRI, which discovered syringomyelia at the Th5 level as the only pathology. A 52-year-old man with hypotrophic quadruparesis resulting from perinatal brain injury was referred for 2 years of symptoms (sudden palpitations, sweating, muscle tightness, shaking) with progressive worsening. The symptoms occurred in association with sudden increases of sinus rhythm rate and blood pressure that were provoked by minimal physical activity. Significant autonomic dysregulation with baroreflex hyperreactivity in the orthostatic test and symptomatic postural orthostatic tachycardia with verticalization-associated hypertension were demonstrated. MRI revealed syringomyelia at the C7 and Th7 levels affecting the sympathetic centers at these levels. Dysfunction of sympathetic fibers at the C-Th spinal level may cause significant autonomic dysfunction with arrhythmic manifestations.

  16. Towards measuring the semantic capacity of a physical medium demonstrated with elementary cellular automata.

    PubMed

    Dittrich, Peter

    2018-02-01

    The organic code concept and its operationalization by molecular codes have been introduced to study the semiotic nature of living systems. This contribution develops further the idea that the semantic capacity of a physical medium can be measured by assessing its ability to implement a code as a contingent mapping. For demonstration and evaluation, the approach is applied to a formal medium: elementary cellular automata (ECA). The semantic capacity is measured by counting the number of ways codes can be implemented. Additionally, a link to information theory is established by taking multivariate mutual information to quantify contingency. It is shown how ECAs differ in their semantic capacities, how this is related to various ECA classifications, and how this depends on how a meaning is defined. Interestingly, if the meaning should persist for a certain while, the highest semantic capacity is found in CAs with apparently simple behavior, i.e., the fixed-point and two-cycle class. Synergy as a predictor for a CA's ability to implement codes can only be used if contexts implementing codes are common. For large context spaces with sparse coding contexts, synergy is a weak predictor. In conclusion, the approach presented here can distinguish CA-like systems with respect to their ability to implement contingent mappings. Applying it to physical systems appears straightforward and might lead to a novel physical property indicating how suitable a physical medium is for implementing a semiotic system. Copyright © 2017 Elsevier B.V. All rights reserved.
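
    Elementary cellular automata, the formal medium studied here, are easy to sketch; the following minimal implementation uses the standard Wolfram rule numbering (a generic illustration, not the paper's code):

```python
def eca_step(cells, rule):
    """One synchronous update of a cyclic ECA under Wolfram rule 0-255."""
    n = len(cells)
    table = [(rule >> i) & 1 for i in range(8)]  # output for neighborhood i
    return [table[(cells[(i - 1) % n] << 2)
                  | (cells[i] << 1)
                  | cells[(i + 1) % n]] for i in range(n)]

state = [0, 0, 0, 1, 0, 0, 0]   # single seeded cell
state = eca_step(state, 110)    # one step of rule 110
```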

  17. Porting plasma physics simulation codes to modern computing architectures using the libmrc framework

    NASA Astrophysics Data System (ADS)

    Germaschewski, Kai; Abbott, Stephen

    2015-11-01

    Available computing power has continued to grow exponentially even after single-core performance saturated in the last decade. The increase has since been driven by more parallelism, both from more cores and from more parallelism within each core, e.g. in GPUs and the Intel Xeon Phi. Adapting existing plasma physics codes is challenging, in particular as there is no single programming model that covers current and future architectures. We will introduce the open-source libmrc framework, which has been used to modularize and port three plasma physics codes: the extended MHD code MRCv3 with implicit time integration and curvilinear grids; the OpenGGCM global magnetosphere model; and the particle-in-cell code PSC. libmrc consolidates basic functionality needed for simulations based on structured grids (I/O, load balancing, time integrators), and also introduces a parallel object model that makes it possible to maintain multiple implementations of computational kernels, e.g. on conventional processors and GPUs. It handles data layout conversions and enables us to port performance-critical parts of a code to a new architecture step by step, while the rest of the code can remain unchanged. We will show examples of the performance gains and some physics applications.
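
    The "multiple implementations per kernel" idea attributed here to libmrc's parallel object model can be sketched generically; this is not the libmrc API, just an illustration of the dispatch pattern (all names are invented):

```python
class Kernel:
    """Registry holding alternative backend implementations of one kernel."""
    _impls = {}

    @classmethod
    def register(cls, backend):
        def deco(fn):
            cls._impls[backend] = fn
            return fn
        return deco

    @classmethod
    def run(cls, backend, *args):
        return cls._impls[backend](*args)

@Kernel.register("cpu")
def axpy_cpu(a, x, y):
    # Plain-Python reference kernel; a "gpu" variant would register under
    # another key, and the calling code would not change.
    return [a * xi + yi for xi, yi in zip(x, y)]

result = Kernel.run("cpu", 2.0, [1.0, 2.0], [10.0, 20.0])
```

Keeping both implementations registered side by side is what allows a code to be ported kernel by kernel while the remainder keeps running on the old backend.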

  18. VERAIn

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Simunovic, Srdjan

    2015-02-16

    CASL's modeling and simulation technology, the Virtual Environment for Reactor Applications (VERA), incorporates coupled physics and science-based models, state-of-the-art numerical methods, modern computational science, integrated uncertainty quantification (UQ), and validation against data from operating pressurized water reactors (PWRs), single-effect experiments, and integral tests. The computational simulation component of VERA is the VERA Core Simulator (VERA-CS). The core simulator is the specific collection of multi-physics computer codes used to model and deplete an LWR core over multiple cycles. The core simulator has a single common input file that drives all of the different physics codes. The parser code, VERAIn, converts VERA input into an XML file that is used as input to the different VERA codes.

  19. Research Prototype: Automated Analysis of Scientific and Engineering Semantics

    NASA Technical Reports Server (NTRS)

    Stewart, Mark E. M.; Follen, Greg (Technical Monitor)

    2001-01-01

    Physical and mathematical formulae and concepts are fundamental elements of scientific and engineering software. These classical equations and methods are time tested, universally accepted, and relatively unambiguous. The existence of this classical ontology suggests an ideal problem for automated comprehension. The problem is further motivated by the pervasive use of scientific code and high code development costs. To investigate code comprehension in this classical knowledge domain, a research prototype has been developed. The prototype incorporates scientific domain knowledge to recognize code properties (including units and physical and mathematical quantities). The procedure also implements programming language semantics to propagate these properties through the code. This prototype's ability to elucidate code and detect errors will be demonstrated with state-of-the-art scientific codes.
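
    The property-propagation idea can be illustrated with a toy unit tracker (the class and behavior below are invented for illustration; they are not the prototype's internals): units ride along with values through arithmetic, and inconsistent additions are flagged.

```python
class Quantity:
    """Value with a unit tuple; addition checks units, multiplication
    concatenates them (a deliberately tiny model)."""
    def __init__(self, value, unit):
        self.value, self.unit = value, unit

    def __add__(self, other):
        if self.unit != other.unit:
            raise TypeError(f"unit mismatch: {self.unit} + {other.unit}")
        return Quantity(self.value + other.value, self.unit)

    def __mul__(self, other):
        return Quantity(self.value * other.value,
                        tuple(sorted(self.unit + other.unit)))

length = Quantity(3.0, ("m",))
time = Quantity(2.0, ("s",))
area = length * length              # unit propagates to ("m", "m")

try:
    length + time                   # flagged, as a units checker would
    unit_error_caught = False
except TypeError:
    unit_error_caught = True
```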

  20. PelePhysics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2017-05-17

    PelePhysics is a suite of physics packages that provides functionality of use to reacting hydrodynamics CFD codes. The initial release includes an interface to reaction rate mechanism evaluation, transport coefficient evaluation, and a generalized equation of state (EOS) facility. Both generic evaluators and interfaces to code from externally available tools (Fuego for chemical rates, EGLib for transport coefficients) are provided.
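
    A generalized EOS facility of the kind described can be sketched as a common interface with interchangeable backends; the sketch below is in the spirit of, not copied from, PelePhysics (only the ideal-gas law is real physics here; all names are invented):

```python
from abc import ABC, abstractmethod

R = 8.314  # J/(mol K), universal gas constant

class EOS(ABC):
    """Common interface; concrete equations of state plug in behind it."""
    @abstractmethod
    def pressure(self, rho_molar, T):
        """Pressure [Pa] from molar density [mol/m^3] and temperature [K]."""

class IdealGas(EOS):
    def pressure(self, rho_molar, T):
        return rho_molar * R * T

# A CFD code talks only to the EOS interface, so backends can be swapped.
eos = IdealGas()
p = eos.pressure(40.0, 300.0)   # roughly atmospheric pressure
```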

  1. Humboldt Bay Wetlands Review and Baylands Analysis. Volume II. Base Information.

    DTIC Science & Technology

    1980-08-01

    [Garbled scan of a map plate ("Bottom Sediments") from the Wetlands Review and Baylands Analysis.] ... following legislation: Ch. 438, Stat. 1915; Ch. 187, Stat. of 1927; Ch. 225, Stat. of 1945; and Ch. 1086, Stat. of 1970. These grants were much more ... 1086, Stat. of 1970 (granting tidelands to Eureka) as follows: The Humboldt Bay Fund, with appropriations from state oil and gas revenues and from

  2. Encoded physics knowledge in checking codes for nuclear cross section libraries at Los Alamos

    NASA Astrophysics Data System (ADS)

    Parsons, D. Kent

    2017-09-01

    Checking procedures for processed nuclear data at Los Alamos are described. Both continuous-energy and multi-group nuclear data are verified by locally developed checking codes that use basic physics knowledge and common-sense rules. A list of nuclear data problems that have been identified with the help of these checking codes is also given.
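
    Common-sense checking rules of the kind described can be sketched as follows (the specific rules and data below are illustrative, not the actual Los Alamos checks): cross sections must be non-negative, the energy grid must ascend, and partial reaction channels should sum to the total.

```python
def check_xs(energies, total, partials, tol=1e-6):
    """Return a list of problems found in a toy cross-section record."""
    problems = []
    if any(s < 0.0 for s in total):
        problems.append("negative total cross section")
    if sorted(energies) != list(energies):
        problems.append("energy grid not ascending")
    for i, e in enumerate(energies):
        if abs(sum(p[i] for p in partials) - total[i]) > tol:
            problems.append(f"partials do not sum to total at E={e}")
    return problems

# A consistent toy record: two partial channels summing to the total.
energies = [1.0, 2.0, 3.0]
total = [10.0, 8.0, 6.0]
partials = [[7.0, 5.0, 4.0], [3.0, 3.0, 2.0]]
issues = check_xs(energies, total, partials)
```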

  3. Estimating vertical profiles of water-cloud droplet effective radius from SWIR satellite measurements via a statistical model derived from CloudSat observations

    NASA Astrophysics Data System (ADS)

    Nagao, T. M.; Murakami, H.; Nakajima, T. Y.

    2017-12-01

    This study proposes an algorithm to estimate vertical profiles of cloud droplet effective radius (CDER-VP) for water clouds from shortwave infrared (SWIR) measurements of Himawari-8/AHI via a statistical model of CDER-VP derived from CloudSat observations. Several similar algorithms in previous studies utilize spectral radiance matching on the assumption of simultaneous observations by CloudSat and Aqua/MODIS. Our algorithm, however, does not assume simultaneous observations with CloudSat. First, a database (DB) of CDER-VP is prepared in advance by the following procedure: TOA radiances at the 0.65, 2.3 and 10.4-μm bands of the AHI are simulated using the CDER-VP and cloud optical depth vertical profile (COD-VP) contained in the CloudSat 2B-CWC-RVOD and 2B-TAU products. Cloud optical thickness (COT), column CDER and cloud-top height (CTH) are retrieved from the simulated radiances using a traditional retrieval algorithm with a vertically homogeneous cloud model (the 1-SWIR VHC method). The CDER-VP is added to the DB using the COT and column-CDER retrievals as the DB key. Then, using principal component (PC) analysis, up to three PC vectors of the CDER-VPs in the DB are extracted. Next, the algorithm retrieves CDER-VP from actual AHI measurements by the following procedure: first, COT, column CDER and CTH are retrieved from TOA radiances at the 0.65, 2.3 and 10.4-μm bands of the AHI using the 1-SWIR VHC method. Then, the PC vectors of CDER-VP are fetched from the DB using the COT and column-CDER retrievals as the DB key. Finally, using the coefficients of the PC vectors of CDER-VP as retrieval variables, CDER-VP, COT and CTH are retrieved from TOA radiances at the 0.65, 1.6, 2.3, 3.9 and 10.4-μm bands of the AHI with an optimal estimation method and iterative radiative transfer calculation. Simulation results showed that the CDER-VP retrieval errors were mostly smaller than 3-4 μm. The CDER retrieval errors at the cloud base were generally larger than the others (e.g., CDER at cloud top), especially when COT and CDER were large. This tendency can be explained by the lower sensitivity of the SWIR bands to CDER at cloud base. Additionally, as a case study, this study will attempt to apply the algorithm to the AHI's high-frequency observations and to interpret the time series of the CDER-VP retrievals in terms of the temporal evolution of water clouds.
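
    The principal-component compression of effective-radius profiles described above can be sketched on synthetic profiles (not CloudSat data); for this tiny example the leading PC direction can be found by power iteration on the scatter matrix.

```python
import math

# Three synthetic effective-radius profiles (μm), cloud base -> cloud top.
profiles = [[4.0, 6.0, 8.0], [5.0, 7.0, 9.0], [3.0, 5.0, 7.0]]
n, m = len(profiles), len(profiles[0])

mean = [sum(p[k] for p in profiles) / n for k in range(m)]
anoms = [[p[k] - mean[k] for k in range(m)] for p in profiles]

# Scatter matrix of the anomalies, then power iteration for the leading PC.
cov = [[sum(a[i] * a[j] for a in anoms) for j in range(m)] for i in range(m)]
v = [1.0, 0.0, 0.0]
for _ in range(50):
    w = [sum(cov[i][j] * v[j] for j in range(m)) for i in range(m)]
    norm = math.sqrt(sum(x * x for x in w))
    v = [x / norm for x in w]
```

Each profile is then represented by the mean profile plus a small number of PC coefficients, which is what makes the coefficients usable as retrieval variables.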

  4. Oblique impacts into low impedance layers

    NASA Astrophysics Data System (ADS)

    Stickle, A. M.; Schultz, P. H.

    2009-12-01

    Planetary impacts occur indiscriminately, in all locations and materials. Varied geologic settings can have significant effects on the impact process, including the coupling between the projectile and target, the final damage patterns, and the modes of deformation that occur. For example, marine impact craters are not identical to impacts directly into bedrock or into sedimentary materials, though many of the same fundamental processes occur. It is therefore important, especially when considering terrestrial impacts, to understand how a low impedance sedimentary layer over bedrock affects the deformation process during and after a hypervelocity impact. As a first step, detailed comparisons between impact experiments and hydrocode models were performed. Oblique impacts into polymethylmethacrylate (PMMA) targets with low impedance layers were carried out at the NASA Ames Vertical Gun Range and compared to experiments on targets without low impedance layers, as well as to hydrocode models under identical conditions. Impact velocities ranged from 5 km/s to 5.6 km/s, with trajectories from 30 degrees to 90 degrees above the horizontal. High-speed imaging documented the sequence and location of impact-induced failure, which was compared to theoretical models. Plasticine and ice were used to construct the low impedance layers. The combination of experiments and models reveals the modes of failure due to a hypervelocity impact. How such failure is manifested at large scales can present a challenge for hydrocodes. CTH models tend to overestimate the amount of damage occurring within the targets and have difficulty perfectly reproducing morphologies; nevertheless, they provide significant and useful information about the failure modes and style within the material. CTH models corresponding to the experiments allow interpretation of the underlying processes involved as well as provide a benchmark for the experimental analysis. The transparency of PMMA allows a clear view of failure patterns within the target, providing a 3D picture of the final damage as well as of damage formation and propagation. In addition, PMMA has mechanical properties similar to those of brittle rocks in the upper crust, making it an appropriate material for comparison to geologic materials. An impact into a PMMA target with a plasticine layer one projectile diameter thick causes damage distinct from an impact into a PMMA target without a low impedance layer. The extent of the final damage is much less in the target with the low impedance layer and begins to form at later times, there is little to no crater visible on the surface, and the formation and propagation of the damage is completely different, creating distinct subsurface damage patterns. Three-dimensional CTH hydrocode models show that the pressure history of material around and underneath the impact point is also different when a low impedance layer is present, leading to the variations in damage forming within the targets.

  5. nIFTY galaxy cluster simulations - III. The similarity and diversity of galaxies and subhaloes

    NASA Astrophysics Data System (ADS)

    Elahi, Pascal J.; Knebe, Alexander; Pearce, Frazer R.; Power, Chris; Yepes, Gustavo; Cui, Weiguang; Cunnama, Daniel; Kay, Scott T.; Sembolini, Federico; Beck, Alexander M.; Davé, Romeel; February, Sean; Huang, Shuiyao; Katz, Neal; McCarthy, Ian G.; Murante, Giuseppe; Perret, Valentin; Puchwein, Ewald; Saro, Alexandro; Teyssier, Romain

    2016-05-01

    We examine subhaloes and galaxies residing in a simulated Λ cold dark matter galaxy cluster (M^crit_200 = 1.1 × 10^15 h^-1 M⊙) produced by hydrodynamical codes ranging from classic smooth particle hydrodynamics (SPH) to newer SPH codes and adaptive and moving mesh codes. These codes use subgrid models to capture galaxy formation physics. We compare how well these codes reproduce the same subhaloes/galaxies in gravity-only, non-radiative hydrodynamics and full feedback physics runs by looking at the overall subhalo/galaxy distribution and on an individual object basis. We find that the subhalo population is reproduced to within ≲10 per cent for both dark matter only and non-radiative runs, with individual objects showing code-to-code scatter of ≲0.1 dex, although the gas in non-radiative simulations shows significant scatter. Including feedback physics significantly increases the diversity. Subhalo mass and Vmax distributions vary by ≈20 per cent. The galaxy populations also show striking code-to-code variations. Although the Tully-Fisher relation is similar in almost all codes, the number of galaxies with 10^9 h^-1 M⊙ ≲ M* ≲ 10^12 h^-1 M⊙ can differ by a factor of 4. Individual galaxies show code-to-code scatter of ~0.5 dex in stellar mass. Moreover, systematic differences exist, with some codes producing galaxies 70 per cent smaller than others. The diversity partially arises from the inclusion/absence of active galactic nucleus feedback. Our results combined with our companion papers demonstrate that subgrid physics is not just subject to fine-tuning, but the complexity of building galaxies in all environments remains a challenge. We argue that even basic galaxy properties, such as stellar mass to halo mass, should be treated with error bars of ~0.2-0.4 dex.
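
    Scatter quoted "in dex" is scatter in log10; a quick sketch with hypothetical stellar masses from three codes (not values from the paper):

```python
import math

masses = [1.0e10, 3.2e10, 1.0e11]    # hypothetical M* from three codes, Msun
logs = [math.log10(mass) for mass in masses]
spread_dex = max(logs) - min(logs)   # full code-to-code spread in dex
factor = 10 ** spread_dex            # the same spread as a multiplicative factor
```

A factor of 4 between codes, as quoted above, corresponds to log10(4) ≈ 0.6 dex.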

  6. The small stellated dodecahedron code and friends.

    PubMed

    Conrad, J; Chamberland, C; Breuckmann, N P; Terhal, B M

    2018-07-13

    We explore a distance-3 homological CSS quantum code, namely the small stellated dodecahedron code, for dense storage of quantum information, and we compare its performance with the distance-3 surface code. The data and ancilla qubits of the small stellated dodecahedron code can be located on the edges and vertices, respectively, of a small stellated dodecahedron, making this code suitable for three-dimensional connectivity. This code encodes eight logical qubits into 30 physical qubits (plus 22 ancilla qubits for parity check measurements), in contrast with one logical qubit into nine physical qubits (plus eight ancilla qubits) for the surface code. We develop fault-tolerant parity check circuits and a decoder for this code, allowing us to numerically assess the circuit-based pseudo-threshold. This article is part of a discussion meeting issue 'Foundations of quantum mechanics and their impact on contemporary society'. © 2018 The Authors.
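
    The comparison above can be summarized by the encoding rate k/n of the two [[n, k, d]] codes, using the data-qubit counts given in the abstract (ancillas counted separately):

```python
# [[n, k, d]]: n physical qubits encode k logical qubits at distance d.
codes = {
    "small stellated dodecahedron": (30, 8, 3),
    "surface": (9, 1, 3),
}
rate = {name: k / n for name, (n, k, d) in codes.items()}
```

At equal distance, the dodecahedron code stores roughly 2.4 times more logical qubits per physical qubit, which is the "dense storage" claim.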

  7. Development of Spectral and Atomic Models for Diagnosing Energetic Particle Characteristics in Fast Ignition Experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    MacFarlane, Joseph J.; Golovkin, I. E.; Woodruff, P. R.

    2009-08-07

    This Final Report summarizes work performed under DOE STTR Phase II Grant No. DE-FG02-05ER86258 during the project period from August 2006 to August 2009. The project, “Development of Spectral and Atomic Models for Diagnosing Energetic Particle Characteristics in Fast Ignition Experiments,” was led by Prism Computational Sciences (Madison, WI), and involved collaboration with subcontractors University of Nevada-Reno and Voss Scientific (Albuquerque, NM). In this project, we have: Developed and implemented a multi-dimensional, multi-frequency radiation transport model in the LSP hybrid fluid-PIC (particle-in-cell) code [1,2]. Updated the LSP code to support the use of accurate equation-of-state (EOS) tables generated by Prism’s PROPACEOS [3] code to compute more accurate temperatures in high energy density physics (HEDP) plasmas. Updated LSP to support the use of Prism’s multi-frequency opacity tables. Generated equation of state and opacity data for LSP simulations for several materials being used in plasma jet experimental studies. Developed and implemented parallel processing techniques for the radiation physics algorithms in LSP. Benchmarked the new radiation transport and radiation physics algorithms in LSP and compared simulation results with analytic solutions and results from numerical radiation-hydrodynamics calculations. Performed simulations using Prism radiation physics codes to address issues related to radiative cooling and ionization dynamics in plasma jet experiments. Performed simulations to study the effects of radiation transport and radiation losses due to electrode contaminants in plasma jet experiments. Updated the LSP code to generate output using NetCDF to provide a better, more flexible interface to SPECT3D [4] in order to post-process LSP output. Updated the SPECT3D code to better support the post-processing of large-scale 2-D and 3-D datasets generated by simulation codes such as LSP.
Updated atomic physics modeling to provide for more comprehensive and accurate atomic databases that feed into the radiation physics modeling (spectral simulations and opacity tables). Developed polarization spectroscopy modeling techniques suitable for diagnosing energetic particle characteristics in HEDP experiments. A description of these items is provided in this report. The above efforts lay the groundwork for utilizing the LSP and SPECT3D codes in providing simulation support for DOE-sponsored HEDP experiments, such as plasma jet and fast ignition physics experiments. We believe that taken together, the LSP and SPECT3D codes have unique capabilities for advancing our understanding of the physics of these HEDP plasmas. Based on conversations early in this project with our DOE program manager, Dr. Francis Thio, our efforts emphasized developing radiation physics and atomic modeling capabilities that can be utilized in the LSP PIC code, and performing radiation physics studies for plasma jets. A relatively minor component focused on the development of methods to diagnose energetic particle characteristics in short-pulse laser experiments related to fast ignition physics. The period of performance for the grant was extended by one year to August 2009 with a one-year no-cost extension, at the request of subcontractor University of Nevada-Reno.

  8. Implicit time-integration method for simultaneous solution of a coupled non-linear system

    NASA Astrophysics Data System (ADS)

    Watson, Justin Kyle

    Historically, large physical problems have been divided into smaller problems based on the physics involved. Reactor safety analysis is no different. The problem of analyzing a nuclear reactor for design basis accidents is handled by a handful of computer codes, each solving a portion of the problem. The reactor thermal-hydraulic response to an event is determined using a system code like the TRAC/RELAP Advanced Computational Engine (TRACE). The core power response to the same accident scenario is determined using a core physics code like the Purdue Advanced Core Simulator (PARCS). Containment response to the reactor depressurization in a Loss Of Coolant Accident (LOCA) type event is calculated by a separate code. Sub-channel analysis is performed with yet another computer code. This is just a sample of the computer codes used to analyze nuclear reactor design basis accidents. Traditionally, each of these codes operates independently of the others, using only the global results of one calculation as boundary conditions for another. Industry's drive to uprate reactor power has motivated analysts to move from a conservative approach to design basis accidents towards best estimate methods. To achieve a best estimate calculation, efforts have been aimed at coupling the individual physics models to improve the accuracy of the analysis and reduce margins. The current coupling techniques are sequential in nature: during a calculation time-step, data are passed between the two codes, and the individual codes solve their portion of the calculation and converge to a solution before the calculation is allowed to proceed to the next time-step. This thesis presents a fully implicit method of simultaneously solving the neutron balance equations, heat conduction equations and the constitutive fluid dynamics equations.
It discusses the problems involved in coupling different physics phenomena within multi-physics codes and presents a solution to these problems. The thesis also outlines the basic concepts behind the nodal balance equations, heat transfer equations and the thermal-hydraulic equations, which are coupled to form a fully implicit nonlinear system of equations. The coupling of separate physics models to solve a larger problem and improve the accuracy and efficiency of a calculation is not a new idea; however, implementing them in an implicit manner and solving the system simultaneously is, and the application to reactor safety codes is new, having not been done with thermal-hydraulics and neutronics codes on realistic applications in the past. The coupling technique described in this thesis is applicable to other similar coupled thermal-hydraulic and core physics reactor safety codes. This technique is demonstrated using coupled input decks to show that the system is solved correctly, and is then verified using two derivative test problems based on international benchmarks: the OECD/NRC Three Mile Island (TMI) Main Steam Line Break (MSLB) problem (representative of pressurized water reactor analysis) and the OECD/NRC Peach Bottom (PB) Turbine Trip (TT) benchmark (representative of boiling water reactor analysis).
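
The contrast between sequential (operator-split) coupling and the fully implicit approach can be sketched on a toy problem: a made-up "neutronics" residual with linear temperature feedback and a made-up "thermal-hydraulics" heat balance, solved simultaneously with Newton's method. All coefficients are invented for illustration and have nothing to do with TRACE or PARCS.

```python
# Fully implicit simultaneous solve of two coupled residuals (toy model).

def residuals(p, t):
    # f1: "neutronics" power with a linear negative temperature feedback
    f1 = p - 100.0 * (1.0 - 0.01 * (t - 300.0))
    # f2: "thermal-hydraulics" steady-state heat balance
    f2 = t - 300.0 - 0.5 * p
    return f1, f2

def jacobian(p, t):
    # Analytic 2x2 Jacobian of (f1, f2) w.r.t. (p, t); constant here
    # because this toy model happens to be linear.
    return [[1.0, 1.0], [-0.5, 1.0]]

p, t = 50.0, 320.0                      # initial guess
for _ in range(20):
    f1, f2 = residuals(p, t)
    if abs(f1) + abs(f2) < 1e-10:
        break
    (a, b), (c, d) = jacobian(p, t)
    det = a * d - b * c
    dp = (-f1 * d + f2 * b) / det       # Cramer's rule for J @ [dp, dt] = -f
    dt = (-a * f2 + c * f1) / det
    p, t = p + dp, t + dt

print(f"P = {p:.3f}, T = {t:.3f}")
```

Because both residuals are driven to zero in one Newton update, power and temperature remain consistent at every step, instead of lagging one another as in sequential data exchange.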

  9. Coherent errors in quantum error correction

    NASA Astrophysics Data System (ADS)

    Greenbaum, Daniel; Dutton, Zachary

    Analysis of quantum error correcting (QEC) codes is typically done using a stochastic, Pauli channel error model for describing the noise on physical qubits. However, it was recently found that coherent errors (systematic rotations) on physical data qubits result in both physical and logical error rates that differ significantly from those predicted by a Pauli model. We present analytic results for the logical error as a function of concatenation level and code distance for coherent errors under the repetition code. For data-only coherent errors, we find that the logical error is partially coherent and therefore non-Pauli. However, the coherent part of the error is negligible after two or more concatenation levels or at fewer than ɛ^{-(d-1)} error correction cycles. Here ɛ << 1 is the rotation angle error per cycle for a single physical qubit and d is the code distance. These results support the validity of modeling coherent errors using a Pauli channel under some minimum requirements for code distance and/or concatenation. We discuss extensions to imperfect syndrome extraction and implications for general QEC.
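
The distinction driving these results is that coherent errors add in amplitude while stochastic errors add in probability. A toy numeric illustration of that distinction for a single unencoded qubit (our own example, not the paper's calculation):

```python
import math

# Compare accumulation of a small coherent rotation error eps per cycle
# against an incoherent (stochastic Pauli) error of the same per-cycle
# strength. The numbers are illustrative only.
eps = 0.01  # rotation angle error per cycle, eps << 1

for n in (1, 10, 100):
    coherent = math.sin(n * eps) ** 2      # amplitudes add: angle grows as n*eps
    stochastic = n * math.sin(eps) ** 2    # probabilities add: grows as n*eps^2
    print(f"n={n:4d}  coherent={coherent:.2e}  stochastic={stochastic:.2e}")
```

After 100 cycles the coherent error probability is roughly 70 times the stochastic one here, which is why a Pauli-channel approximation needs the kind of cycle-count and distance conditions the abstract states.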

  10. Effect of vitamin E on oxidative stress in the contralateral testis of neonatal and pubertal hemicastrated rats.

    PubMed

    Han, Woong Kyu; Jin, Mei Hua; Han, Sang Won

    2012-02-01

    To evaluate whether the antioxidant vitamin E can prevent the harmful effects of reactive oxidative stress (ROS) that occur during compensatory testicular hypertrophy (CTH). Thirty Sprague-Dawley rats were divided into six equal groups: neonatal hemicastrated vitamin E (NH_Vit E/NH) and sham surgical controls (NC), and pubertal hemicastrated vitamin E (PH_Vit E/PH) and sham surgical controls (PC). Vitamin E was administered orally to the NH_Vit E and PH_Vit E groups three times a week from week 3-12 prior to sacrifice. Antioxidant enzymes were measured in testis samples from each animal. Differences in superoxide dismutase activity were observed between the NH (21.04 ± 0.48) and NH_Vit E (22.62 ± 0.64) groups (P = 0.008); the PH (20.59 ± 0.11) and PC (20.91 ± 0.20) groups (P = 0.032); and the PH (20.59 ± 0.11) and PH_Vit E (22.32 ± 1.01) groups (P = 0.008). Thiobarbituric acid-reactive substance in the PH and PH_Vit E groups was 0.097 ± 0.022 and 0.036 ± 0.004 (P = 0.008), respectively; and in the NH and NH_Vit E groups it was 0.135 ± 0.02 and 0.039 ± 0.003 (P = 0.008), respectively. These results suggest that CTH is not associated with reducing oxidative injury, nor does it prevent ROS-induced cell damage. However, administration of vitamin E does reduce oxidative injury and prevent ROS-induced cell damage in a hemicastrated rat model. Copyright © 2010 Journal of Pediatric Urology Company. Published by Elsevier Ltd. All rights reserved.

  11. Prediction of bone strength at the distal tibia by HR-pQCT and DXA.

    PubMed

    Popp, Albrecht W; Windolf, Markus; Senn, Christoph; Tami, Andrea; Richards, R Geoff; Brianza, Stefano; Schiuma, Damiano

    2012-01-01

    Areal bone mineral density (aBMD) at the distal tibia, measured at the epiphysis (T-EPI) and diaphysis (T-DIA), is predictive for fracture risk. Structural bone parameters evaluated at the distal tibia by high resolution peripheral quantitative computed tomography (HR-pQCT) displayed differences between healthy and fracture patients. With its simple geometry, T-DIA may allow investigating the correlation between bone structural parameters and bone strength. Anatomical tibiae were examined ex vivo by DXA (aBMD) and HR-pQCT (volumetric BMD (vBMD) and bone microstructural parameters). Cortical thickness (CTh) and polar moment of inertia (pMOI) were derived from DXA measurements. Finally, an index combining material (BMD) and mechanical (pMOI) properties was defined and analyzed for correlation with torque at failure and stiffness values obtained by biomechanical testing. Areal BMD predicted the vBMD at T-EPI and T-DIA. A high correlation was found between aBMD and microstructural parameters at T-EPI as well as between aBMD and CTh at T-DIA. Finally, at T-DIA both indexes combining BMD and pMOI were strongly and comparably correlated with torque at failure and bone stiffness. Ex vivo, at the distal tibial diaphysis, a novel index combining BMD and pMOI, which can be calculated directly from a single DXA measurement, predicted bone strength and stiffness better than either parameter alone and with an order of magnitude comparable to that of HR-pQCT. Whether this index is suitable for better prediction of fracture risk in vivo deserves further investigation. Copyright © 2011 Elsevier Inc. All rights reserved.

  12. Mechanisms of cerebellar tonsil herniation in patients with Chiari malformations as guide to clinical management

    PubMed Central

    Nishikawa, Misao; Kula, Roger W.; Dlugacz, Yosef D.

    2010-01-01

    Background The pathogenesis of Chiari malformations is incompletely understood. We tested the hypothesis that different etiologies have different mechanisms of cerebellar tonsil herniation (CTH), as revealed by posterior cranial fossa (PCF) morphology. Methods In 741 patients with Chiari malformation type I (CM-I) and 11 patients with Chiari malformation type II (CM-II), the size of the occipital enchondrium and volume of the PCF (PCFV) were measured on reconstructed 2D-CT and MR images of the skull. Measurements were compared with those in 80 age- and sex-matched healthy control individuals, and the results were correlated with clinical findings. Results Significant reductions of PCF size and volume were present in 388 patients with classical CM-I, 11 patients with CM-II, and five patients with CM-I and craniosynostosis. Occipital bone size and PCFV were normal in 225 patients with CM-I and occipitoatlantoaxial joint instability, 55 patients with CM-I and tethered cord syndrome (TCS), 30 patients with CM-I and intracranial mass lesions, and 28 patients with CM-I and lumboperitoneal shunts. Ten patients had miscellaneous etiologies. The size and area of the foramen magnum were significantly smaller in patients with classical CM-I and CM-I occurring with craniosynostosis and significantly larger in patients with CM-II and CM-I occurring with TCS. Conclusions Important clues concerning the pathogenesis of CTH were provided by morphometric measurements of the PCF. When these assessments were correlated with etiological factors, the following causal mechanisms were suggested: (1) cranial constriction; (2) cranial settling; (3) spinal cord tethering; (4) intracranial hypertension; and (5) intraspinal hypotension. PMID:20440631

  13. Estimates of the aerosol indirect effect over the Baltic Sea region derived from 12 years of MODIS observations

    NASA Astrophysics Data System (ADS)

    Saponaro, Giulia; Kolmonen, Pekka; Sogacheva, Larisa; Rodriguez, Edith; Virtanen, Timo; de Leeuw, Gerrit

    2017-02-01

    Retrieved from the Moderate Resolution Imaging Spectroradiometer (MODIS) on-board the Aqua satellite, 12 years (2003-2014) of aerosol and cloud properties were used to statistically quantify aerosol-cloud interaction (ACI) over the Baltic Sea region, including the relatively clean Fennoscandia and the more polluted central-eastern Europe. These areas allowed us to study the effects of different aerosol types and concentrations on macro- and microphysical properties of clouds: cloud effective radius (CER), cloud fraction (CF), cloud optical thickness (COT), cloud liquid water path (LWP) and cloud-top height (CTH). Aerosol properties used are aerosol optical depth (AOD), Ångström exponent (AE) and aerosol index (AI). The study was limited to low-level water clouds in the summer. The vertical distributions of the relationships between cloud properties and aerosols show an effect of aerosols on low-level water clouds. CF, COT, LWP and CTH tend to increase with aerosol loading, indicating changes in the cloud structure, while the effective radius of cloud droplets decreases. The ACI is larger at relatively low cloud-top levels, between 900 and 700 hPa. Most of the studied cloud variables were unaffected by the lower-tropospheric stability (LTS), except for the cloud fraction. The spatial distribution of aerosol and cloud parameters and ACI, here defined as the change in CER as a function of aerosol concentration for a fixed LWP, shows positive and statistically significant ACI over the Baltic Sea and Fennoscandia, with the former having the largest values. Small negative ACI values are observed in central-eastern Europe, suggesting that large aerosol concentrations saturate the ACI.
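
The ACI metric used here (the change in CER with aerosol concentration at fixed LWP) is commonly estimated as a log-log slope, ACI = -d ln(CER)/d ln(aerosol). A minimal least-squares sketch with invented numbers:

```python
import math

# Estimate ACI = -d ln(CER) / d ln(AOD) within one fixed-LWP bin by a
# least-squares slope in log-log space. The data points are invented.
aod = [0.05, 0.10, 0.20, 0.40]      # aerosol optical depth
cer = [14.0, 12.8, 11.7, 10.7]      # cloud effective radius, micrometres

x = [math.log(a) for a in aod]
y = [math.log(r) for r in cer]
n = len(x)
mx, my = sum(x) / n, sum(y) / n
slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
aci = -slope
print(f"ACI = {aci:.3f}")  # positive: droplets shrink as aerosol load rises
```

A positive value, as over the Baltic Sea in this study, indicates smaller droplets at higher aerosol loading; the saturation seen over central-eastern Europe would flatten or invert this slope.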

  14. Code Verification Capabilities and Assessments in Support of ASC V&V Level 2 Milestone #6035

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Doebling, Scott William; Budzien, Joanne Louise; Ferguson, Jim Michael

    This document provides a summary of the code verification activities supporting the FY17 Level 2 V&V milestone entitled “Deliver a Capability for V&V Assessments of Code Implementations of Physics Models and Numerical Algorithms in Support of Future Predictive Capability Framework Pegposts.” The physics validation activities supporting this milestone are documented separately. The objectives of this portion of the milestone are: 1) Develop software tools to support code verification analysis; 2) Document standard definitions of code verification test problems; and 3) Perform code verification assessments (focusing on error behavior of algorithms). This report and a set of additional standalone documents serve as the compilation of results demonstrating accomplishment of these objectives.

  15. MODTRAN6: a major upgrade of the MODTRAN radiative transfer code

    NASA Astrophysics Data System (ADS)

    Berk, Alexander; Conforti, Patrick; Kennett, Rosemary; Perkins, Timothy; Hawes, Frederick; van den Bosch, Jeannette

    2014-06-01

    The MODTRAN6 radiative transfer (RT) code is a major advancement over earlier versions of the MODTRAN atmospheric transmittance and radiance model. This version of the code incorporates modern software architecture including an application programming interface, enhanced physics features including a line-by-line algorithm, a supplementary physics toolkit, and new documentation. The application programming interface has been developed for ease of integration into user applications. The MODTRAN code has been restructured towards a modular, object-oriented architecture to simplify upgrades as well as facilitate integration with other developers' codes. MODTRAN now includes a line-by-line algorithm for high resolution RT calculations as well as coupling to optical scattering codes for easy implementation of custom aerosols and clouds.

  16. Optimization and parallelization of the thermal–hydraulic subchannel code CTF for high-fidelity multi-physics applications

    DOE PAGES

    Salko, Robert K.; Schmidt, Rodney C.; Avramova, Maria N.

    2014-11-23

    This study describes major improvements to the computational infrastructure of the CTF subchannel code so that full-core, pincell-resolved (i.e., one computational subchannel per real bundle flow channel) simulations can now be performed in much shorter run-times, either in stand-alone mode or as part of coupled-code multi-physics calculations. These improvements support the goals of the Department of Energy Consortium for Advanced Simulation of Light Water Reactors (CASL) Energy Innovation Hub to develop high-fidelity multi-physics simulation tools for nuclear energy design and analysis.

  17. Statistical physics inspired energy-efficient coded-modulation for optical communications.

    PubMed

    Djordjevic, Ivan B; Xu, Lei; Wang, Ting

    2012-04-15

    Because Shannon's entropy can be obtained by Stirling's approximation of thermodynamics entropy, the statistical physics energy minimization methods are directly applicable to the signal constellation design. We demonstrate that statistical physics inspired energy-efficient (EE) signal constellation designs, in combination with large-girth low-density parity-check (LDPC) codes, significantly outperform conventional LDPC-coded polarization-division multiplexed quadrature amplitude modulation schemes. We also describe an EE signal constellation design algorithm. Finally, we propose the discrete-time implementation of D-dimensional transceiver and corresponding EE polarization-division multiplexed system. © 2012 Optical Society of America
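
The "statistical physics inspired" design idea can be caricatured as placing constellation points like mutually repelling charges and relaxing them under an average-power constraint. The following toy gradient descent is our own illustration, not the authors' EE algorithm:

```python
import math
import random

# Toy energy-minimization constellation design: M points repel via a 1/d
# pairwise potential; after each step the set is renormalized to unit
# average symbol power. Parameters and potential are our own choices.
random.seed(1)
M, steps, lr = 8, 2000, 0.01
pts = [[random.uniform(-1, 1), random.uniform(-1, 1)] for _ in range(M)]

for _ in range(steps):
    grads = [[0.0, 0.0] for _ in range(M)]
    for i in range(M):
        for j in range(M):
            if i == j:
                continue
            dx = pts[i][0] - pts[j][0]
            dy = pts[i][1] - pts[j][1]
            d2 = dx * dx + dy * dy + 1e-9
            # Repulsive 1/d potential -> force ~ 1/d^3; clamp for stability
            f = min(1.0 / (d2 * math.sqrt(d2)), 50.0)
            grads[i][0] += f * dx
            grads[i][1] += f * dy
    for i in range(M):
        pts[i][0] += lr * grads[i][0]
        pts[i][1] += lr * grads[i][1]
    # Power constraint: renormalize to unit average symbol energy
    power = sum(px * px + py * py for px, py in pts) / M
    scale = 1.0 / math.sqrt(power)
    pts = [[px * scale, py * scale] for px, py in pts]

dmin = min(math.dist(pts[i], pts[j]) for i in range(M) for j in range(i + 1, M))
print(f"minimum distance at unit power: {dmin:.3f}")
```

Maximizing the minimum distance at fixed average power is the energy-efficiency goal; a real design would replace the 1/d potential with one matched to the channel and noise statistics.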

  18. Space Applications of the FLUKA Monte-Carlo Code: Lunar and Planetary Exploration

    NASA Technical Reports Server (NTRS)

    Anderson, V.; Ballarini, F.; Battistoni, G.; Campanella, M.; Carboni, M.; Cerutti, F.; Elkhayari, N.; Empl, A.; Fasso, A.; Ferrari, A.; hide

    2004-01-01

    NASA has recognized the need for making additional heavy-ion collision measurements at the U.S. Brookhaven National Laboratory in order to support further improvement of several particle physics transport-code models for space exploration applications. FLUKA has been identified as one of these codes and we will review the nature and status of this investigation as it relates to high-energy heavy-ion physics.

  19. A Continuum Diffusion Model for Viscoelastic Materials

    DTIC Science & Technology

    1988-11-01

    ...these studies, which involved experimental, analytical, and materials science aspects, were conducted by researchers in the fields of physical and... thermodynamics, with irreversibility stemming from the foregoing variables through "growth laws" that correspond to viscous resistance. The physical ageing of...

  20. NHEXAS PHASE I ARIZONA STUDY--STANDARD OPERATING PROCEDURE FOR CODING AND CODING VERIFICATION (HAND ENTRY) (UA-D-14.0)

    EPA Science Inventory

    The purpose of this SOP is to define the coding strategy for coding and coding verification of hand-entered data. It applies to the coding of all physical forms, especially those coded by hand. The strategy was developed for use in the Arizona NHEXAS project and the "Border" st...

  1. Exploring Physics with Computer Animation and PhysGL

    NASA Astrophysics Data System (ADS)

    Bensky, T. J.

    2016-10-01

    This book shows how the web-based PhysGL programming environment (http://physgl.org) can be used to teach and learn elementary mechanics (physics) using simple coding exercises. The book's theme is that the lessons encountered in such a course can be used to generate physics-based animations, providing students with compelling and self-made visuals to aid their learning. Topics presented are parallel to those found in a traditional physics text, making for straightforward integration into a typical lecture-based physics course. Users will appreciate the ease with which compelling OpenGL-based graphics and animations can be produced using PhysGL, as well as its clean, simple language constructs. The author argues that coding should be a standard part of lower-division STEM courses, and provides many anecdotal experiences and observations, including observed benefits of the coding work.

  2. An Analysis of Naval Officer Student Academic Performance in the Operations Analysis Curriculum in Relationship to Academic Profile Codes and other Factors.

    DTIC Science & Technology

    1985-09-01

    Code 0 Physics (Calculus-Based) or Physical Science discipline ... average ... opportunity for officers with inadequate mathematical and physical science backgrounds to establish a good math foundation to be able to qualify for a ... technical curriculum [Ref. 5: page 36]. There is also a six-week refresher available that is designed to rapidly cover the calculus and physics

  3. Three-dimensional fuel pin model validation by prediction of hydrogen distribution in cladding and comparison with experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aly, A.; Avramova, Maria; Ivanov, Kostadin

    To correctly describe and predict this hydrogen distribution there is a need for multi-physics coupling to provide accurate three-dimensional azimuthal, radial, and axial temperature distributions in the cladding. Coupled high-fidelity reactor-physics codes with a sub-channel code as well as with a computational fluid dynamics (CFD) tool have been used to calculate detailed temperature distributions. These high-fidelity coupled neutronics/thermal-hydraulics code systems are coupled further with the fuel-performance BISON code with a kernel (module) for hydrogen. Both hydrogen migration and precipitation/dissolution are included in the model. Results from this multi-physics analysis are validated utilizing calculations of hydrogen distribution using models informed by data from hydrogen experiments and PIE data.

  4. The ZPIC educational code suite

    NASA Astrophysics Data System (ADS)

    Calado, R.; Pardal, M.; Ninhos, P.; Helm, A.; Mori, W. B.; Decyk, V. K.; Vieira, J.; Silva, L. O.; Fonseca, R. A.

    2017-10-01

    Particle-in-Cell (PIC) codes are used in almost all areas of plasma physics, such as fusion energy research, plasma accelerators, space physics, ion propulsion, and plasma processing, among many other areas. In this work, we present the ZPIC educational code suite, a new initiative to foster training in plasma physics using computer simulations. Leveraging our expertise and experience from the development and use of the OSIRIS PIC code, we have developed a suite of 1D/2D fully relativistic electromagnetic PIC codes, as well as a 1D electrostatic code. These codes are self-contained and require only a standard laptop/desktop computer with a C compiler to be run. The output files are written in a new file format called ZDF that can be easily read using the supplied routines in a number of languages, such as Python and IDL. The code suite also includes a number of example problems that can be used to illustrate several textbook and advanced plasma mechanisms, including instructions for parameter space exploration. We also invite contributions to this repository of test problems, which will be made freely available to the community provided the input files comply with the format defined by the ZPIC team. The code suite is freely available and hosted on GitHub at https://github.com/zambzamb/zpic. Work partially supported by PICKSC.
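
The basic PIC cycle that suites like ZPIC teach (deposit charge, solve for the field, interpolate, push) fits in a few dozen lines. This 1D electrostatic sketch in normalized units is our own simplification, not ZPIC source code:

```python
import math

# Minimal 1D electrostatic PIC cycle, normalized units (q = m = eps0 = 1),
# periodic grid, nearest-grid-point weighting, immobile neutralizing ions.
ng, length, npart, dt = 32, 2 * math.pi, 1000, 0.05
dx = length / ng

# Cold electron beam with a small sinusoidal velocity perturbation
xs = [length * i / npart for i in range(npart)]
vs = [0.05 * math.sin(2 * math.pi * x / length) for x in xs]
qp = -length / npart          # particle charge so mean electron density = -1

for _ in range(100):
    # 1) charge deposition (NGP), on top of the +1 ion background
    rho = [1.0] * ng
    for x in xs:
        rho[int(x / dx + 0.5) % ng] += qp / dx
    # 2) field solve: integrate dE/dx = rho, then remove the mean of E
    e, acc = [0.0] * ng, 0.0
    for i in range(ng):
        acc += rho[i] * dx
        e[i] = acc
    emean = sum(e) / ng
    e = [ei - emean for ei in e]
    # 3) push: interpolate E back to particles (NGP) and leapfrog-advance
    for p in range(npart):
        ep = e[int(xs[p] / dx + 0.5) % ng]
        vs[p] += -ep * dt                 # electron: q/m = -1
        xs[p] = (xs[p] + vs[p] * dt) % length

rms = math.sqrt(sum(v * v for v in vs) / npart)
print(f"rms velocity after 100 steps: {rms:.4f}")
```

With these parameters the perturbation seeds a Langmuir oscillation at the plasma frequency (unity in these units); energy sloshes between particles and field while the rms velocity stays bounded.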

  5. The adaption and use of research codes for performance assessment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liebetrau, A.M.

    1987-05-01

    Models of real-world phenomena are developed for many reasons. The models are usually, if not always, implemented in the form of a computer code. The characteristics of a code are determined largely by its intended use. Realizations or implementations of detailed mathematical models of complex physical and/or chemical processes are often referred to as research or scientific (RS) codes. Research codes typically require large amounts of computing time. One example of an RS code is a finite-element code for solving complex systems of differential equations that describe mass transfer through some geologic medium. Considerable computing time is required because computations are done at many points in time and/or space. Codes used to evaluate the overall performance of real-world physical systems are called performance assessment (PA) codes. Performance assessment codes are used to conduct simulated experiments involving systems that cannot be directly observed. Thus, PA codes usually involve repeated simulations of system performance in situations that preclude the use of conventional experimental and statistical methods. 3 figs.

  6. Digitized forensics: retaining a link between physical and digital crime scene traces using QR-codes

    NASA Astrophysics Data System (ADS)

    Hildebrandt, Mario; Kiltz, Stefan; Dittmann, Jana

    2013-03-01

    The digitization of physical traces from crime scenes in forensic investigations in effect creates a digital chain-of-custody and entails the challenge of creating a link between the two or more representations of the same trace. In order to be forensically sound, especially the two security aspects of integrity and authenticity need to be maintained at all times. Especially the adherence to the authenticity using technical means proves to be a challenge at the boundary between the physical object and its digital representations. In this article we propose a new method of linking physical objects with their digital counterparts using two-dimensional bar codes and additional meta-data accompanying the acquired data for integration in the conventional documentation of collection of items of evidence (bagging and tagging process). Using the exemplary chosen QR-code as a particular implementation of a bar code and a model of the forensic process, we also supply a means to integrate our suggested approach into forensically sound proceedings as described by Holder et al.1 We use the example of digital dactyloscopy as a forensic discipline, where progress is currently being made by digitizing some of the processing steps. We show an exemplary demonstrator of the suggested approach using a smartphone as a mobile device for the verification of the physical trace to extend the chain-of-custody from the physical to the digital domain. Our evaluation of the demonstrator is performed towards the readability and the verification of its contents. We can read the bar code despite its limited size of 42 x 42 mm and rather large amount of embedded data using various devices. Furthermore, the QR-code's error correction features help to recover the contents of damaged codes. Subsequently, our appended digital signature allows for detecting malicious manipulations of the embedded data.
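
The integrity/authenticity link described above can be sketched as a signed payload: a hash of the digitized trace plus case metadata, authenticated so that a tampered tag or scan is detectable. This stdlib-only sketch is our illustration of the idea; the field names, key handling, and payload format are invented, and actual QR rendering would use a barcode library:

```python
import hashlib
import hmac
import json

# Hypothetical signing key; real deployments would use proper key escrow.
SECRET = b"lab-escrowed-signing-key"

def make_tag_payload(scan_bytes: bytes, case_id: str, item_id: str) -> str:
    """Build the string that would be rendered as a QR code on the evidence bag."""
    record = {
        "case": case_id,
        "item": item_id,
        "sha256": hashlib.sha256(scan_bytes).hexdigest(),
    }
    body = json.dumps(record, sort_keys=True)
    record["hmac"] = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    return json.dumps(record, sort_keys=True)

def verify_tag_payload(payload: str, scan_bytes: bytes) -> bool:
    """Check both the tag's authenticity (HMAC) and the scan's integrity (hash)."""
    record = json.loads(payload)
    sig = record.pop("hmac")
    body = json.dumps(record, sort_keys=True)
    expected = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    return (hmac.compare_digest(sig, expected)
            and record["sha256"] == hashlib.sha256(scan_bytes).hexdigest())

payload = make_tag_payload(b"fingerprint scan data", "2013-042", "item-07")
print(verify_tag_payload(payload, b"fingerprint scan data"))   # True
print(verify_tag_payload(payload, b"tampered scan data"))      # False
```

The hash ties the physical tag to one specific digital representation, while the HMAC plays the role of the article's appended digital signature: either a modified tag or a substituted scan fails verification.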

  7. The GBS code for tokamak scrape-off layer simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Halpern, F.D., E-mail: federico.halpern@epfl.ch; Ricci, P.; Jolliet, S.

    2016-06-15

    We describe a new version of GBS, a 3D global, flux-driven plasma turbulence code to simulate the turbulent dynamics in the tokamak scrape-off layer (SOL), superseding the code presented by Ricci et al. (2012) [14]. The present work is driven by the objective of studying SOL turbulent dynamics in medium size tokamaks and beyond with a high-fidelity physics model. We emphasize an intertwining framework of improved physics models and the computational improvements that allow them. The model extensions include neutral atom physics, finite ion temperature, the addition of a closed field line region, and a non-Boussinesq treatment of the polarization drift. GBS has been completely refactored with the introduction of a 3-D Cartesian communicator and a scalable parallel multigrid solver. We report dramatically enhanced parallel scalability, with the possibility of treating electromagnetic fluctuations very efficiently. The method of manufactured solutions as a verification process has been carried out for this new code version, demonstrating the correct implementation of the physical model.

  8. Enhanced Verification Test Suite for Physics Simulation Codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kamm, J R; Brock, J S; Brandon, S T

    2008-10-10

    This document discusses problems with which to augment, in quantity and in quality, the existing tri-laboratory suite of verification problems used by Los Alamos National Laboratory (LANL), Lawrence Livermore National Laboratory (LLNL), and Sandia National Laboratories (SNL). The purpose of verification analysis is to demonstrate whether the numerical results of the discretization algorithms in physics and engineering simulation codes provide correct solutions of the corresponding continuum equations. The key points of this document are: (1) Verification deals with mathematical correctness of the numerical algorithms in a code, while validation deals with physical correctness of a simulation in a regime of interest. This document is about verification. (2) The current seven-problem Tri-Laboratory Verification Test Suite, which has been used for approximately five years at the DOE WP laboratories, is limited. (3) Both the methodology for and technology used in verification analysis have evolved and been improved since the original test suite was proposed. (4) The proposed test problems are in three basic areas: (a) Hydrodynamics; (b) Transport processes; and (c) Dynamic strength-of-materials. (5) For several of the proposed problems we provide a 'strong sense verification benchmark', consisting of (i) a clear mathematical statement of the problem with sufficient information to run a computer simulation, (ii) an explanation of how the code result and benchmark solution are to be evaluated, and (iii) a description of the acceptance criterion for simulation code results. (6) It is proposed that the set of verification test problems with which any particular code is evaluated include some of the problems described in this document. Analysis of the proposed verification test problems constitutes part of a necessary--but not sufficient--step that builds confidence in physics and engineering simulation codes.
More complicated test cases, including physics models of greater sophistication or other physics regimes (e.g., energetic material response, magneto-hydrodynamics), would represent a scientifically desirable complement to the fundamental test cases discussed in this report. The authors believe that this document can be used to enhance the verification analyses undertaken at the DOE WP Laboratories and, thus, to improve the quality, credibility, and usefulness of the simulation codes that are analyzed with these problems.
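
    The core computation in this kind of verification analysis can be sketched in a few lines: given errors against an exact solution on two grids, the observed order of accuracy is computed and compared against the scheme's theoretical order. A minimal sketch (the error values below are hypothetical, chosen to mimic a second-order scheme):

```python
import math

def observed_order(err_coarse, err_fine, refinement=2.0):
    """Observed order of accuracy from errors on two grids
    whose spacings differ by the given refinement ratio."""
    return math.log(err_coarse / err_fine) / math.log(refinement)

# A second-order scheme should roughly quarter its error
# when the mesh spacing is halved.
errors = {0.1: 4.0e-3, 0.05: 1.0e-3}
p = observed_order(errors[0.1], errors[0.05])
print(p)  # 2.0
```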

  9. Impact and damage of an armor composite

    NASA Astrophysics Data System (ADS)

    Resnyansky, A. D.; Parry, S.; Bourne, N. K.; Townsend, D.; James, B. J.

    2015-06-01

    The use of carbon fiber composites under shock and impact loading in aerospace, defense, and automotive applications is increasingly important, so prediction of composite behavior and damage under these conditions is critical. The influence of anisotropy, fiber orientation, and loading rate during impact is considered in the present study and validated by comparison with experiments. The experiments comprise planar, ballistic, and Taylor impacts, accompanied by high-speed photography and tomography of recovered samples. The CTH hydrocode is employed as the modeling platform, with an advanced rate-sensitive material model used to describe the deformation and damage of the transversely isotropic composite material.

  10. Study of no-man's land physics in the total-f gyrokinetic code XGC1

    NASA Astrophysics Data System (ADS)

    Ku, Seung Hoe; Chang, C. S.; Lang, J.

    2014-10-01

    While the ``transport shortfall'' in the ``no-man's land'' has been observed often in delta-f codes, it has not yet been observed in the global total-f gyrokinetic particle code XGC1. Since understanding the interaction between the edge and core transport appears to be a critical element in the prediction for ITER performance, understanding the no-man's land issue is an important physics research topic. Simulation results using the Holland case will be presented and the physics causing the shortfall phenomenon will be discussed. Nonlinear nonlocal interaction of turbulence, secondary flows, and transport appears to be the key.

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    BRISC is a developmental prototype for a next-generation “systems-level” integrated performance and safety code (IPSC) for nuclear reactors. Its development served to demonstrate how a lightweight multi-physics coupling approach can be used to tightly couple the physics models in several different physics codes (written in a variety of languages) into one integrated package for simulating accident scenarios in a liquid sodium cooled “burner” nuclear reactor. For example, the RIO Fluid Flow and Heat Transfer code developed at Sandia (SNL: Chris Moen, Dept. 08005) is used in BRISC to model fluid flow and heat transfer, as well as conduction heat transfer in solids. Because BRISC is a prototype, its most practical application is as a foundation or starting point for developing a true production code. The sub-codes and the associated models and correlations currently employed within BRISC were chosen to cover the required application space and demonstrate feasibility, but were not optimized or validated against experimental data within the context of their use in BRISC.

  12. ecode - Electron Transport Algorithm Testing v. 1.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Franke, Brian C.; Olson, Aaron J.; Bruss, Donald Eugene

    2016-10-05

    ecode is a Monte Carlo code used for testing algorithms related to electron transport. The code can read basic physics parameters, such as energy-dependent stopping powers and screening parameters. The code permits simple planar geometries of slabs or cubes. Parallelization consists of domain replication, with work distributed at the start of the calculation and statistical results gathered at the end of the calculation. Some basic routines (such as input parsing, random number generation, and statistics processing) are shared with the Integrated Tiger Series codes. A variety of algorithms for uncertainty propagation are incorporated based on the stochastic collocation and stochastic Galerkin methods. These permit uncertainty only in the total and angular scattering cross sections. The code contains algorithms for simulating stochastic mixtures of two materials. The physics is approximate, ranging from mono-energetic and isotropic scattering to screened Rutherford angular scattering and Rutherford energy-loss scattering (simple electron transport models). No production of secondary particles is implemented, and no photon physics is implemented.
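
    As a rough illustration of the stochastic collocation idea mentioned above (a toy model, not ecode's implementation), an uncertain total cross section can be propagated through a simple transmission model by evaluating the model at Gauss-Legendre quadrature nodes; all numbers here are hypothetical:

```python
import numpy as np

# Toy problem: transmission exp(-sigma * t) through a slab of thickness t
# with an uncertain total cross section sigma ~ Uniform(a, b).
a, b, t = 0.5, 1.5, 2.0

def transmission(sigma):
    return np.exp(-sigma * t)

# Stochastic collocation: evaluate the model at Gauss-Legendre nodes
# mapped from [-1, 1] to [a, b], then combine with quadrature weights.
nodes, weights = np.polynomial.legendre.leggauss(5)
sigma_nodes = 0.5 * (b - a) * nodes + 0.5 * (a + b)
mean_sc = 0.5 * np.sum(weights * transmission(sigma_nodes))

# Analytic mean of the transmission for comparison.
mean_exact = (np.exp(-a * t) - np.exp(-b * t)) / ((b - a) * t)
print(mean_sc, mean_exact)
```

With only five model evaluations, the collocation mean agrees with the analytic mean to many digits, which is the appeal of the method for smooth responses.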

  13. Establishing confidence in complex physics codes: Art or science?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Trucano, T.

    1997-12-31

    The ALEGRA shock wave physics code, currently under development at Sandia National Laboratories and partially supported by the US Advanced Strategic Computing Initiative (ASCI), is generic to a certain class of physics codes: large, multi-application, intended to support a broad user community on the latest generation of massively parallel supercomputers, and in a continual state of formal development. To say that the author has ``confidence`` in the results of ALEGRA is to say something different than that he believes that ALEGRA is ``predictive.`` It is the purpose of this talk to illustrate the distinction between these two concepts. The author elects to perform this task in a somewhat historical manner. He will summarize certain older approaches to code validation. He views these methods as aiming to establish the predictive behavior of the code. These methods are distinguished by their emphasis on local information. He will conclude that these approaches are more art than science.

  14. Convolution Operations on Coding Metasurface to Reach Flexible and Continuous Controls of Terahertz Beams.

    PubMed

    Liu, Shuo; Cui, Tie Jun; Zhang, Lei; Xu, Quan; Wang, Qiu; Wan, Xiang; Gu, Jian Qiang; Tang, Wen Xuan; Qing Qi, Mei; Han, Jia Guang; Zhang, Wei Li; Zhou, Xiao Yang; Cheng, Qiang

    2016-10-01

    The concept of coding metasurface links physical metamaterial particles with digital codes, and hence makes it possible to perform digital signal processing on the coding metasurface to realize unusual physical phenomena. Here, this study performs Fourier operations on coding metasurfaces and proposes a principle, called scattering-pattern shift, based on the convolution theorem, which allows steering of the scattering pattern to an arbitrarily predesigned direction. Owing to the constant reflection amplitude of the coding particles, the required coding pattern can be achieved simply by modulus addition of two coding matrices. This study demonstrates that the scattering patterns calculated directly from the coding pattern using the Fourier transform show excellent agreement with numerical simulations based on realistic coding structures, providing an efficient method for optimizing coding patterns to achieve predesigned scattering beams. The most important advantage of this approach over previous schemes for producing anomalous single-beam scattering is its flexible and continuous control of beams toward arbitrary directions. This work opens a new route to studying metamaterials from a fully digital perspective, predicting the possibility of combining conventional theorems of digital signal processing with the coding metasurface to realize more powerful manipulations of electromagnetic waves.
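
    In a 1-D discrete sketch, the scattering-pattern-shift principle reduces to the Fourier shift theorem. The toy example below (an illustration, not the paper's code) uses a 2-bit coding sequence (digits 0..3 encoding phases 0, π/2, π, 3π/2) and shows that modulus-4 addition of a gradient sequence circularly shifts the FFT of the aperture field:

```python
import numpy as np

N = 16
rng = np.random.default_rng(0)
code = rng.integers(0, 4, size=N)   # arbitrary 2-bit coding sequence
gradient = np.arange(N) % 4         # steering gradient 0,1,2,3,0,...

# Aperture fields: each digit contributes a phase of digit * pi/2.
field = np.exp(1j * np.pi / 2 * code)
field_sum = np.exp(1j * np.pi / 2 * ((code + gradient) % 4))

# Far-field (scattering) patterns as discrete Fourier transforms.
pattern = np.fft.fft(field)
pattern_sum = np.fft.fft(field_sum)

# Modulus addition of codes == circular shift of the scattering pattern,
# the discrete analogue of steering the beam to a new direction.
print(np.allclose(pattern_sum, np.roll(pattern, 4)))  # True
```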

  15. Scheduling observational and physical practice: influence on the coding of simple motor sequences.

    PubMed

    Ellenbuerger, Thomas; Boutin, Arnaud; Blandin, Yannick; Shea, Charles H; Panzer, Stefan

    2012-01-01

    The main purpose of the present experiment was to determine the coordinate system used in the development of movement codes when observational and physical practice are scheduled across practice sessions. The task was to reproduce a 1,300-ms spatial-temporal pattern of elbow flexions and extensions. An intermanual transfer paradigm with a retention test and two effector (contralateral limb) transfer tests was used. The mirror effector transfer test required the same pattern of homologous muscle activation and sequence of limb joint angles as that performed or observed during practice, and the non-mirror effector transfer test required the same spatial pattern of movements as that performed or observed. The test results following the first acquisition session replicated the findings of Gruetzmacher, Panzer, Blandin, and Shea (2011). The results following the second acquisition session indicated a strong advantage for participants who received physical practice in both practice sessions or received observational practice followed by physical practice. This advantage was found on both the retention and the mirror transfer tests compared to the non-mirror transfer test. These results demonstrate that codes based in motor coordinates can be developed relatively quickly and effectively for a simple spatial-temporal movement sequence when participants are provided with physical practice or observation followed by physical practice, but physical practice followed by observational practice or observational practice alone limits the development of codes based in motor coordinates.

  16. Impact Flash Physics: Modeling and Comparisons With Experimental Results

    NASA Astrophysics Data System (ADS)

    Rainey, E.; Stickle, A. M.; Ernst, C. M.; Schultz, P. H.; Mehta, N. L.; Brown, R. C.; Swaminathan, P. K.; Michaelis, C. H.; Erlandson, R. E.

    2015-12-01

    Hypervelocity impacts frequently generate an observable "flash" of light with two components: a short-duration spike due to emissions from vaporized material, and a long-duration peak due to thermal emissions from expanding hot debris. The intensity and duration of these peaks depend on the impact velocity, angle, and the target and projectile mass and composition. Thus remote sensing measurements of planetary impact flashes have the potential to constrain the properties of impacting meteors and improve our understanding of impact flux and cratering processes. Interpreting impact flash measurements requires a thorough understanding of how flash characteristics correlate with impact conditions. Because planetary-scale impacts cannot be replicated in the laboratory, numerical simulations are needed to provide this insight for the solar system. Computational hydrocodes can produce detailed simulations of the impact process, but they lack the radiation physics required to model the optical flash. The Johns Hopkins University Applied Physics Laboratory (APL) developed a model to calculate the optical signature from the hot debris cloud produced by an impact. While the phenomenology of the optical signature is understood, the details required to accurately model it are complicated by uncertainties in material and optical properties and the simplifications required to numerically model radiation from large-scale impacts. Comparisons with laboratory impact experiments allow us to validate our approach and to draw insight regarding processes that occur at all scales in impact events, such as melt generation. We used Sandia National Lab's CTH shock physics hydrocode along with the optical signature model developed at APL to compare with a series of laboratory experiments conducted at the NASA Ames Vertical Gun Range. 
The experiments used Pyrex projectiles to impact pumice powder targets with velocities ranging from 1 to 6 km/s at angles of 30 and 90 degrees with respect to horizontal. High-speed radiometer measurements were made of the time-dependent impact flash at wavelengths of 350-1100 nm. We will present comparisons between these measurements and the output of APL's model. The results of this validation allow us to determine basic relationships between observed optical signatures and impact conditions.
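
    The long-duration thermal component of such a flash is commonly approximated as blackbody emission from the hot debris. A minimal sketch (not APL's model; the temperatures are illustrative) integrates the Planck spectral radiance over the 350-1100 nm radiometer band quoted above:

```python
import numpy as np

h = 6.62607015e-34   # Planck constant, J s
c = 2.99792458e8     # speed of light, m/s
kB = 1.380649e-23    # Boltzmann constant, J/K

def band_radiance(T, lam_lo=350e-9, lam_hi=1100e-9, n=2000):
    """Planck spectral radiance B_lambda integrated over a wavelength
    band via the trapezoidal rule. Returns W / (m^2 sr)."""
    lam = np.linspace(lam_lo, lam_hi, n)
    B = 2.0 * h * c**2 / lam**5 / np.expm1(h * c / (lam * kB * T))
    return np.sum(0.5 * (B[1:] + B[:-1]) * np.diff(lam))

# Hotter debris radiates far more strongly in-band, which is why the
# flash intensity constrains the impact energy partition.
for T in (2000.0, 4000.0):
    print(T, band_radiance(T))
```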

  17. Fundamental Studies in the Molecular Basis of Laser Induced Retinal Damage

    DTIC Science & Technology

    1988-01-01

    Cornell University, School of Applied & Engineering Physics, Ithaca, NY 14853. DOD distribution statement: Approved for public release; distribution unlimited.

  18. Fundamental Studies in the Molecular Basis of Laser Induced Retinal Damage

    DTIC Science & Technology

    1988-01-01

    Cornell University, School of Applied & Engineering Physics, Ithaca, NY 14853. DOD distribution statement: Approved for public release; distribution unlimited.

  19. 29 CFR 1910.144 - Safety color code for marking physical hazards.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... the basic color for the identification of: (i) Fire protection equipment and apparatus. [Reserved] (ii... 29 Labor 5 2011-07-01 2011-07-01 false Safety color code for marking physical hazards. 1910.144 Section 1910.144 Labor Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH...

  20. Modeling coherent errors in quantum error correction

    NASA Astrophysics Data System (ADS)

    Greenbaum, Daniel; Dutton, Zachary

    2018-01-01

    Analysis of quantum error correcting codes is typically done using a stochastic, Pauli channel error model for describing the noise on physical qubits. However, it was recently found that coherent errors (systematic rotations) on physical data qubits result in both physical and logical error rates that differ significantly from those predicted by a Pauli model. Here we examine the accuracy of the Pauli approximation for noise containing coherent errors (characterized by a rotation angle ɛ) under the repetition code. We derive an analytic expression for the logical error channel as a function of arbitrary code distance d and concatenation level n, in the small error limit. We find that coherent physical errors result in logical errors that are partially coherent and therefore non-Pauli. However, the coherent part of the logical error is negligible at fewer than ɛ^{-(dn-1)} error correction cycles when the decoder is optimized for independent Pauli errors, thus providing a regime of validity for the Pauli approximation. Above this number of correction cycles, the persistent coherent logical error will cause logical failure more quickly than the Pauli model would predict, and this may need to be combated with coherent suppression methods at the physical level or larger codes.
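
    The qualitative distinction can be reproduced with a small matrix calculation: repeated coherent rotations interfere, so the flip probability grows quadratically with the number of cycles, while a Pauli (stochastic) model of the same channel predicts roughly linear growth. A minimal single-qubit sketch (parameters hypothetical, no error correction applied):

```python
import numpy as np

eps = 0.01   # small systematic rotation angle per cycle
N = 50       # number of cycles

# Coherent error: N applications of R_x(eps) = exp(-i*eps/2 * X).
X = np.array([[0, 1], [1, 0]], dtype=complex)
R = np.cos(eps / 2) * np.eye(2) - 1j * np.sin(eps / 2) * X

psi = np.array([1, 0], dtype=complex)
for _ in range(N):
    psi = R @ psi
p_coherent = abs(psi[1]) ** 2        # population flipped out of |0>,
                                     # equals sin^2(N*eps/2): quadratic in N

# Pauli approximation: each cycle independently flips with
# probability sin^2(eps/2), accumulating ~linearly for small p.
p_pauli = N * np.sin(eps / 2) ** 2

print(p_coherent, p_pauli)           # ~6.1e-2 vs ~1.2e-3
```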

  1. Numerical Studies of Impurities in Fusion Plasmas

    DOE R&D Accomplishments Database

    Hulse, R. A.

    1982-09-01

    The coupled partial differential equations used to describe the behavior of impurity ions in magnetically confined controlled fusion plasmas require numerical solution for cases of practical interest. Computer codes developed for impurity modeling at the Princeton Plasma Physics Laboratory are used as examples of the types of codes employed for this purpose. These codes solve for the impurity ionization state densities and associated radiation rates using atomic physics appropriate for these low-density, high-temperature plasmas. The simpler codes solve local equations in zero spatial dimensions while more complex cases require codes which explicitly include transport of the impurity ions simultaneously with the atomic processes of ionization and recombination. Typical applications are discussed and computational results are presented for selected cases of interest.
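
    The "simpler codes" solving local equations in zero spatial dimensions amount to coupled rate equations for the charge-state densities. A toy three-charge-state sketch (hypothetical rate coefficients, not the PPPL codes) illustrates the structure and the equilibrium reached when ionization and recombination fluxes balance:

```python
import numpy as np

# Hypothetical rate coefficients (cm^3/s): S[z] ionizes z -> z+1,
# alpha[z] recombines z+1 -> z.
ne = 1.0e13                          # electron density, cm^-3
S = np.array([1.0e-8, 3.0e-9])
alpha = np.array([2.0e-9, 5.0e-9])

n = np.array([1.0, 0.0, 0.0])        # start fully neutral (fractions)
dt = 1.0e-6                          # s, explicit Euler step
for _ in range(20000):
    dn = np.zeros(3)
    dn[0] = ne * (-S[0] * n[0] + alpha[0] * n[1])
    dn[1] = ne * (S[0] * n[0] - (S[1] + alpha[0]) * n[1] + alpha[1] * n[2])
    dn[2] = ne * (S[1] * n[1] - alpha[1] * n[2])
    n = n + dt * dn

# At equilibrium the fluxes balance: n1/n0 -> S0/alpha0, n2/n1 -> S1/alpha1.
print(n[1] / n[0], n[2] / n[1])
```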

  2. ITS version 5.0 : the integrated TIGER series of coupled electron/photon Monte Carlo transport codes.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Franke, Brian Claude; Kensek, Ronald Patrick; Laub, Thomas William

    ITS is a powerful and user-friendly software package permitting state-of-the-art Monte Carlo solution of linear time-independent coupled electron/photon radiation transport problems, with or without the presence of macroscopic electric and magnetic fields of arbitrary spatial dependence. Our goal has been to simultaneously maximize operational simplicity and physical accuracy. Through a set of preprocessor directives, the user selects one of the many ITS codes. The ease with which the makefile system is applied combines with an input scheme based on order-independent descriptive keywords that makes maximum use of defaults and internal error checking to provide experimentalists and theorists alike with a method for the routine but rigorous solution of sophisticated radiation transport problems. Physical rigor is provided by employing accurate cross sections, sampling distributions, and physical models for describing the production and transport of the electron/photon cascade from 1.0 GeV down to 1.0 keV. The availability of source code permits the more sophisticated user to tailor the codes to specific applications and to extend the capabilities of the codes to more complex applications. Version 5.0, the latest version of ITS, contains (1) improvements to the ITS 3.0 continuous-energy codes, (2) multigroup codes with adjoint transport capabilities, and (3) parallel implementations of all ITS codes. Moreover, the general user friendliness of the software has been enhanced through increased internal error checking and improved code portability.

  3. Physical Model for the Evolution of the Genetic Code

    NASA Astrophysics Data System (ADS)

    Yamashita, Tatsuro; Narikiyo, Osamu

    2011-12-01

    Using the shape space of codons and tRNAs, we give a physical description of the genetic code evolution on the basis of the codon capture and ambiguous intermediate scenarios in a consistent manner. In the lowest dimensional version of our description, a physical quantity, the codon level, is introduced. In terms of the codon levels, the two scenarios are classified into two different routes of the evolutionary process. In the case of the ambiguous intermediate scenario, we perform an evolutionary simulation implementing cost-based selection of amino acids and confirm a rapid transition of the code change. Such rapidity mitigates the problem of non-unique translation of the code at the intermediate state, which is the weakness of the scenario. In the case of the codon capture scenario, survival against mutations under a mutational pressure minimizing GC content in genomes is simulated, and it is demonstrated that cells which experience only neutral mutations survive.

  4. Physical-layer network coding in coherent optical OFDM systems.

    PubMed

    Guan, Xun; Chan, Chun-Kit

    2015-04-20

    We present the first experimental demonstration and characterization of the application of optical physical-layer network coding in coherent optical OFDM systems. It combines two optical OFDM frames to share the same link so as to enhance system throughput, while individual OFDM frames can be recovered with digital signal processing at the destined node.
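
    The recovery logic underlying network coding can be shown with plain XOR arithmetic (this illustrates only the bit-level principle that each node recovers the other's frame from the combined signal; it does not model the coherent optical OFDM processing in the paper):

```python
# A relay forms the XOR of two frames A and B; each end node, already
# knowing its own frame, recovers the other's by XORing again,
# since A ^ B ^ A == B.
frame_a = bytes([0x12, 0x34, 0x56])
frame_b = bytes([0xAB, 0xCD, 0xEF])

coded = bytes(x ^ y for x, y in zip(frame_a, frame_b))

recovered_b = bytes(x ^ y for x, y in zip(coded, frame_a))  # node A's view
recovered_a = bytes(x ^ y for x, y in zip(coded, frame_b))  # node B's view

print(recovered_a == frame_a, recovered_b == frame_b)  # True True
```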

  5. Processing module operating methods, processing modules, and communications systems

    DOEpatents

    McCown, Steven Harvey; Derr, Kurt W.; Moore, Troy

    2014-09-09

    A processing module operating method includes using a processing module physically connected to a wireless communications device, requesting that the wireless communications device retrieve encrypted code from a web site and receiving the encrypted code from the wireless communications device. The wireless communications device is unable to decrypt the encrypted code. The method further includes using the processing module, decrypting the encrypted code, executing the decrypted code, and preventing the wireless communications device from accessing the decrypted code. Another processing module operating method includes using a processing module physically connected to a host device, executing an application within the processing module, allowing the application to exchange user interaction data communicated using a user interface of the host device with the host device, and allowing the application to use the host device as a communications device for exchanging information with a remote device distinct from the host device.

  6. High-fidelity plasma codes for burn physics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cooley, James; Graziani, Frank; Marinak, Marty

    Accurate predictions of equation of state (EOS) and of ionic and electronic transport properties are of critical importance for high-energy-density plasma science. Transport coefficients inform radiation-hydrodynamic codes and impact diagnostic interpretation, which in turn impacts our understanding of the development of instabilities, the overall energy balance of burning plasmas, and the efficacy of self-heating from charged-particle stopping. Important processes include thermal and electrical conduction, electron-ion coupling, inter-diffusion, ion viscosity, and charged particle stopping. However, uncertainties in these coefficients are not well established. Fundamental plasma science codes, also called high-fidelity plasma codes, are a relatively recent computational tool that augments both the experimental data and the theoretical foundations of transport coefficients. This paper addresses the current status of HFPC codes and their future development, and the potential impact they may have in improving the predictive capability of the multi-physics hydrodynamic codes used in HED design.

  7. Selection of a computer code for Hanford low-level waste engineered-system performance assessment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McGrail, B.P.; Mahoney, L.A.

    Planned performance assessments for the proposed disposal of low-level waste (LLW) glass produced from remediation of wastes stored in underground tanks at Hanford, Washington, will require calculations of radionuclide release rates from the subsurface disposal facility. These calculations will be done with the aid of computer codes. Currently available computer codes were ranked in terms of the feature sets implemented in each code that match a set of physical, chemical, numerical, and functional capabilities needed to assess release rates from the engineered system. The needed capabilities were identified from an analysis of the important physical and chemical processes expected to affect LLW glass corrosion and the mobility of radionuclides. The highest-ranked computer code was found to be the ARES-CT code, developed at PNL for the US Department of Energy for the evaluation of land disposal sites.

  8. DNA as a Binary Code: How the Physical Structure of Nucleotide Bases Carries Information

    ERIC Educational Resources Information Center

    McCallister, Gary

    2005-01-01

    The DNA triplet code also functions as a binary code. Because double-ring compounds cannot bind to double-ring compounds in the DNA code, the sequence of bases classified simply as purines or pyrimidines can encode for smaller groups of possible amino acids. This is an intuitive approach to teaching the DNA code. (Contains 6 figures.)
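
    The purine/pyrimidine classification described above is easy to state as code; a minimal sketch mapping each base to a binary digit:

```python
# Classify each DNA base as purine (A, G -> 1) or pyrimidine (C, T -> 0),
# collapsing the 4-letter alphabet to the binary code the article describes.
PURINES = {"A", "G"}

def to_binary(seq):
    return "".join("1" if base in PURINES else "0" for base in seq.upper())

print(to_binary("ATGCGT"))  # 101010
```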

  9. Development of a new lattice physics code robin for PWR application

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, S.; Chen, G.

    2013-07-01

    This paper presents a description of the methodologies and preliminary verification results of ROBIN, a new lattice physics code being developed for PWR application at Shanghai NuStar Nuclear Power Technology Co., Ltd. The methods used in ROBIN to fulfill the various tasks of lattice physics analysis integrate historical methods with methods that have emerged very recently. Not only are methods like equivalence theory for resonance treatment and the method of characteristics for neutron transport calculation adopted, as they are applied in many of today's production-level LWR lattice codes, but very useful new methods, like the enhanced neutron current method for Dancoff correction in large and complicated geometry and the log-linear rate constant power depletion method for Gd-bearing fuel, are also implemented in the code. A small sample of verification results is provided to illustrate the type of accuracy achievable using ROBIN. It is demonstrated that ROBIN is capable of satisfying most of the needs of PWR lattice analysis and has the potential to become a production-quality code in the future. (authors)

  10. Extension of the XGC code for global gyrokinetic simulations in stellarator geometry

    NASA Astrophysics Data System (ADS)

    Cole, Michael; Moritaka, Toseo; White, Roscoe; Hager, Robert; Ku, Seung-Hoe; Chang, Choong-Seock

    2017-10-01

    In this work, the total-f, gyrokinetic particle-in-cell code XGC is extended to treat stellarator geometries. Improvements to meshing tools and the code itself have enabled the first physics studies, including single particle tracing and flux surface mapping in the magnetic geometry of the heliotron LHD and quasi-isodynamic stellarator Wendelstein 7-X. These have provided the first successful test cases for our approach. XGC is uniquely placed to model the complex edge physics of stellarators. A roadmap to such a global confinement modeling capability will be presented. Single particle studies will include the physics of energetic particles' global stochastic motions and their effect on confinement. Good confinement of energetic particles is vital for a successful stellarator reactor design. These results can be compared in the core region with those of other codes, such as ORBIT3d. In subsequent work, neoclassical transport and turbulence can then be considered and compared to results from codes such as EUTERPE and GENE. After sufficient verification in the core region, XGC will move into the stellarator edge region including the material wall and neutral particle recycling.

  11. Light element opacities of astrophysical interest from ATOMIC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Colgan, J.; Kilcrease, D. P.; Magee, N. H. Jr.

    We present new calculations of local-thermodynamic-equilibrium (LTE) light element opacities from the Los Alamos ATOMIC code for systems of astrophysical interest. ATOMIC is a multi-purpose code that can generate LTE or non-LTE quantities of interest at various levels of approximation. Our calculations, which include fine-structure detail, represent a systematic improvement over previous Los Alamos opacity calculations using the LEDCOP legacy code. The ATOMIC code uses ab-initio atomic structure data computed from the CATS code, which is based on Cowan's atomic structure codes, and photoionization cross section data computed from the Los Alamos ionization code GIPPER. ATOMIC also incorporates a new equation-of-state (EOS) model based on the chemical picture. ATOMIC incorporates some physics packages from LEDCOP and also includes additional physical processes, such as improved free-free cross sections and additional scattering mechanisms. Our new calculations are made for elements of astrophysical interest and for a wide range of temperatures and densities.

  12. WDEC: A Code for Modeling White Dwarf Structure and Pulsations

    NASA Astrophysics Data System (ADS)

    Bischoff-Kim, Agnès; Montgomery, Michael H.

    2018-05-01

    The White Dwarf Evolution Code (WDEC), written in Fortran, makes models of white dwarf stars. It is fast, versatile, and includes the latest physics. The code evolves hot (∼100,000 K) input models down to a chosen effective temperature by relaxing the models to be solutions of the equations of stellar structure. The code can also be used to obtain g-mode oscillation modes for the models. WDEC has a long history going back to the late 1960s. Over the years, it has been updated and re-packaged for modern computer architectures and has specifically been used in computationally intensive asteroseismic fitting. Generations of white dwarf astronomers and dozens of publications have made use of the WDEC, although the last true instrument paper is the original one, published in 1975. This paper discusses the history of the code, necessary to understand why it works the way it does, details the physics and features in the code today, and points the reader to where to find the code and a user guide.

  13. Model for intensity calculation in electron guns

    NASA Astrophysics Data System (ADS)

    Doyen, O.; De Conto, J. M.; Garnier, J. P.; Lefort, M.; Richard, N.

    2007-04-01

    The calculation of the current in an electron gun structure is one of the main investigations involved in understanding electron gun physics. In particular, various simulation codes exist but often show important discrepancies with experiments; moreover, those differences cannot be reduced because of the lack of physical information in these codes. We present a simple physical three-dimensional model, valid for all kinds of gun geometries. This model is more precise than the other simulation codes and models encountered and allows a real understanding of electron gun physics. It is based only on the calculation of the Laplace electric field at the cathode, the use of the classical Child-Langmuir current density, and a geometrical correction to this law. Finally, the intensity-versus-voltage characteristic curve can be precisely described with only a few physical parameters. Indeed, we have shown that only the shape of the electric field at the cathode without beam, and the gap distance of an equivalent infinite planar diode, mainly govern the electron gun current generation.
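
    The classical Child-Langmuir law referenced above gives, for a planar diode of gap d at voltage V, the space-charge-limited current density J = (4 ε0 / 9) √(2e/m) V^{3/2} / d². A sketch of the uncorrected law (the model in the paper then applies a geometrical correction, which is not reproduced here):

```python
import math

eps0 = 8.8541878128e-12   # vacuum permittivity, F/m
e = 1.602176634e-19       # elementary charge, C
m = 9.1093837015e-31      # electron mass, kg

def child_langmuir(V, d):
    """Space-charge-limited current density (A/m^2) for an ideal planar
    diode with gap voltage V (volts) and gap spacing d (meters)."""
    return (4.0 * eps0 / 9.0) * math.sqrt(2.0 * e / m) * V**1.5 / d**2

# Example: 10 kV across a 1 cm gap.
print(child_langmuir(10e3, 0.01))  # roughly 2.3e4 A/m^2
```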

  14. 40 CFR 51.50 - What definitions apply to this subpart?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... accuracy description (MAD) codes means a set of six codes used to define the accuracy of latitude/longitude data for point sources. The six codes and their definitions are: (1) Coordinate Data Source Code: The... physical piece of or a closely related set of equipment. The EPA's reporting format for a given inventory...

  15. Theoretical atomic physics code development I: CATS: Cowan Atomic Structure Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abdallah, J. Jr.; Clark, R.E.H.; Cowan, R.D.

    An adaptation of R.D. Cowan's Atomic Structure program, CATS, has been developed as part of the Theoretical Atomic Physics (TAPS) code development effort at Los Alamos. CATS has been designed to be easy to run and to produce data files that can easily interface with other programs. The CATS-produced data files currently include wave functions, energy levels, oscillator strengths, plane-wave-Born electron-ion collision strengths, photoionization cross sections, and a variety of other quantities. This paper describes the use of CATS. 10 refs.

  16. Final report on LDRD project : coupling strategies for multi-physics applications.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hopkins, Matthew Morgan; Moffat, Harry K.; Carnes, Brian

    Many current and future modeling applications at Sandia, including ASC milestones, will critically depend on the simultaneous solution of vastly different physical phenomena. Issues due to code coupling are often not addressed, understood, or even recognized. The objectives of this LDRD have been both theoretical and in code development. We show that we have provided a fundamental analysis of coupling, i.e., when strong coupling vs. a successive substitution strategy is needed. We have enabled the implementation of tighter coupling strategies through additions to the NOX and Sierra code suites to make coupling strategies available now, leveraging existing functionality to do this. Specifically, we have built into NOX the capability to handle fully coupled simulations from multiple codes, and also the capability to handle Jacobian-free Newton-Krylov simulations that link multiple applications. We show how this capability may be accessed from within the Sierra Framework as well as from outside of Sierra. The critical impact of this LDRD is that we have shown how, and have delivered strategies for, enabling strong Newton-based coupling while respecting the modularity of existing codes. This will facilitate the use of these codes in a coupled manner to solve multi-physics applications.

  17. Simulation of Shear Alfvén Waves in LAPD using the BOUT++ code

    NASA Astrophysics Data System (ADS)

    Wei, Di; Friedman, B.; Carter, T. A.; Umansky, M. V.

    2011-10-01

    The linear and nonlinear physics of shear Alfvén waves is investigated using the 3D Braginskii fluid code BOUT++. The code has been verified against analytical calculations for the dispersion of kinetic and inertial Alfvén waves. Various mechanisms for forcing Alfvén waves in the code are explored, including introducing localized current sources similar to physical antennas used in experiments. Using this foundation, the code is used to model nonlinear interactions among shear Alfvén waves in a cylindrical magnetized plasma, such as that found in the Large Plasma Device (LAPD) at UCLA. In the future this investigation will allow for examination of the nonlinear interactions between shear Alfvén waves in both laboratory and space plasmas in order to compare to predictions of MHD turbulence.
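The verification against analytical dispersion relations mentioned above can be illustrated with the standard textbook forms of the two branches; the sketch below assumes those forms (ion sound gyroradius rho_s for the kinetic branch, electron inertial length delta_e for the inertial branch) and is not taken from BOUT++ itself.

```python
import math

def omega_kaw(kpar, kperp, v_a, rho_s):
    """Kinetic Alfven wave: finite-electron-pressure correction raises the
    frequency above the MHD shear Alfven result, omega = kpar * v_a."""
    return kpar * v_a * math.sqrt(1.0 + (kperp * rho_s) ** 2)

def omega_iaw(kpar, kperp, v_a, delta_e):
    """Inertial Alfven wave: electron inertia lowers the frequency below
    the MHD shear Alfven result."""
    return kpar * v_a / math.sqrt(1.0 + (kperp * delta_e) ** 2)
```

Both expressions reduce to the MHD shear Alfvén wave, omega = kpar * v_a, as kperp goes to zero, which is the limit a code verification would check first.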

  18. Modern Teaching Methods in Physics with the Aid of Original Computer Codes and Graphical Representations

    ERIC Educational Resources Information Center

    Ivanov, Anisoara; Neacsu, Andrei

    2011-01-01

    This study describes the possibility and advantages of utilizing simple computer codes to complement the teaching techniques for high school physics. The authors have begun working on a collection of open-source programs which allow students to compare the results and graphics from classroom exercises with the correct solutions and, furthermore, to…

  19. Physical models, cross sections, and numerical approximations used in MCNP and GEANT4 Monte Carlo codes for photon and electron absorbed fraction calculation.

    PubMed

    Yoriyaz, Hélio; Moralles, Maurício; Siqueira, Paulo de Tarso Dalledone; Guimarães, Carla da Costa; Cintra, Felipe Belonsi; dos Santos, Adimir

    2009-11-01

    Radiopharmaceutical applications in nuclear medicine require a detailed dosimetric estimate of the radiation energy delivered to human tissues. Over the past years, several publications have addressed the problem of internal dose estimation in volumes of several sizes considering photon and electron sources, most of them using Monte Carlo radiation transport codes. Despite the widespread use of these codes, owing to the variety of resources and potentials they offer for carrying out dose calculations, several aspects, such as the physical models, cross sections, and numerical approximations used in the simulations, remain objects of study. An accurate dose estimate depends on the correct selection of a set of simulation options that should be carefully chosen. This article presents an analysis of several simulation options provided by two of the most widely used codes: MCNP and GEANT4. For this purpose, comparisons of absorbed fraction estimates obtained with different physical models, cross sections, and numerical approximations are presented for spheres of several sizes composed of five different biological tissues. Considerable discrepancies have been found in some cases, not only between the two codes but also between different cross sections and algorithms within the same code. The maximum differences found between the two codes are 5.0% and 10% for photons and electrons, respectively. Even for problems as simple as spheres with uniform radiation sources, the set of parameters chosen in any Monte Carlo code significantly affects the final results of a simulation, demonstrating the importance of the correct choice of parameters.
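The quantity being compared above can be illustrated with a deliberately simplified single-interaction Monte Carlo estimate of the photon absorbed fraction for a uniform source in a homogeneous sphere. This toy assumes every photon deposits its energy locally at its first interaction site; production codes such as MCNP and GEANT4 track scattering and secondaries, which is precisely where the model-dependent discrepancies discussed above arise.

```python
import math
import random

def absorbed_fraction_sphere(mu, radius, n=200_000, seed=1):
    """Toy single-interaction Monte Carlo absorbed fraction.

    mu: linear attenuation coefficient (1/cm); radius: sphere radius (cm).
    Photons are born uniformly in the sphere, emitted isotropically, and
    counted as absorbed if their sampled free path ends inside the sphere.
    """
    rng = random.Random(seed)
    absorbed = 0
    for _ in range(n):
        # Uniform emission point inside the sphere (rejection sampling).
        while True:
            p = [rng.uniform(-radius, radius) for _ in range(3)]
            if p[0]**2 + p[1]**2 + p[2]**2 <= radius**2:
                break
        # Isotropic direction.
        cost = rng.uniform(-1.0, 1.0)
        phi = rng.uniform(0.0, 2.0 * math.pi)
        sint = math.sqrt(1.0 - cost * cost)
        d = (sint * math.cos(phi), sint * math.sin(phi), cost)
        # Distance from p along d to the sphere surface.
        b = sum(pi * di for pi, di in zip(p, d))
        r2 = sum(pi * pi for pi in p)
        t_exit = -b + math.sqrt(b * b + radius**2 - r2)
        # Sample the exponential free path; interaction inside => absorbed.
        s = -math.log(1.0 - rng.random()) / mu
        if s < t_exit:
            absorbed += 1
    return absorbed / n
```

As expected physically, the absorbed fraction grows with mu times radius: small spheres leak most of the emitted energy, large or dense ones retain it.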

  20. CHARRON: Code for High Angular Resolution of Rotating Objects in Nature

    NASA Astrophysics Data System (ADS)

    Domiciano de Souza, A.; Zorec, J.; Vakili, F.

    2012-12-01

    Rotation is one of the fundamental physical parameters governing stellar physics and evolution. At the same time, spectrally resolved optical/IR long-baseline interferometry has proven to be an important observing tool for measuring many physical effects linked to rotation, in particular stellar flattening, gravity darkening, and differential rotation. In order to interpret the high angular resolution observations from modern spectro-interferometers, such as VLTI/AMBER and VEGA/CHARA, we have developed an interferometry-oriented numerical model: CHARRON (Code for High Angular Resolution of Rotating Objects in Nature). We present here the characteristics of CHARRON, which is faster (≃10-30 s per model) and thus better adapted to model fitting than the first version of the code presented by Domiciano de Souza et al. (2002).

  1. Simulating Coupling Complexity in Space Plasmas: First Results from a new code

    NASA Astrophysics Data System (ADS)

    Kryukov, I.; Zank, G. P.; Pogorelov, N. V.; Raeder, J.; Ciardo, G.; Florinski, V. A.; Heerikhuisen, J.; Li, G.; Petrini, F.; Shematovich, V. I.; Winske, D.; Shaikh, D.; Webb, G. M.; Yee, H. M.

    2005-12-01

    The development of codes that embrace 'coupling complexity' via the self-consistent incorporation of multiple physical scales and multiple physical processes in models has been identified by the NRC Decadal Survey in Solar and Space Physics as a crucial necessary development in simulation/modeling technology for the coming decade. The National Science Foundation, through its Information Technology Research (ITR) Program, is supporting our efforts to develop a new class of computational code for plasmas and neutral gases that integrates multiple scales and multiple physical processes and descriptions. We are developing a highly modular, parallelized, scalable code that incorporates multiple scales by synthesizing 3 simulation technologies: 1) computational fluid dynamics (hydrodynamics or magnetohydrodynamics, MHD) for the large-scale plasma; 2) direct Monte Carlo simulation of atoms/neutral gas; and 3) transport code solvers to model highly energetic particle distributions. We are constructing the code so that a fourth simulation technology, hybrid simulations for microscale structures and particle distributions, can be incorporated in future work, but for the present, this aspect will be addressed at a test-particle level. This synthesis will provide a computational tool that will enormously advance our understanding of the physics of neutral and charged gases. Besides making major advances in basic plasma physics and neutral gas problems, this project will address 3 Grand Challenge space physics problems that reflect our research interests: 1) To develop a temporal global heliospheric model which includes the interaction of solar and interstellar plasma with neutral populations (hydrogen, helium, etc., and dust), test-particle kinetic pickup ion acceleration at the termination shock, anomalous cosmic ray production, and interaction with galactic cosmic rays, while incorporating the time variability of the solar wind and the solar cycle.
2) To develop a coronal mass ejection and interplanetary shock propagation model for the inner and outer heliosphere, including, at a test-particle level, wave-particle interactions and particle acceleration at traveling shock waves and compression regions. 3) To develop an advanced Geospace General Circulation Model (GGCM) capable of realistically modeling space weather events, in particular the interaction with CMEs and geomagnetic storms. Furthermore, by implementing scalable run-time supports and sophisticated off- and on-line prediction algorithms, we anticipate important advances in the development of automatic and intelligent system software to optimize a wide variety of 'embedded' computations on parallel computers. Finally, public domain MHD and hydrodynamic codes had a transforming effect on space and astrophysics. We expect that our new generation, open source, public domain multi-scale code will have a similar transformational effect in a variety of disciplines, opening up new classes of problems to physicists and engineers alike.

  2. Comparison of Space Radiation Calculations from Deterministic and Monte Carlo Transport Codes

    NASA Technical Reports Server (NTRS)

    Adams, J. H.; Lin, Z. W.; Nasser, A. F.; Randeniya, S.; Tripathi, r. K.; Watts, J. W.; Yepes, P.

    2010-01-01

    The presentation outline includes motivation, the radiation transport codes being considered, the space radiation cases being considered, results for slab geometry, results for spherical geometry, and a summary. The codes considered are HZETRN, UPROP, FLUKA, and GEANT4; the cases cover slab geometry with SPE and GCR source spectra.

  3. Clear cell and endometrioid carcinomas: are their differences attributable to distinct cells of origin?

    PubMed

    Cochrane, Dawn R; Tessier-Cloutier, Basile; Lawrence, Katherine M; Nazeran, Tayyebeh; Karnezis, Anthony N; Salamanca, Clara; Cheng, Angela S; McAlpine, Jessica N; Hoang, Lien N; Gilks, C Blake; Huntsman, David G

    2017-09-01

    Endometrial epithelium is the presumed tissue of origin for both eutopic and endometriosis-derived clear cell and endometrioid carcinomas. We had previously hypothesized that the morphological, biological and clinical differences between these carcinomas are due to histotype-specific mutations. Although some mutations and genomic landscape features are more likely to be found in one of these histotypes, we were not able to identify a single class of mutations that was exclusively present in one histotype and not the other. This lack of genomic differences led us to an alternative hypothesis that these cancers could arise from distinct cells of origin within endometrial tissue, and that it is the cellular context that accounts for their differences. In a proteomic screen, we identified cystathionine γ-lyase (CTH) as a marker for clear cell carcinoma, as it is expressed at high levels in clear cell carcinomas of the ovary and endometrium. In the current study, we analysed normal Müllerian tissues, and found that CTH is expressed in ciliated cells of endometrium (both eutopic endometrium and endometriosis) and fallopian tubes. We then demonstrated that other ciliated cell markers are expressed in clear cell carcinomas, whereas endometrial secretory cell markers are expressed in endometrioid carcinomas. The same differential staining of secretory and ciliated cells was demonstrable in a three-dimensional organoid culture system, in which stem cells were stimulated to differentiate into an admixture of secretory and ciliated cells. These data suggest that endometrioid carcinomas are derived from cells of the secretory cell lineage, whereas clear cell carcinomas are derived from, or have similarities to, cells of the ciliated cell lineage. Copyright © 2017 Pathological Society of Great Britain and Ireland. Published by John Wiley & Sons, Ltd.

  4. The cortical damage, early relapses, and onset of the progressive phase in multiple sclerosis.

    PubMed

    Scalfari, Antonio; Romualdi, Chiara; Nicholas, Richard S; Mattoscio, Miriam; Magliozzi, Roberta; Morra, Aldo; Monaco, Salvatore; Muraro, Paolo A; Calabrese, Massimiliano

    2018-05-16

    To investigate the relationship among cortical radiologic changes, the number of early relapses (ERs), and the long-term course of multiple sclerosis (MS). In this cohort study, we assessed the number of cortical lesions (CLs) and white matter (WM) lesions and the cortical thickness (Cth) at clinical onset and after a mean of 7.9 years among 219 patients with relapsing-remitting (RR) MS with 1 (Low-ER), 2 (Mid-ER), and ≥3 (High-ER) ERs during the first 2 years. Kaplan-Meier and Cox regression analyses investigated early factors influencing the risk of secondary progressive (SP) MS. Fifty-nine patients (27%) converted to SPMS after a mean of 6.1 years. A larger number of CLs at onset predicted a higher risk of SPMS (hazard ratio [HR] 2.16, 4.79, and 12.3 for 2, 5, and 7 CLs, respectively; p < 0.001) and a shorter latency to progression. The High-ER group, compared to the Low-ER and Mid-ER groups, had a larger volume of WM lesions and CLs at onset, accrued more CLs, experienced more severe cortical atrophy over time, and entered the SP phase more rapidly. In the multivariate model, older age at onset (HR 1.97, p < 0.001), larger baseline CL (HR 2.21, p = 0.005) and WM lesion (HR 1.32, p = 0.03) volumes, early changes of global Cth (HR 1.36, p = 0.03), and ≥3 ERs (HR 6.08, p < 0.001) independently predicted a higher probability of SP. Extensive cortical damage at onset is associated with florid inflammatory clinical activity and predisposes to a rapid occurrence of the progressive phase. Age at onset, the number of early attacks, and the extent of baseline focal cortical damage can identify groups at high risk of progression who may benefit from more active therapy. © 2018 American Academy of Neurology.
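The survival analysis above rests on the Kaplan-Meier estimator, S(t) = prod(1 - d_i/n_i) over the ordered event times; a minimal pure-Python version, run here on hypothetical follow-up data rather than the study cohort, looks like this:

```python
def kaplan_meier(times, events):
    """Minimal Kaplan-Meier estimator (toy illustration, not the study's
    analysis). times: follow-up in years; events: 1 = converted (e.g. to
    SPMS), 0 = censored. Returns [(t, S(t))] at each event time."""
    n = len(times)
    order = sorted(range(n), key=lambda i: times[i])
    at_risk = n
    surv = 1.0
    curve = []
    i = 0
    while i < n:
        t = times[order[i]]
        deaths = ties = 0
        # Group all subjects sharing this follow-up time.
        while i < n and times[order[i]] == t:
            deaths += events[order[i]]
            ties += 1
            i += 1
        if deaths:
            surv *= 1.0 - deaths / at_risk  # step down at event times only
            curve.append((t, surv))
        at_risk -= ties                      # censored subjects leave the risk set
    return curve
```

Censored subjects shrink the risk set without stepping the curve down, which is what distinguishes this estimator from a naive event fraction.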

  5. Geometric and optical properties of cirrus clouds inferred from three-year ground-based lidar and CALIOP measurements over Seoul, Korea

    NASA Astrophysics Data System (ADS)

    Kim, Yumi; Kim, Sang-Woo; Kim, Man-Hae; Yoon, Soon-Chang

    2014-03-01

    This study examines cirrus cloud top and bottom heights (CTH and CBH, respectively) and the associated optical properties revealed by a ground-based lidar in Seoul (SNU-L), Korea, and the space-borne Cloud-Aerosol Lidar with Orthogonal Polarization (CALIOP), obtained during a three-year measurement period between July 2006 and June 2009. From two selected cases, we found good agreement between the ground-based lidar and the space-borne CALIOP in CTH, CBH, and cirrus cloud optical depth (COD). In particular, CODs at a wavelength of 532 nm calculated from the three years of SNU-L and CALIOP measurements were 0.417 ± 0.394 and 0.425 ± 0.479, respectively. The fraction of COD lower than 0.1 was approximately 17% and 25% of the total SNU-L and CALIOP profiles, respectively, and approximately 50% of both lidars' profiles were classified as sub-visual or optically thin (COD < 0.3). The mean depolarization ratio was estimated to be 0.30 ± 0.06 for SNU-L and 0.34 ± 0.08 for CALIOP. The monthly variation of CODs from SNU-L and CALIOP measurements was not distinct, whereas cirrus altitudes from both SNU-L and CALIOP showed distinct monthly variation. CALIOP observations showed that cirrus clouds reached the tropopause level in all months, whereas the up-looking SNU-L did not detect cirrus clouds near the tropopause in summer due to signal attenuation by underlying optically thick clouds. The cloud layer thickness (CLT) and COD showed a distinct linear relationship up to a CLT of approximately 2 km; however, the COD did not increase but remained constant when the CLT was greater than 2.0 km. The ice crystal content, lidar signal attenuation, and the presence of multi-layered cirrus clouds may have contributed to this tendency.
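The COD figures quoted above come from integrating the retrieved extinction coefficient over the cloud layer. The sketch below shows that integration with an illustrative profile; the 0.3 thin-cirrus limit is the one used in this study, while the 0.03 sub-visual cutoff is a common convention assumed here for illustration.

```python
def cloud_optical_depth(heights_km, extinction_per_km):
    """Trapezoidal integration of an extinction profile over a cloud layer;
    the profile values are illustrative, not SNU-L or CALIOP retrievals."""
    cod = 0.0
    for i in range(1, len(heights_km)):
        dz = heights_km[i] - heights_km[i - 1]
        cod += 0.5 * (extinction_per_km[i] + extinction_per_km[i - 1]) * dz
    return cod

def classify(cod):
    """Bin a cirrus layer by its optical depth (assumed thresholds)."""
    if cod < 0.03:
        return "sub-visual"
    if cod < 0.3:
        return "optically thin"
    return "opaque"
```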

  6. Quantitative Computed Tomography Ventriculography for Assessment and Monitoring of Hydrocephalus: A Pilot Study and Description of Method in Subarachnoid Hemorrhage.

    PubMed

    Multani, Jasjit Singh; Oermann, Eric Karl; Titano, Joseph; Mascitelli, Justin; Nicol, Kelly; Feng, Rui; Skovrlj, Branko; Pain, Margaret; Mocco, J D; Bederson, Joshua B; Costa, Anthony; Shrivastava, Raj

    2017-08-01

    There is no facile quantitative method for monitoring hydrocephalus (HCP). We propose quantitative computed tomography (CT) ventriculography (qCTV) as a novel computer vision tool for empirically assessing HCP in patients with subarachnoid hemorrhage (SAH). Twenty patients with SAH who were evaluated for ventriculoperitoneal shunt (VPS) placement were selected for inclusion. Ten patients with normal head computed tomography (CTH) findings were analyzed as negative controls. CTH scans were segmented both manually and automatically (by qCTV) to generate measures of ventricular volume. The median manually calculated ventricular volume was 36.1 cm³ (interquartile range [IQR], 30-115 cm³), which was similar to the median qCTV-measured volume of 37.5 cm³ (IQR, 32-118 cm³) (P = 0.796). Patients undergoing VPS placement demonstrated an increase in median ventricular volume on qCTV from 21 cm³ to 40 cm³ on day T-2 and to 51 cm³ by day 0, a change of 144%. This is in contrast to patients who did not require shunting, in whom median ventricular volume decreased from 16 cm³ to 14 cm³ on day T-2 and to 13 cm³ by day 0, an average overall volume decrease of 19% (P = 0.001). The average change in ventricular volume predicted which patients would require VPS placement, successfully identifying 7 of 10 patients (P = 0.004). Using an optimized cutoff of a change in ventricular volume of 2.5 cm³ identified all patients who went on to require VPS placement (10 of 10; P = 0.011). qCTV is a reliable means of quantifying ventricular volume and hydrocephalus. This technique offers a new tool for monitoring neurosurgical patients for hydrocephalus, and may be beneficial in future research studies as well as in the routine care of patients with hydrocephalus. Copyright © 2017 Elsevier Inc. All rights reserved.
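Once a segmentation exists, the qCTV quantities reduce to two simple operations: a voxel count scaled by the voxel volume, and a threshold on the volume change. The sketch below illustrates that logic only; the authors' segmentation pipeline (the hard part) is not reproduced, and the helper names are hypothetical.

```python
def ventricular_volume_cm3(mask, voxel_mm3):
    """Volume from a binary segmentation: count ventricle voxels in a
    slices-by-rows-by-columns 0/1 mask and scale by voxel volume
    (mm^3 -> cm^3)."""
    n_voxels = sum(sum(row) for slice_ in mask for row in slice_)
    return n_voxels * voxel_mm3 / 1000.0

def predict_vps(volume_baseline_cm3, volume_day0_cm3, cutoff_cm3=2.5):
    """Apply the paper's optimized cutoff: a rise of >= 2.5 cm^3 flags a
    likely ventriculoperitoneal-shunt candidate (illustrative logic only)."""
    return (volume_day0_cm3 - volume_baseline_cm3) >= cutoff_cm3
```

With the study's median trajectories, the shunted group (21 to 51 cm³) is flagged and the non-shunted group (16 to 13 cm³) is not.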

  7. ITS Version 6 : the integrated TIGER series of coupled electron/photon Monte Carlo transport codes.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Franke, Brian Claude; Kensek, Ronald Patrick; Laub, Thomas William

    2008-04-01

    ITS is a powerful and user-friendly software package permitting state-of-the-art Monte Carlo solution of linear, time-independent coupled electron/photon radiation transport problems, with or without the presence of macroscopic electric and magnetic fields of arbitrary spatial dependence. Our goal has been to simultaneously maximize operational simplicity and physical accuracy. Through a set of preprocessor directives, the user selects one of the many ITS codes. The ease with which the makefile system is applied combines with an input scheme based on order-independent descriptive keywords that makes maximum use of defaults and internal error checking, providing experimentalists and theorists alike with a method for the routine but rigorous solution of sophisticated radiation transport problems. Physical rigor is provided by employing accurate cross sections, sampling distributions, and physical models for describing the production and transport of the electron/photon cascade from 1.0 GeV down to 1.0 keV. The availability of source code permits the more sophisticated user to tailor the codes to specific applications and to extend their capabilities to more complex applications. Version 6, the latest version of ITS, contains (1) improvements to the ITS 5.0 codes and (2) conversion to Fortran 90. The general user-friendliness of the software has been enhanced through memory allocation that reduces the need for users to modify and recompile the code.

  8. Report from the Integrated Modeling Panel at the Workshop on the Science of Ignition on NIF

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marinak, M; Lamb, D

    2012-07-03

    This section deals with the multiphysics radiation-hydrodynamics codes used to design and simulate targets in the ignition campaign. The discussion encompasses all the physical processes these codes model, including consideration of any approximations made necessary by finite computer resources. The section focuses on which developments would have the highest impact on reducing the modeling uncertainties most relevant to experimental observations. It also considers how the ICF codes should be employed in the ignition campaign, including how the experiments can best be structured to test the physical models the codes employ.

  9. Design Considerations of a Virtual Laboratory for Advanced X-ray Sources

    NASA Astrophysics Data System (ADS)

    Luginsland, J. W.; Frese, M. H.; Frese, S. D.; Watrous, J. J.; Heileman, G. L.

    2004-11-01

    The field of scientific computation has greatly advanced in the last few years, resulting in the ability to perform complex computer simulations that can predict the performance of real-world experiments in a number of fields of study. Among the forces driving this new computational capability is the advent of parallel algorithms, allowing calculations in three-dimensional space with realistic time scales. Electromagnetic radiation sources driven by high-voltage, high-current electron beams offer an area in which to further push the state of the art in high-fidelity, first-principles simulation tools. The physics of these x-ray sources combines kinetic plasma physics (electron beams) with dense, fluid-like plasma physics (anode plasmas) and x-ray generation (bremsstrahlung). There are a number of mature techniques and software packages for dealing with the individual aspects of these sources, such as Particle-In-Cell (PIC), Magneto-Hydrodynamics (MHD), and radiation transport codes. The current effort is focused on developing an object-oriented software environment using the Rational Unified Process and the Unified Modeling Language (UML) to provide a framework in which multiple 3D parallel physics packages, such as a PIC code (ICEPIC), an MHD code (MACH), and an x-ray transport code (ITS), can co-exist in a system-of-systems approach to modeling advanced x-ray sources. The initial software design and assessments of the fidelity of the various physics algorithms will be presented.

  10. FERRET data analysis code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schmittroth, F.

    1979-09-01

    A documentation of the FERRET data analysis code is given. The code provides a way to combine related measurements and calculations in a consistent evaluation. Basically a very general least-squares code, it is oriented towards problems frequently encountered in nuclear data and reactor physics. A strong emphasis is on the proper treatment of uncertainties and correlations and in providing quantitative uncertainty estimates. Documentation includes a review of the method, structure of the code, input formats, and examples.
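The core operation FERRET generalizes, least-squares combination of correlated measurements with proper uncertainty propagation, can be shown in miniature for two measurements of a single quantity. The function below is a generic textbook construction, not FERRET's input format or algorithm.

```python
def combine_correlated(y1, s1, y2, s2, rho):
    """Least-squares combination of two correlated measurements of the same
    quantity. s1, s2: standard deviations; rho: correlation coefficient.
    Returns (best estimate, its standard deviation)."""
    # Covariance matrix and its inverse (2x2, inverted by hand).
    c11, c22, c12 = s1 * s1, s2 * s2, rho * s1 * s2
    det = c11 * c22 - c12 * c12
    i11, i22, i12 = c22 / det, c11 / det, -c12 / det
    # Optimal weights are proportional to the row sums of the inverse covariance.
    w1, w2 = i11 + i12, i22 + i12
    norm = w1 + w2
    est = (w1 * y1 + w2 * y2) / norm
    var = 1.0 / norm
    return est, var ** 0.5
```

With rho = 0 this reduces to the familiar inverse-variance weighted mean; a positive correlation reduces the information gained from the second measurement, which is exactly the effect a proper treatment of correlations is meant to capture.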

  11. Full-scale testing and numerical modeling of a multistory masonry structure subjected to internal blast loading

    NASA Astrophysics Data System (ADS)

    Zapata, Brian Jarvis

    As military and diplomatic representatives of the United States are deployed throughout the world, they must frequently make use of local, existing facilities; it is inevitable that some of these will be load bearing unreinforced masonry (URM) structures. Although generally suitable for conventional design loads, load bearing URM presents a unique hazard, with respect to collapse, when exposed to blast loading. There is therefore a need to study the blast resistance of load bearing URM construction in order to better protect US citizens assigned to dangerous locales. To address this, the Department of Civil and Environmental Engineering at the University of North Carolina at Charlotte conducted three blast tests inside a decommissioned, coal-fired power plant prior to its scheduled demolition. The power plant's walls were constructed of URM and provided an excellent opportunity to study the response of URM walls in-situ. Post-test analytical studies investigated the ability of existing blast load prediction methodologies to model the case of a cylindrical charge with a low height of burst. It was found that even for the relatively simple blast chamber geometries of these tests, simplified analysis methods predicted blast impulses with an average net error of 22%. The study suggested that existing simplified analysis methods would benefit from additional development to better predict blast loads from cylinders detonated near the ground's surface. A hydrocode, CTH, was also used to perform two- and three-dimensional simulations of the blast events. In order to use the hydrocode, Jones-Wilkins-Lee (JWL) equation of state (EOS) coefficients were developed for the experiment's Unimax dynamite charges; a novel energy-scaling technique was developed which permits the derivation of new JWL coefficients from an existing coefficient set. The hydrocode simulations were able to simulate blast impulses with an average absolute error of 34.5%.
Moreover, the hydrocode simulations provided highly resolved spatio-temporal blast loading data for subsequent structural simulations. Equivalent single-degree-of-freedom (ESDOF) structural response models were then used to predict the out-of-plane deflections of blast chamber walls. A new resistance function was developed which permits a URM wall to crack at any height; numerical methodologies were also developed to compute transformation factors required for use in the ESDOF method. When combined with the CTH derived blast loading predictions, the ESDOF models were able to predict out-of-plane deflections with reasonable accuracy. Further investigations were performed using finite element models constructed in LS-DYNA; the models used elastic elements combined with contacts possessing a tension/shear cutoff and the ability to simulate fracture energy release. Using the CTH predicted blast loads and carefully selected constitutive parameters, the LS-DYNA models were able to both qualitatively and quantitatively predict blast chamber wall deflections and damage patterns. Moreover, the finite element models suggested several modes of response which cannot be modeled by current ESDOF methods; the effect of these response modes on the accuracy of ESDOF predictions warrants further study.
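The JWL equation of state referred to above has the standard form p = A(1 - w/(R1*V))exp(-R1*V) + B(1 - w/(R2*V))exp(-R2*V) + w*E/V, with V the relative volume and E the internal energy per unit initial volume. The sketch below evaluates it with commonly tabulated TNT-like coefficients for illustration only; the dissertation's energy-scaled coefficients for Unimax dynamite are not reproduced here.

```python
import math

def jwl_pressure(v_rel, e_vol, A, B, R1, R2, omega):
    """Jones-Wilkins-Lee pressure (Pa) for detonation products.

    v_rel: relative volume v/v0; e_vol: internal energy per unit initial
    volume (Pa, i.e. J/m^3); A, B (Pa), R1, R2, omega: fit coefficients.
    """
    return (A * (1.0 - omega / (R1 * v_rel)) * math.exp(-R1 * v_rel)
            + B * (1.0 - omega / (R2 * v_rel)) * math.exp(-R2 * v_rel)
            + omega * e_vol / v_rel)

# Commonly tabulated TNT-like coefficients -- illustrative values only.
TNT = dict(A=3.712e11, B=3.231e9, R1=4.15, R2=0.95, omega=0.30)
```

As the products expand (v_rel grows), the two exponential terms decay and the ideal-gas-like omega*E/V term dominates, which is the qualitative behavior any valid coefficient set, scaled or not, must preserve.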

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zieb, Kristofer James Ekhart; Hughes, Henry Grady III; Xu, X. George

    The release of version 6.2 of the MCNP6 radiation transport code is imminent. To complement the newest release, a summary of the heavy charged-particle physics models used in the 1 MeV to 1 GeV energy regime is presented. Several changes have been introduced into the charged-particle physics models since the merger of the MCNP5 and MCNPX codes into MCNP6. This article discusses the default models used in MCNP6 for continuous energy loss, energy straggling, and angular scattering of heavy charged particles, along with explanations of the theory behind the physics models.

  13. Enhanced verification test suite for physics simulation codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kamm, James R.; Brock, Jerry S.; Brandon, Scott T.

    2008-09-01

    This document discusses problems with which to augment, in quantity and in quality, the existing tri-laboratory suite of verification problems used by Los Alamos National Laboratory (LANL), Lawrence Livermore National Laboratory (LLNL), and Sandia National Laboratories (SNL). The purpose of verification analysis is to demonstrate whether the numerical results of the discretization algorithms in physics and engineering simulation codes provide correct solutions of the corresponding continuum equations.
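The central computation in such verification analyses is the observed order of accuracy: given the errors against an exact solution on two grids related by a refinement ratio r, the observed order is p = log(e_coarse/e_fine)/log(r), to be compared against the scheme's formal order. A minimal helper:

```python
import math

def observed_order(e_coarse, e_fine, refinement_ratio):
    """Observed order of accuracy from errors on two grids -- the standard
    quantity a verification suite compares against the formal order of the
    discretization (e.g. ~2 for a second-order scheme)."""
    return math.log(e_coarse / e_fine) / math.log(refinement_ratio)
```

In practice the errors come from norms of the difference between the code's output and an exact or manufactured solution; a significant gap between observed and formal order is the signal that a problem belongs in the suite.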

  14. Coupled Physics Environment (CouPE) library - Design, Implementation, and Release

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mahadevan, Vijay S.

    Over several years, high-fidelity, validated mono-physics solvers with proven scalability on peta-scale architectures have been developed independently. Based on a unified component-based architecture, these existing codes can be coupled with a unified mesh-data backplane and a flexible coupling-strategy-based driver suite to produce a viable tool for analysts. In this report, we present details on the design decisions and developments in CouPE, an acronym that stands for Coupled Physics Environment, which orchestrates a coupled physics solver through the interfaces exposed by the MOAB array-based unstructured mesh; both are part of the SIGMA (Scalable Interfaces for Geometry and Mesh-Based Applications) toolkit. The SIGMA toolkit contains libraries that enable scalable geometry and unstructured mesh creation and handling in a memory- and computationally efficient implementation. The CouPE version being prepared for a full open-source release, along with updated documentation, will contain several useful examples that will enable users to start developing their applications natively using the MOAB mesh and to couple their models to existing physics applications to analyze and solve real-world problems of interest. An integrated multi-physics simulation capability for the design and analysis of current and future nuclear reactor models is also being investigated as part of the NEAMS RPL, to tightly couple neutron transport, thermal-hydraulics, and structural mechanics physics under the SHARP framework. This report summarizes the efforts that have been invested in CouPE to bring together several existing physics applications, namely PROTEUS (neutron transport code), Nek5000 (computational fluid dynamics code), and Diablo (structural mechanics code).
    The goal of the SHARP framework is to perform fully resolved coupled-physics analysis of a reactor on heterogeneous geometry, in order to reduce the overall numerical uncertainty while leveraging available computational resources. The design of CouPE, along with the motivations that led to implementation choices, is also discussed. The first release of the library will differ from the current version of the code that integrates the components in SHARP, and the need for forking the source base is also explained. Enhancements in functionality and improved user guides will be available as part of the release. CouPE v0.1 is scheduled for an open-source release in December 2014, along with SIGMA v1.1 components that provide support for language-agnostic mesh loading, traversal, and query interfaces, along with scalable solution transfer of fields between different physics codes. The coupling methodology and software interfaces of the library are presented, along with verification studies on two representative fast sodium-cooled reactor demonstration problems to prove the usability of the CouPE library.

  15. Hardware-efficient bosonic quantum error-correcting codes based on symmetry operators

    NASA Astrophysics Data System (ADS)

    Niu, Murphy Yuezhen; Chuang, Isaac L.; Shapiro, Jeffrey H.

    2018-03-01

    We establish a symmetry-operator framework for designing quantum error-correcting (QEC) codes based on fundamental properties of the underlying system dynamics. Based on this framework, we propose three hardware-efficient bosonic QEC codes that are suitable for χ(2)-interaction-based quantum computation in multimode Fock bases: the χ(2) parity-check code, the χ(2) embedded error-correcting code, and the χ(2) binomial code. All of these QEC codes detect photon-loss or photon-gain errors by means of photon-number parity measurements, and then correct them via χ(2) Hamiltonian evolutions and linear-optics transformations. Our symmetry-operator framework provides a systematic procedure for finding QEC codes that are not stabilizer codes, and it enables convenient extension of a given encoding to higher-dimensional qudit bases. The χ(2) binomial code is of special interest because, with m ≤ N identified from channel monitoring, it can correct m-photon-loss errors, or m-photon-gain errors, or (m-1)th-order dephasing errors using logical qudits that are encoded in O(N) photons. In comparison, other bosonic QEC codes require O(N²) photons to correct the same degree of bosonic errors. Such improved photon efficiency underscores the additional error-correction power that can be provided by channel monitoring. We develop quantum Hamming bounds for photon-loss errors in the code subspaces associated with the χ(2) parity-check code and the χ(2) embedded error-correcting code, and we prove that these codes saturate their respective bounds. Our χ(2) QEC codes exhibit hardware efficiency in that they address the principal error mechanisms and exploit the available physical interactions of the underlying hardware, thus reducing the physical resources required for implementing their encoding, decoding, and error-correction operations, and their universal encoded-basis gate sets.

  16. Gyrokinetic Particle Simulation of Turbulent Transport in Burning Plasmas (GPS - TTBP) Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chame, Jacqueline

    2011-05-27

The goal of this project is the development of the Gyrokinetic Toroidal Code (GTC) Framework and its applications to problems related to the physics of turbulence and turbulent transport in tokamaks. The project involves physics studies, code development, noise effect mitigation, supporting computer science efforts, diagnostics and advanced visualizations, and verification and validation. Its main scientific themes are mesoscale dynamics and non-locality effects on transport, the physics of secondary structures such as zonal flows, and strongly coherent wave-particle interaction phenomena at magnetic precession resonances. Special emphasis is placed on the implications of these themes for rho-star and current scalings and for the turbulent transport of momentum. GTC-TTBP also explores applications to electron thermal transport, particle transport, ITB formation, and cross-cuts such as edge-core coupling, interaction of energetic particles with turbulence, and neoclassical tearing mode trigger dynamics. Code development focuses on major initiatives in the development of full-f formulations and the capacity to simulate flux-driven transport. In addition to the full-f formulation, the project includes the development of numerical collision models and methods for coarse graining in phase space. Verification is pursued by linear stability study comparisons with the FULL and HD7 codes and by benchmarking with the GKV, GYSELA, and other gyrokinetic simulation codes. Validation of gyrokinetic models of ion and electron thermal transport is pursued by systematic stressing comparisons with fluctuation and transport data from the DIII-D and NSTX tokamaks. The physics and code development research programs are supported by complementary efforts in computer sciences, high performance computing, and data management.

  17. 48 CFR 52.204-7 - System for Award Management.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

... for Award Management (JUL 2013) (a) Definitions. As used in this provision— Data Universal Numbering... information, including the DUNS number or the DUNS+4 number, the Contractor and Government Entity (CAGE) code... Zip Code. (iv) Company Mailing Address, City, State and Zip Code (if separate from physical). (v...

  18. 48 CFR 52.204-7 - System for Award Management.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

... for Award Management (JUL 2013) (a) Definitions. As used in this provision— Data Universal Numbering... information, including the DUNS number or the DUNS+4 number, the Contractor and Government Entity (CAGE) code... Zip Code. (iv) Company Mailing Address, City, State and Zip Code (if separate from physical). (v...

  19. A novel graded density impactor

    NASA Astrophysics Data System (ADS)

    Winter, R. E.; Cotton, M.; Harris, E. J.; Chapman, D. J.; Eakins, D.

    2014-05-01

Ramp loading using graded density impactors as flyers in gas-gun-driven plate impact experiments can yield new and useful information about the equation of state and the strength properties of the loaded material. Selective Laser Melting, an additive manufacturing technique, was used to manufacture a graded density flyer, termed the "bed of nails" (BON). A 2 mm thick × 100 mm diameter solid disc of stainless steel formed a base for an array of tapered spikes of length 6 mm, spaced 1 mm apart. Two experiments to test the concept were performed at impact velocities of 900 m/s and 1100 m/s using the 100 mm gas gun at the Institute of Shock Physics at Imperial College London. In each experiment a BON flyer was impacted onto a copper buffer plate which helped to smooth out perturbations in the wave profile. The ramp delivered to the copper buffer was in turn transmitted to three tantalum targets of thicknesses 3, 5 and 7 mm, which were mounted in contact with the back face of the copper. Heterodyne velocimetry was used to measure the velocity-time history at the back faces of the tantalum discs. The wave profiles display a smooth increase in velocity over a period of ~2.5 μs, with no indication of a shock jump. The measured profiles have been analysed to generate a stress-strain curve for tantalum. The results have been compared with the predictions of the Sandia National Laboratories hydrocode, CTH.

  20. An efficient code for the simulation of nonhydrostatic stratified flow over obstacles

    NASA Technical Reports Server (NTRS)

    Pihos, G. G.; Wurtele, M. G.

    1981-01-01

The physical model and computational procedure of the code are described in detail. The code is validated in tests against a variety of known analytical solutions from the literature and is also compared against actual mountain wave observations. The code will accept as initial input either mathematically idealized or discrete observational data. The form of the obstacle or mountain is arbitrary.

  1. HEPMath 1.4: A mathematica package for semi-automatic computations in high energy physics

    NASA Astrophysics Data System (ADS)

    Wiebusch, Martin

    2015-10-01

    This article introduces the Mathematica package HEPMath which provides a number of utilities and algorithms for High Energy Physics computations in Mathematica. Its functionality is similar to packages like FormCalc or FeynCalc, but it takes a more complete and extensible approach to implementing common High Energy Physics notations in the Mathematica language, in particular those related to tensors and index contractions. It also provides a more flexible method for the generation of numerical code which is based on new features for C code generation in Mathematica. In particular it can automatically generate Python extension modules which make the compiled functions callable from Python, thus eliminating the need to write any code in a low-level language like C or Fortran. It also contains seamless interfaces to LHAPDF, FeynArts, and LoopTools.

  2. Integrated modelling framework for short pulse high energy density physics experiments

    NASA Astrophysics Data System (ADS)

    Sircombe, N. J.; Hughes, S. J.; Ramsay, M. G.

    2016-03-01

Modelling experimental campaigns on the Orion laser at AWE, and developing a viable point-design for fast ignition (FI), calls for a multi-scale approach; a complete description of the problem would require an extensive range of physics which cannot realistically be included in a single code. For modelling the laser-plasma interaction (LPI) we need a fine mesh which can capture the dispersion of electromagnetic waves, and a kinetic model for each plasma species. In the dense material of the bulk target, away from the LPI region, collisional physics dominates. The transport of hot particles generated by the action of the laser is dependent on their slowing and stopping in the dense material and their need to draw a return current. These effects will heat the target, which in turn influences transport. On longer timescales, the hydrodynamic response of the target will begin to play a role as the pressure generated from isochoric heating begins to take effect. Recent effort at AWE [1] has focussed on the development of an integrated code suite based on: the particle in cell code EPOCH, to model LPI; the Monte-Carlo electron transport code THOR, to model the onward transport of hot electrons; and the radiation hydrodynamics code CORVUS, to model the hydrodynamic response of the target. We outline the methodology adopted, explain the advantages of a robustly integrated code suite compared to a single code approach, demonstrate the integrated code suite's application to modelling the heating of buried layers on Orion, and assess the potential of such experiments for the validation of modelling capability in advance of more ambitious HEDP experiments, as a step towards a predictive modelling capability for FI.

  3. Analysis of energy dissipation and deposition in elastic bodies impacting at hypervelocities

    NASA Technical Reports Server (NTRS)

    Medina, David F.; Allahdadi, Firooz A.

    1992-01-01

A series of impact problems was analyzed using the Eulerian hydrocode CTH. The objective was to quantify the amount of energy dissipated locally by a projectile impact on an effectively infinite plate. Six impact problems were formulated such that the mass and speed of each projectile were varied so as to increase the speed while holding the kinetic energy constant. The properties and dimensions of the plate were the same for each projectile impact. The resulting response of the plate was analyzed for global kinetic energy, global momentum, and local maximum shear stress. The percentage of energy dissipated by the various hypervelocity impact phenomena appears as a relative change of shear stress at a point in the plate away from the impact.
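The constant-kinetic-energy design can be made concrete with a quick calculation (the abstract does not give the actual masses and speeds, so the values below are purely illustrative): holding E fixed while raising the speed means the projectile mass falls as 1/v², so the momentum delivered to the plate varies across the series even though the energy does not.

```python
# Illustrative mass/velocity pairs at constant kinetic energy, in the
# spirit of the six-case CTH study (values are made up for illustration).
E = 50e3                                       # kinetic energy held fixed [J]
velocities = [2e3, 3e3, 4e3, 5e3, 6e3, 7e3]    # impact speeds [m/s]

for v in velocities:
    m = 2.0 * E / v**2        # mass required so that 0.5*m*v**2 == E
    p = m * v                 # momentum DOES vary across the series
    print(f"v = {v:5.0f} m/s  m = {m*1e3:7.2f} g  momentum = {p:6.1f} kg*m/s")
```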

  4. Waterborne Transportation Lines of the United States 1988

    DTIC Science & Technology

    1989-10-20


  5. Implicit SPH v. 1.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Kyungjoo; Parks, Michael L.; Perego, Mauro

    2016-11-09

The ISPH code is developed to solve multi-physics mesoscale flow problems using an implicit SPH method. In particular, the code provides solutions for incompressible, multiphase, and electro-kinetic flows.
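The density evaluation at the heart of any SPH method, implicit or explicit, is a kernel-weighted sum over neighbouring particles. The sketch below shows that standard summation with a 1D cubic-spline kernel; it is only an illustration of the SPH building block, not ISPH's implicit pressure solve, and all parameter values are arbitrary.

```python
import numpy as np

def cubic_spline_W(r, h):
    """Standard 1D cubic-spline SPH kernel (normalisation 2/(3h))."""
    q = np.abs(r) / h
    sigma = 2.0 / (3.0 * h)
    w = np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
        np.where(q < 2.0, 0.25 * (2.0 - q) ** 3, 0.0))
    return sigma * w

# Uniformly spaced particles of equal mass: the summed density should
# recover the bulk density away from the domain ends.
dx, h, rho0 = 0.1, 0.13, 1000.0
x = np.arange(0.0, 10.0, dx)
m = rho0 * dx                      # particle mass for the target density
rho = np.array([np.sum(m * cubic_spline_W(xi - x, h)) for xi in x])
print(rho[len(x) // 2])            # ≈ 1000 in the interior (within ~0.5%)
```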

  6. Feasibility of self-correcting quantum memory and thermal stability of topological order

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yoshida, Beni, E-mail: rouge@mit.edu

    2011-10-15

Recently, it has become apparent that the thermal stability of topologically ordered systems at finite temperature, as discussed in condensed matter physics, can be studied by addressing the feasibility of self-correcting quantum memory, as discussed in quantum information science. Here, with this correspondence in mind, we propose a model of quantum codes that may cover a large class of physically realizable quantum memory. The model is supported by a certain class of gapped spin Hamiltonians, called stabilizer Hamiltonians, with translation symmetries and a small number of ground states that does not grow with the system size. We show that the model does not work as self-correcting quantum memory due to a certain topological constraint on geometric shapes of its logical operators. This quantum coding theoretical result implies that systems covered or approximated by the model cannot have thermally stable topological order, meaning that systems cannot be stable against both thermal fluctuations and local perturbations simultaneously in two and three spatial dimensions. Highlights: we define a class of physically realizable quantum codes; we determine their coding and physical properties completely; we establish the connection between topological order and self-correcting memory; we find they do not work as self-correcting quantum memory; and we find they do not have thermally stable topological order.

  7. PlasmaPy: initial development of a Python package for plasma physics

    NASA Astrophysics Data System (ADS)

    Murphy, Nicholas; Leonard, Andrew J.; Stańczak, Dominik; Haggerty, Colby C.; Parashar, Tulasi N.; Huang, Yu-Min; PlasmaPy Community

    2017-10-01

    We report on initial development of PlasmaPy: an open source community-driven Python package for plasma physics. PlasmaPy seeks to provide core functionality that is needed for the formation of a fully open source Python ecosystem for plasma physics. PlasmaPy prioritizes code readability, consistency, and maintainability while using best practices for scientific computing such as version control, continuous integration testing, embedding documentation in code, and code review. We discuss our current and planned capabilities, including features presently under development. The development roadmap includes features such as fluid and particle simulation capabilities, a Grad-Shafranov solver, a dispersion relation solver, atomic data retrieval methods, and tools to analyze simulations and experiments. We describe several ways to contribute to PlasmaPy. PlasmaPy has a code of conduct and is being developed under a BSD license, with a version 0.1 release planned for 2018. The success of PlasmaPy depends on active community involvement, so anyone interested in contributing to this project should contact the authors. This work was partially supported by the U.S. Department of Energy.
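The kind of core formulary functionality described above can be sketched in a few lines of plain Python. The function below is our own illustration of one such commonly used quantity, the electron plasma frequency, and is not PlasmaPy's actual API.

```python
import math

# CODATA constants (SI), hard-coded to keep the sketch dependency-free
e = 1.602176634e-19      # elementary charge [C]
m_e = 9.1093837015e-31   # electron mass [kg]
eps0 = 8.8541878128e-12  # vacuum permittivity [F/m]

def electron_plasma_frequency(n_e):
    """Angular electron plasma frequency w_pe = sqrt(n_e e^2 / (eps0 m_e)),
    with n_e in m^-3 and the result in rad/s."""
    return math.sqrt(n_e * e**2 / (eps0 * m_e))

# a typical laboratory plasma density of 1e19 m^-3
print(electron_plasma_frequency(1e19))   # ≈ 1.8e11 rad/s
```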

  8. Using Modern C++ Idiom for the Discretisation of Sets of Coupled Transport Equations in Numerical Plasma Physics

    NASA Astrophysics Data System (ADS)

    van Dijk, Jan; Hartgers, Bart; van der Mullen, Joost

    2006-10-01

Self-consistent modelling of plasma sources requires a simultaneous treatment of multiple physical phenomena. As a result, plasma codes have a high degree of complexity. And with the growing interest in time-dependent modelling of non-equilibrium plasma in three dimensions, codes tend to become increasingly hard to explain and maintain. As a result of these trends there has been an increased interest in the software-engineering and implementation aspects of plasma modelling in our group at Eindhoven University of Technology. In this contribution we will present modern object-oriented techniques in C++ to solve an old problem: that of the discretisation of coupled linear(ized) equations involving multiple field variables on ortho-curvilinear meshes. The 'LinSys' code has been tailored to the transport equations that occur in plasma physics. The implementation has been made both efficient and user-friendly by using modern idioms like expression templates and template meta-programming. Live demonstrations will be given. The code is available to interested parties; please visit www.dischargemodelling.org.

  9. Core Physics and Kinetics Calculations for the Fissioning Plasma Core Reactor

    NASA Technical Reports Server (NTRS)

    Butler, C.; Albright, D.

    2007-01-01

Highly efficient, compact nuclear reactors would provide high specific impulse spacecraft propulsion. This analysis and numerical simulation effort has focused on the technical feasibility issues related to the nuclear design characteristics of a novel reactor design. The Fissioning Plasma Core Reactor (FPCR) is a shockwave-driven gaseous-core nuclear reactor which uses magnetohydrodynamic effects to generate electric power to be used for propulsion. The nuclear design of the system depends on two major calculations: core physics calculations and kinetics calculations. Presently, core physics calculations have concentrated on the use of the MCNP4C code, though initial results from other codes such as COMBINE/VENTURE and SCALE4a are also shown. Several significant modifications were made to the ISR-developed QCALC1 kinetics analysis code. These modifications include testing the state of the core materials, an improvement to the calculation of the material properties of the core, the addition of an adiabatic core temperature model, and improvement of the first-order reactivity correction model. The accuracy of these modifications has been verified, and the accuracy of the point-core kinetics model used by the QCALC1 code has also been validated. Previously calculated kinetics results for the FPCR were described in the ISR report "QCALC1: A Code for FPCR Kinetics Model Feasibility Analysis," dated June 1, 2002.
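The structure of a point-core kinetics calculation like the one QCALC1 uses can be conveyed by the textbook one-delayed-group point kinetics equations. The sketch below is that textbook model only; it omits QCALC1's material-state tests and reactivity-feedback corrections, and every parameter value is illustrative.

```python
# One-delayed-group point reactor kinetics, integrated with classical RK4:
#   dn/dt = (rho - beta)/Lam * n + lam * C
#   dC/dt = beta/Lam * n - lam * C
beta, lam, Lam = 0.0065, 0.08, 1e-4   # delayed fraction, decay const [1/s], generation time [s]

def derivs(n, C, rho):
    dn = (rho - beta) / Lam * n + lam * C
    dC = beta / Lam * n - lam * C
    return dn, dC

def integrate(rho, t_end=0.5, dt=1e-5):
    n, C = 1.0, beta / (lam * Lam)    # start from the critical steady state
    for _ in range(int(t_end / dt)):
        k1n, k1C = derivs(n, C, rho)
        k2n, k2C = derivs(n + 0.5 * dt * k1n, C + 0.5 * dt * k1C, rho)
        k3n, k3C = derivs(n + 0.5 * dt * k2n, C + 0.5 * dt * k2C, rho)
        k4n, k4C = derivs(n + dt * k3n, C + dt * k3C, rho)
        n += dt * (k1n + 2 * k2n + 2 * k3n + k4n) / 6.0
        C += dt * (k1C + 2 * k2C + 2 * k3C + k4C) / 6.0
    return n

print(integrate(rho=0.0))          # ≈ 1.0: critical reactor holds steady power
print(integrate(rho=0.1 * beta))   # > 1: positive reactivity step raises power
```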

  10. Inter-comparison of Dose Distributions Calculated by FLUKA, GEANT4, MCNP, and PHITS for Proton Therapy

    NASA Astrophysics Data System (ADS)

    Yang, Zi-Yi; Tsai, Pi-En; Lee, Shao-Chun; Liu, Yen-Chiang; Chen, Chin-Cheng; Sato, Tatsuhiko; Sheu, Rong-Jiun

    2017-09-01

The dose distributions from proton pencil beam scanning were calculated by FLUKA, GEANT4, MCNP, and PHITS in order to investigate their applicability in proton radiotherapy. The first case studied was the integrated depth dose curves (IDDCs) from 100 and 226-MeV proton pencil beams impinging on a water phantom. The calculated IDDCs agree with each other as long as each code employs 75 eV for the ionization potential of water. The second case considered conditions similar to the first, but with the proton energies in a Gaussian distribution. The comparison to measurement indicates that the inter-code differences might be due not only to different stopping powers but also to the nuclear physics models. How the physics parameter settings affect the computation time was also discussed. In the third case, the applicability of each code to pencil beam scanning was confirmed by delivering a uniform volumetric dose distribution based on the treatment plan; the results showed general agreement among the codes, the treatment plan, and the measurement, except for some deviations in the penumbra region. This study has demonstrated that the selected codes are all capable of performing dose calculations for therapeutic scanning proton beams with proper physics settings.
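The sensitivity to the 75 eV ionization potential noted above is easy to see from the bare Bethe stopping-power formula. This sketch omits shell, Barkas, and density-effect corrections and hard-codes the constants, yet reproduces the tabulated NIST value for 100 MeV protons in water (about 7.29 MeV·cm²/g) to within roughly a percent.

```python
import math

K = 0.307075          # MeV*cm^2/mol  (4*pi*N_A*r_e^2*m_e*c^2)
m_e_c2 = 0.511        # electron rest energy [MeV]
M_p_c2 = 938.272      # proton rest energy [MeV]

def bethe_stopping_power_water(T_MeV, I_eV=75.0):
    """Mass stopping power of water for protons [MeV*cm^2/g] from the bare
    Bethe formula (no shell or density-effect corrections)."""
    Z_over_A = 0.5551                 # <Z/A> for liquid water
    gamma = 1.0 + T_MeV / M_p_c2
    beta2 = 1.0 - 1.0 / gamma**2
    log_arg = 2.0 * m_e_c2 * 1e6 * beta2 * gamma**2 / I_eV
    return K * Z_over_A / beta2 * (math.log(log_arg) - beta2)

print(bethe_stopping_power_water(100.0))   # ≈ 7.3, close to NIST PSTAR (~7.29)
```

Raising the ionization potential lowers the logarithm and hence the stopping power, which is exactly the kind of inter-code difference the abstract describes.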

  11. Standardized Definitions for Code Verification Test Problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Doebling, Scott William

This document contains standardized definitions for several commonly used code verification test problems. These definitions are intended to contain sufficient information to set up the test problem in a computational physics code. These definitions are intended to be used in conjunction with exact solutions to these problems generated using ExactPack, www.github.com/lanl/exactpack.

  12. Reactivity effects in VVER-1000 of the third unit of the kalinin nuclear power plant at physical start-up. Computations in ShIPR intellectual code system with library of two-group cross sections generated by UNK code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zizin, M. N.; Zimin, V. G.; Zizina, S. N., E-mail: zizin@adis.vver.kiae.ru

    2010-12-15

The ShIPR intellectual code system for mathematical simulation of nuclear reactors includes a set of computing modules implementing the preparation of macro cross sections on the basis of the two-group library of neutron-physics cross sections obtained for the SKETCH-N nodal code. This library is created by using the UNK code for 3D diffusion computation of first VVER-1000 fuel loadings. Computation of neutron fields in the ShIPR system is performed using the DP3 code in the two-group diffusion approximation in 3D triangular geometry. The efficiency of all groups of control rods for the first fuel loading of the third unit of the Kalinin Nuclear Power Plant is computed. The temperature, barometric, and density effects of reactivity as well as the reactivity coefficient due to the concentration of boric acid in the reactor were computed additionally. Results of computations are compared with the experiment.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Singleton, Jr., Robert; Israel, Daniel M.; Doebling, Scott William

For code verification, one compares the code output against known exact solutions. There are many standard test problems used in this capacity, such as the Noh and Sedov problems. ExactPack is a utility that integrates many of these exact solution codes into a common API (application program interface), and can be used as a stand-alone code or as a Python package. ExactPack consists of Python driver scripts that access a library of exact solutions written in Fortran or Python. The spatial profiles of the relevant physical quantities, such as the density, fluid velocity, sound speed, or internal energy, are returned at a time specified by the user. The solution profiles can be viewed and examined by a command line interface or a graphical user interface, and a number of analysis tools and unit tests are also provided. We have documented the physics of each problem in the solution library, and provided complete documentation on how to extend the library to include additional exact solutions. ExactPack's code architecture makes it easy to extend the solution-code library to include additional exact solutions in a robust, reliable, and maintainable manner.
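As a concrete example of the kind of exact solution such a library packages, the planar Noh problem has a closed form that fits in a few lines. The sketch below follows the standard strong-shock result and is not ExactPack's actual implementation.

```python
def noh_planar_exact(x, t, gamma=5.0 / 3.0, rho0=1.0):
    """Exact solution of the planar Noh problem at time t > 0.

    Cold gas (p = 0) with density rho0 streams toward a rigid wall at x = 0
    with unit speed; a strong shock reflects outward at speed (gamma-1)/2.
    Returns (density, velocity, pressure) at position x >= 0."""
    shock_pos = 0.5 * (gamma - 1.0) * t
    if x < shock_pos:                       # shocked, stagnant region
        rho = rho0 * (gamma + 1.0) / (gamma - 1.0)
        u = 0.0
        p = 0.5 * rho0 * (gamma + 1.0)
    else:                                   # undisturbed inflow
        rho, u, p = rho0, -1.0, 0.0
    return rho, u, p

# behind the shock (gamma = 5/3): density 4, velocity 0, pressure 4/3
print(noh_planar_exact(0.1, 0.6))
```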

  14. Reactivity effects in VVER-1000 of the third unit of the kalinin nuclear power plant at physical start-up. Computations in ShIPR intellectual code system with library of two-group cross sections generated by UNK code

    NASA Astrophysics Data System (ADS)

    Zizin, M. N.; Zimin, V. G.; Zizina, S. N.; Kryakvin, L. V.; Pitilimov, V. A.; Tereshonok, V. A.

    2010-12-01

    The ShIPR intellectual code system for mathematical simulation of nuclear reactors includes a set of computing modules implementing the preparation of macro cross sections on the basis of the two-group library of neutron-physics cross sections obtained for the SKETCH-N nodal code. This library is created by using the UNK code for 3D diffusion computation of first VVER-1000 fuel loadings. Computation of neutron fields in the ShIPR system is performed using the DP3 code in the two-group diffusion approximation in 3D triangular geometry. The efficiency of all groups of control rods for the first fuel loading of the third unit of the Kalinin Nuclear Power Plant is computed. The temperature, barometric, and density effects of reactivity as well as the reactivity coefficient due to the concentration of boric acid in the reactor were computed additionally. Results of computations are compared with the experiment.

  15. The National Transport Code Collaboration Module Library

    NASA Astrophysics Data System (ADS)

    Kritz, A. H.; Bateman, G.; Kinsey, J.; Pankin, A.; Onjun, T.; Redd, A.; McCune, D.; Ludescher, C.; Pletzer, A.; Andre, R.; Zakharov, L.; Lodestro, L.; Pearlstein, L. D.; Jong, R.; Houlberg, W.; Strand, P.; Wiley, J.; Valanju, P.; John, H. St.; Waltz, R.; Mandrekas, J.; Mau, T. K.; Carlsson, J.; Braams, B.

    2004-12-01

This paper reports on the progress in developing a library of code modules under the auspices of the National Transport Code Collaboration (NTCC). Code modules are high quality, fully documented software packages with a clearly defined interface. The modules provide a variety of functions, such as implementing numerical physics models; performing ancillary functions such as I/O or graphics; or providing tools for dealing with common issues in scientific programming such as portability of Fortran codes. Researchers in the plasma community submit code modules, and a review procedure is followed to insure adherence to programming and documentation standards. The review process is designed to provide added confidence with regard to the use of the modules and to allow users and independent reviewers to validate the claims of the modules' authors. All modules include source code; clear instructions for compilation of binaries on a variety of target architectures; and test cases with well-documented input and output. All the NTCC modules and ancillary information, such as current standards and documentation, are available from the NTCC Module Library Website http://w3.pppl.gov/NTCC. The goal of the project is to develop a resource of value to builders of integrated modeling codes and to plasma physics researchers generally. Currently, there are more than 40 modules in the module library.

  16. (U) Ristra Next Generation Code Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hungerford, Aimee L.; Daniel, David John

LANL’s Weapons Physics management (ADX) and ASC program office have defined a strategy for exascale-class application codes that follows two supportive, and mutually risk-mitigating paths: evolution for established codes (with a strong pedigree within the user community) based upon existing programming paradigms (MPI+X); and Ristra (formerly known as NGC), a high-risk/high-reward push for a next-generation multi-physics, multi-scale simulation toolkit based on emerging advanced programming systems (with an initial focus on data-flow task-based models exemplified by Legion [5]). Development along these paths is supported by the ATDM, IC, and CSSE elements of the ASC program, with the resulting codes forming a common ecosystem, and with algorithm and code exchange between them anticipated. Furthermore, solution of some of the more challenging problems of the future will require a federation of codes working together, using established-pedigree codes in partnership with new capabilities as they come on line. The role of Ristra as the high-risk/high-reward path for LANL’s codes is fully consistent with its role in the Advanced Technology Development and Mitigation (ATDM) sub-program of ASC (see Appendix C), in particular its emphasis on evolving ASC capabilities through novel programming models and data management technologies.

  17. Review of heavy charged particle transport in MCNP6.2

    NASA Astrophysics Data System (ADS)

    Zieb, K.; Hughes, H. G.; James, M. R.; Xu, X. G.

    2018-04-01

    The release of version 6.2 of the MCNP6 radiation transport code is imminent. To complement the newest release, a summary of the heavy charged particle physics models used in the 1 MeV to 1 GeV energy regime is presented. Several changes have been introduced into the charged particle physics models since the merger of the MCNP5 and MCNPX codes into MCNP6. This paper discusses the default models used in MCNP6 for continuous energy loss, energy straggling, and angular scattering of heavy charged particles. Explanations of the physics models' theories are included as well.

  18. Review of Heavy Charged Particle Transport in MCNP6.2

    DOE PAGES

    Zieb, Kristofer James Ekhart; Hughes, Henry Grady III; Xu, X. George; ...

    2018-01-05

    The release of version 6.2 of the MCNP6 radiation transport code is imminent. To complement the newest release, a summary of the heavy charged particle physics models used in the 1 MeV to 1 GeV energy regime is presented. Several changes have been introduced into the charged particle physics models since the merger of the MCNP5 and MCNPX codes into MCNP6. Here, this article discusses the default models used in MCNP6 for continuous energy loss, energy straggling, and angular scattering of heavy charged particles. Explanations of the physics models’ theories are included as well.

  19. Efficacy of physical activity interventions in post-natal populations: systematic review, meta-analysis and content coding of behaviour change techniques.

    PubMed

    Gilinsky, Alyssa Sara; Dale, Hannah; Robinson, Clare; Hughes, Adrienne R; McInnes, Rhona; Lavallee, David

    2015-01-01

This systematic review and meta-analysis reports the efficacy of post-natal physical activity change interventions with content coding of behaviour change techniques (BCTs). Electronic databases (MEDLINE, CINAHL and PsychINFO) were searched for interventions published from January 1980 to July 2013. Inclusion criteria were: (i) interventions including ≥1 BCT designed to change physical activity behaviour, (ii) studies reporting ≥1 physical activity outcome, (iii) interventions commencing later than four weeks after childbirth and (iv) studies including participants who had given birth within the last year. Controlled trials were included in the meta-analysis. Interventions were coded using the 40-item Coventry, Aberdeen & London - Refined (CALO-RE) taxonomy of BCTs and study quality assessment was conducted using Cochrane criteria. Twenty studies were included in the review (meta-analysis: n = 14). Seven were interventions conducted with healthy inactive post-natal women. Nine were post-natal weight management studies. Two studies included women with post-natal depression. Two studies focused on improving general well-being. Interventions in healthy populations, but not those for weight management, successfully changed physical activity. Interventions increased frequency but not volume of physical activity or walking behaviour. Efficacious interventions always included the BCTs 'goal setting (behaviour)' and 'prompt self-monitoring of behaviour'.

  20. The influence of ageism, experience, and relationships with older adults on physical therapy students' perception of geriatrics.

    PubMed

    Blackwood, Jennifer; Sweet, Christina

    2017-01-01

    Increased exposure to geriatrics throughout a student's professional education has been reported to improve the desire to work in this area; however, factors that influence the perception of geriatric physical therapy may prohibit students from actively seeking those experiences. The purpose of this study was to examine the perceptions of geriatric physical therapy by first-year graduate physical therapy students. A qualitative case study research approach was performed. Three focus groups were completed using students enrolled in their second semester of a graduate-level physical therapy program. Dialogue was reviewed and coded by three raters. Twenty-five subcategories of open-coding terms were triangulated and grouped into 4 themes via axial coding. Four themes emerged: (1) ageism exists in health care, (2) personal and professional experiences serve as a framework for students' perception of geriatrics, (3) interpersonal relationships formed within geriatric practice are highly valued, and (4) additional contextual barriers exist in geriatrics. To meet the needs of a highly skilled geriatric workforce, students should participate in enhanced geriatric experiences in didactic coursework as well as within interprofessional geriatric clinics throughout their education.

  1. Code dependencies of pre-supernova evolution and nucleosynthesis in massive stars: evolution to the end of core helium burning

    DOE PAGES

    Jones, S.; Hirschi, R.; Pignatari, M.; ...

    2015-01-15

    We present a comparison of 15M ⊙ , 20M ⊙ and 25M ⊙ stellar models from three different codes|GENEC, KEPLER and MESA|and their nucleosynthetic yields. The models are calculated from the main sequence up to the pre-supernova (pre-SN) stage and do not include rotation. The GENEC and KEPLER models hold physics assumptions that are characteristic of the two codes. The MESA code is generally more flexible; overshooting of the convective core during the hydrogen and helium burning phases in MESA is chosen such that the CO core masses are consistent with those in the GENEC models. Full nucleosynthesis calculations aremore » performed for all models using the NuGrid post-processing tool MPPNP and the key energy-generating nuclear reaction rates are the same for all codes. We are thus able to highlight the key diferences between the models that are caused by the contrasting physics assumptions and numerical implementations of the three codes. A reasonable agreement is found between the surface abundances predicted by the models computed using the different codes, with GENEC exhibiting the strongest enrichment of H-burning products and KEPLER exhibiting the weakest. There are large variations in both the structure and composition of the models—the 15M ⊙ and 20M ⊙ in particular—at the pre-SN stage from code to code caused primarily by convective shell merging during the advanced stages. For example the C-shell abundances of O, Ne and Mg predicted by the three codes span one order of magnitude in the 15M ⊙ models. For the alpha elements between Si and Fe the differences are even larger. The s-process abundances in the C shell are modified by the merging of convective shells; the modification is strongest in the 15M ⊙ model in which the C-shell material is exposed to O-burning temperatures and the γ -process is activated. 
The variation in the s-process abundances across the codes is smallest in the 25 M⊙ models, where it is comparable to the impact of nuclear reaction rate uncertainties. In general, the differences in the results from the three codes are due to their contrasting physics assumptions (e.g. prescriptions for mass loss and convection). The broadly similar evolution of the 25 M⊙ models gives us reassurance that different stellar evolution codes do produce similar results. For the 15 M⊙ and 20 M⊙ models, however, the different input physics and the interplay between the various convective zones lead to important differences in both the pre-supernova structure and nucleosynthesis predicted by the three codes. For the KEPLER models the core masses are different, and therefore an exact match could not be expected.

  2. PlasmaPy: beginning a community developed Python package for plasma physics

    NASA Astrophysics Data System (ADS)

    Murphy, Nicholas A.; Huang, Yi-Min; PlasmaPy Collaboration

    2016-10-01

    In recent years, researchers in several disciplines have collaborated on community-developed open source Python packages such as Astropy, SunPy, and SpacePy. These packages provide core functionality, common frameworks for data analysis and visualization, and educational tools. We propose that our community begin the development of PlasmaPy: a new open source core Python package for plasma physics. PlasmaPy could include commonly used functions in plasma physics, easy-to-use plasma simulation codes, Grad-Shafranov solvers, eigenmode solvers, and tools to analyze both simulations and experiments. The development will include modern programming practices such as version control, embedding documentation in the code, unit tests, and avoiding premature optimization. We will describe early code development on PlasmaPy and discuss plans moving forward. The success of PlasmaPy depends on active community involvement and a welcoming and inclusive environment, so anyone interested in joining this collaboration should contact the authors.
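As a minimal illustration of the kind of "commonly used function in plasma physics" such a package could collect (a hedged sketch only: the function name and structure below are chosen for this example and are not PlasmaPy's actual API):

```python
import math

# Fundamental constants in SI units
E_CHARGE = 1.602176634e-19   # elementary charge (C)
E_MASS = 9.1093837015e-31    # electron mass (kg)
EPS0 = 8.8541878128e-12      # vacuum permittivity (F/m)

def electron_plasma_frequency(n_e):
    """Angular electron plasma frequency omega_pe (rad/s).

    n_e is the electron number density in m^-3:
        omega_pe = sqrt(n_e * e^2 / (eps0 * m_e))
    """
    return math.sqrt(n_e * E_CHARGE ** 2 / (EPS0 * E_MASS))

# A density of 1e20 m^-3 (tokamak-core scale) gives omega_pe ~ 5.6e11 rad/s.
omega_pe = electron_plasma_frequency(1e20)
```

The square-root scaling with density (doubling when n_e quadruples) is a quick sanity check for any such helper.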

  3. Life is physics and chemistry and communication.

    PubMed

    Witzany, Guenther

    2015-04-01

    Manfred Eigen extended Erwin Schroedinger's concept of "life is physics and chemistry" through the introduction of information theory and cybernetic systems theory into "life is physics and chemistry and information." Based on this assumption, Eigen developed the concepts of quasispecies and hypercycles, which have been dominant in molecular biology and virology ever since. He insisted that the genetic code is not just used metaphorically: it represents a real natural language. However, the basics of scientific knowledge changed dramatically within the second half of the 20th century. Unfortunately, Eigen ignored the results of the philosophy of science discourse on essential features of natural languages and codes: a natural language or code emerges from populations of living agents that communicate. This contribution will look at some of the highlights of this historical development and the results relevant for biological theories about life. © 2014 New York Academy of Sciences.

  4. SU-A-210-02: Medical Physics Opportunities at the NRC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abogunde, M.

    The purpose of this student annual meeting is to address topics that are becoming more relevant to medical physicists, but are not frequently addressed, especially for students and trainees just entering the field. The talk is divided into two parts: medical billing and regulations. Hsinshun Wu – Why should we learn radiation oncology billing? Many medical physicists do not like to be involved with medical billing or coding during their career. They believe billing is not their responsibility and sometimes they even refuse to participate in the billing process if given the chance. This presentation will talk about a physicist’s long career and share his own experience that knowing medical billing is not only important and necessary for every young medical physicist, but that good billing knowledge could provide a valuable contribution to his/her medical physics development. Learning Objectives: The audience will learn the basic definition of Current Procedural Terminology (CPT) codes performed in a Radiation Oncology Department. Understand the differences between hospital coding and physician-based or freestanding coding. Apply proper CPT coding for each Radiation Oncology procedure. Each procedure with its specific CPT code will be discussed in detail. The talk will focus on the process of care and use of actual workflow to understand each CPT code. Example coding of a typical Radiation Oncology procedure. Special procedure coding such as brachytherapy, proton therapy, radiosurgery, and SBRT. Maryann Abogunde – Medical physics opportunities at the Nuclear Regulatory Commission (NRC) The NRC’s responsibilities include the regulation of medical uses of byproduct (radioactive) materials and oversight of medical use end-users (licensees) through a combination of regulatory requirements, licensing, safety oversight including inspection and enforcement, operational experience evaluation, and regulatory support activities.
This presentation will explore the career options for medical physicists in the NRC, how the NRC interacts with clinical medical physicists, and a physicist’s experience as a regulator. Learning Objectives: Explore non-clinical career pathways for medical physics students and trainees at the Nuclear Regulatory Commission. Overview of NRC medical applications and medical use regulations. Understand the skills needed for physicists as regulators. Abogunde is funded to attend the meeting by her employer, the NRC.

  5. SU-A-210-00: AAPM Medical Physics Student Meeting: Medical Billing and Regulations: Everything You Always Wanted To Know, But Were Too Afraid To Ask

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    The purpose of this student annual meeting is to address topics that are becoming more relevant to medical physicists, but are not frequently addressed, especially for students and trainees just entering the field. The talk is divided into two parts: medical billing and regulations. Hsinshun Wu – Why should we learn radiation oncology billing? Many medical physicists do not like to be involved with medical billing or coding during their career. They believe billing is not their responsibility and sometimes they even refuse to participate in the billing process if given the chance. This presentation will talk about a physicist’s long career and share his own experience that knowing medical billing is not only important and necessary for every young medical physicist, but that good billing knowledge could provide a valuable contribution to his/her medical physics development. Learning Objectives: The audience will learn the basic definition of Current Procedural Terminology (CPT) codes performed in a Radiation Oncology Department. Understand the differences between hospital coding and physician-based or freestanding coding. Apply proper CPT coding for each Radiation Oncology procedure. Each procedure with its specific CPT code will be discussed in detail. The talk will focus on the process of care and use of actual workflow to understand each CPT code. Example coding of a typical Radiation Oncology procedure. Special procedure coding such as brachytherapy, proton therapy, radiosurgery, and SBRT. Maryann Abogunde – Medical physics opportunities at the Nuclear Regulatory Commission (NRC) The NRC’s responsibilities include the regulation of medical uses of byproduct (radioactive) materials and oversight of medical use end-users (licensees) through a combination of regulatory requirements, licensing, safety oversight including inspection and enforcement, operational experience evaluation, and regulatory support activities.
This presentation will explore the career options for medical physicists in the NRC, how the NRC interacts with clinical medical physicists, and a physicist’s experience as a regulator. Learning Objectives: Explore non-clinical career pathways for medical physics students and trainees at the Nuclear Regulatory Commission. Overview of NRC medical applications and medical use regulations. Understand the skills needed for physicists as regulators. Abogunde is funded to attend the meeting by her employer, the NRC.

  6. [Convergent origin of repeats in genes coding for globular proteins. An analysis of the factors determining the presence of inverted and symmetrical repeats].

    PubMed

    Solov'ev, V V; Kel', A E; Kolchanov, N A

    1989-01-01

    The factors determining the presence of inverted and symmetrical repeats in genes coding for globular proteins have been analysed. The analysis of symmetrical repeats revealed an interesting property of the genetic code: pairs of symmetrical codons correspond to pairs of amino acids with largely similar physico-chemical parameters. This property may explain why symmetrical repeats and palindromes are present only in genes coding for beta-structural proteins, i.e. polypeptides in which amino acids with similar physico-chemical properties occupy symmetrical positions. A stochastic model of the evolution of polynucleotide sequences was used to analyse inverted repeats. The modelling demonstrated that constraints on the sequences (uneven codon usage frequencies) alone are enough for nonrandom inverted repeats to arise in genes.

  7. TORBEAM 2.0, a paraxial beam tracing code for electron-cyclotron beams in fusion plasmas for extended physics applications

    NASA Astrophysics Data System (ADS)

    Poli, E.; Bock, A.; Lochbrunner, M.; Maj, O.; Reich, M.; Snicker, A.; Stegmeir, A.; Volpe, F.; Bertelli, N.; Bilato, R.; Conway, G. D.; Farina, D.; Felici, F.; Figini, L.; Fischer, R.; Galperti, C.; Happel, T.; Lin-Liu, Y. R.; Marushchenko, N. B.; Mszanowski, U.; Poli, F. M.; Stober, J.; Westerhof, E.; Zille, R.; Peeters, A. G.; Pereverzev, G. V.

    2018-04-01

    The paraxial WKB code TORBEAM (Poli, 2001) is widely used for the description of electron-cyclotron waves in fusion plasmas, retaining diffraction effects through the solution of a set of ordinary differential equations. With respect to its original form, the code has undergone significant transformations and extensions, in terms of both the physical model and the spectrum of applications. The code has been rewritten in Fortran 90 and transformed into a library, which can be called from within different (not necessarily Fortran-based) workflows. The models for both absorption and current drive have been extended, including, e.g., a fully relativistic calculation of the absorption coefficient, momentum conservation in electron-electron collisions, and the contribution of more than one harmonic to current drive. The code can also be run for reflectometry applications, with relativistic corrections for the electron mass. Formulas that provide the coupling between the reflected beam and the receiver have been developed. Accelerated versions of the code are available, with the reduced physics goal of inferring the location of maximum absorption (with or without the total driven current) for a given setting of the launcher mirrors. Optionally, plasma volumes within given flux surfaces and corresponding values of minimum and maximum magnetic field can be provided externally to speed up the calculation of full driven-current profiles. These can be employed in real-time control algorithms or for fast data analysis.
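The central-ray part of the beam-tracing idea (ordinary differential equations integrated along the ray) can be sketched with a toy geometrical-optics Hamiltonian; the profile n^2(x) = 1 - x/L and all numbers below are illustrative stand-ins, not TORBEAM's dispersion model:

```python
# Toy ray tracing with Hamiltonian H = k^2 - n^2(x) and a linear
# cutoff profile n^2(x) = 1 - x/L; the ray should reflect at x = L.
L = 1.0

def ray_rhs(state):
    x, k = state
    # Hamilton's equations: dx/dt = dH/dk = 2k, dk/dt = -dH/dx = d(n^2)/dx
    return (2.0 * k, -1.0 / L)

def rk4_step(f, state, dt):
    """Classic fourth-order Runge-Kutta step for a 2-component state."""
    x, k = state
    a = f(state)
    b = f((x + 0.5 * dt * a[0], k + 0.5 * dt * a[1]))
    c = f((x + 0.5 * dt * b[0], k + 0.5 * dt * b[1]))
    d = f((x + dt * c[0], k + dt * c[1]))
    return (x + dt * (a[0] + 2 * b[0] + 2 * c[0] + d[0]) / 6.0,
            k + dt * (a[1] + 2 * b[1] + 2 * c[1] + d[1]) / 6.0)

state = (0.0, 1.0)            # launch at x = 0 with k = n(0), so H = 0
turning_point = 0.0
for _ in range(4000):         # integrate to t = 2, past the turning time t = 1
    state = rk4_step(ray_rhs, state, 5e-4)
    turning_point = max(turning_point, state[0])
# turning_point approaches the cutoff at x = L
```

Conservation of H = 0 along the ray places the reflection exactly at the cutoff n^2 = 0, which is a convenient correctness check for this kind of integrator.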

  8. Simulation of Laser Cooling and Trapping in Engineering Applications

    NASA Technical Reports Server (NTRS)

    Ramirez-Serrano, Jaime; Kohel, James; Thompson, Robert; Yu, Nan; Lunblad, Nathan

    2005-01-01

    An advanced computer code is undergoing development for numerically simulating laser cooling and trapping of large numbers of atoms. The code is expected to be useful in practical engineering applications and to contribute to understanding of the roles that light, atomic collisions, background pressure, and numbers of particles play in experiments using laser-cooled and -trapped atoms. The code is based on semiclassical theories of the forces exerted on atoms by magnetic and optical fields. Whereas computer codes developed previously for the same purpose account for only a few physical mechanisms, this code incorporates many more physical mechanisms (including atomic collisions, sub-Doppler cooling mechanisms, Stark and Zeeman energy shifts, gravitation, and evanescent-wave phenomena) that affect laser-matter interactions and the cooling of atoms to submillikelvin temperatures. Moreover, whereas the prior codes can simulate the interactions of at most a few atoms with a resonant light field, the number of atoms that can be included in a simulation by the present code is limited only by computer memory. Hence, the present code more nearly captures the complex physics involved when using laser-cooled and -trapped atoms in engineering applications. Another advantage is the ability to analyze the interaction between cold atoms of different atomic number. Some properties of cold atoms of different atomic species, such as cross sections and the particular excited states they can occupy when interacting with each other and with light fields, play important roles that are not yet completely understood in the new experiments under way in laboratories worldwide to form ultracold molecules. Other research efforts use cold atoms as holders of quantum information, and more recent developments in cavity quantum electrodynamics also use ultracold atoms to explore and expand new information-technology ideas.
These experiments hint at the wide range of applications and technology developments that can be tackled using cold atoms and light fields. From more precise atomic clocks and gravity sensors to the development of quantum computers, there will be a need to completely understand the whole ensemble of physical mechanisms that play a role in the development of such technologies. The code also permits the study of the dynamic and steady-state operations of technologies that use cold atoms. The physical characteristics of lasers and fields can be time-controlled to give a realistic simulation of the processes involved, such that the design process can determine the best control features to use. It is expected that, with the features incorporated into the code, it will become a useful tool for applying ultracold atoms in engineering. Currently, the software is being used for the analysis and understanding of simple experiments using cold atoms, and for the design of a modular compact source of cold atoms to be used in future research and development projects. The results so far indicate that the code is a useful design instrument that shows good agreement with experimental measurements (see figure), and a Windows-based user-friendly interface is also under development.
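The semiclassical scattering force underlying such simulations can be illustrated for the simplest case of one-dimensional Doppler cooling between two counter-propagating beams. This is a hedged sketch in scaled units (hbar = k = Gamma = 1) with illustrative detuning and saturation values; it deliberately omits the sub-Doppler, collisional, and Zeeman mechanisms the code itself includes:

```python
def scattering_rate(detuning, s, gamma=1.0):
    """Single-beam photon scattering rate (units of gamma) at saturation s."""
    return 0.5 * s / (1.0 + s + (2.0 * detuning / gamma) ** 2)

def doppler_force(v, delta=-0.5, s=1.0, k=1.0, gamma=1.0):
    """Net force from two counter-propagating beams, in units of hbar*k*gamma.

    Red detuning (delta < 0) makes the force oppose the velocity,
    i.e. it damps atomic motion toward the Doppler limit.
    """
    f_plus = scattering_rate(delta - k * v, s, gamma)    # beam along +x
    f_minus = scattering_rate(delta + k * v, s, gamma)   # beam along -x
    return f_plus - f_minus

# For an atom moving in +x, the counter-propagating beam is Doppler
# shifted closer to resonance, so the net force is negative (damping).
```

Near v = 0 the force is approximately linear in velocity, which is the viscous "optical molasses" regime such codes must reproduce before the richer mechanisms are added.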

  9. Charged particle tracking through electrostatic wire meshes using the finite element method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Devlin, L. J.; Karamyshev, O.; Welsch, C. P., E-mail: carsten.welsch@cockcroft.ac.uk

    Wire meshes are used across many disciplines to accelerate and focus charged particles; however, analytical solutions are inexact, and few codes exist that simulate the exact fields around a mesh of physical size. A tracking code based in Matlab-Simulink, using field maps generated with finite element software, has been developed which tracks electrons or ions through electrostatic wire meshes. The fields around such a geometry can be presented as an analytical expression using several basic assumptions; however, it is apparent that computational calculations are required to obtain realistic values of electric potential and fields, particularly when multiple wire meshes are deployed. The tracking code is flexible in that any quantitatively describable particle distribution can be used for both electrons and ions, and it offers other benefits such as ease of export to other programs for analysis. The code is made freely available, and physical examples are highlighted where this code could be beneficial for different applications.
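The field-map approach described here, precomputed finite-element fields sampled by a tracking loop, can be sketched as follows. The bilinear interpolator and explicit pusher are generic illustrations of the technique, not the Matlab-Simulink code's actual routines:

```python
def bilinear(field, x, y, dx, dy):
    """Bilinearly interpolate a 2D field map sampled on a regular grid.

    field[i][j] holds the value at (i*dx, j*dy); points must lie
    strictly inside the grid (boundary handling is omitted here).
    """
    i, j = int(x // dx), int(y // dy)
    fx, fy = x / dx - i, y / dy - j
    return ((1 - fx) * (1 - fy) * field[i][j]
            + fx * (1 - fy) * field[i + 1][j]
            + (1 - fx) * fy * field[i][j + 1]
            + fx * fy * field[i + 1][j + 1])

def push(pos, vel, e_field, q_over_m, dt):
    """One semi-implicit Euler step of a charge in a static E field."""
    ex, ey = e_field(pos)
    vel = (vel[0] + q_over_m * ex * dt, vel[1] + q_over_m * ey * dt)
    pos = (pos[0] + vel[0] * dt, pos[1] + vel[1] * dt)
    return pos, vel

# Interpolating a linear map is exact: f(x, y) = x + y at (0.5, 0.5) is 1.0
field_x = [[float(i + j) for j in range(4)] for i in range(4)]
mid_value = bilinear(field_x, 0.5, 0.5, 1.0, 1.0)
```

Checking the interpolator against a field it should reproduce exactly (any linear map) is a standard validation step before loading real finite-element output.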

  10. Data Assimilation - Advances and Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Williams, Brian J.

    2014-07-30

    This presentation provides an overview of data assimilation (model calibration) for complex computer experiments. Calibration refers to the process of probabilistically constraining uncertain physics/engineering model inputs to be consistent with observed experimental data. An initial probability distribution for these parameters is updated using the experimental information. Utilization of surrogate models and empirical adjustment for model form error in code calibration form the basis for the statistical methodology considered. The role of probabilistic code calibration in supporting code validation is discussed. Incorporation of model form uncertainty in rigorous uncertainty quantification (UQ) analyses is also addressed. Design criteria used within a batch sequential design algorithm are introduced for efficiently achieving predictive maturity and improved code calibration. Predictive maturity refers to obtaining stable predictive inference with calibrated computer codes. These approaches allow for augmentation of initial experiment designs for collecting new physical data. A standard framework for data assimilation is presented, and techniques for updating the posterior distribution of the state variables based on particle filtering and the ensemble Kalman filter are introduced.

  11. Toward a first-principles integrated simulation of tokamak edge plasmas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chang, C S; Klasky, Scott A; Cummings, Julian

    2008-01-01

    Performance of ITER is anticipated to be highly sensitive to the edge plasma condition. The edge pedestal in ITER needs to be predicted from an integrated simulation of the necessary first-principles, multi-scale physics codes. The mission of the SciDAC Fusion Simulation Project (FSP) Prototype Center for Plasma Edge Simulation (CPES) is to deliver such a code integration framework by (1) building new kinetic codes XGC0 and XGC1, which can simulate the edge pedestal buildup; (2) using and improving the existing MHD codes ELITE, M3D-OMP, M3D-MPP and NIMROD, for study of large-scale edge instabilities called Edge Localized Modes (ELMs); and (3) integrating the codes into a framework using cutting-edge computer science technology. Collaborative effort among physics, computer science, and applied mathematics within CPES has created the first working version of the End-to-end Framework for Fusion Integrated Simulation (EFFIS), which can be used to study the pedestal-ELM cycles.

  12. RELAP-7 Level 2 Milestone Report: Demonstration of a Steady State Single Phase PWR Simulation with RELAP-7

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    David Andrs; Ray Berry; Derek Gaston

    The document contains the simulation results of a steady state model PWR problem with the RELAP-7 code. The RELAP-7 code is the next generation nuclear reactor system safety analysis code being developed at Idaho National Laboratory (INL). The code is based on INL's modern scientific software development framework - MOOSE (Multi-Physics Object-Oriented Simulation Environment). This report summarizes the initial results of simulating a model steady-state single phase PWR problem using the current version of the RELAP-7 code. The major purpose of this demonstration simulation is to show that the RELAP-7 code can be rapidly developed to simulate single-phase reactor problems. RELAP-7 is a new project started on October 1st, 2011. It will become the main reactor systems simulation toolkit for RISMC (Risk Informed Safety Margin Characterization) and the next generation tool in the RELAP reactor safety/systems analysis application series (the replacement for RELAP5). The key to the success of RELAP-7 is the simultaneous advancement of physical models, numerical methods, and software design while maintaining a solid user perspective. Physical models include both PDEs (Partial Differential Equations) and ODEs (Ordinary Differential Equations) and experiment-based closure models. RELAP-7 will eventually utilize well-posed governing equations for multiphase flow, which can be strictly verified. Closure models used in RELAP5 and newly developed models will be reviewed and selected to reflect the progress made during the past three decades. RELAP-7 uses modern numerical methods, which allow implicit time integration, higher order schemes in both time and space, and strongly coupled multi-physics simulations. RELAP-7 is written in the object-oriented programming language C++. Its development follows modern software design paradigms. The code is easy to read, develop, maintain, and couple with other codes.
Most importantly, the modern software design allows the RELAP-7 code to evolve with time. RELAP-7 is a MOOSE-based application. MOOSE (Multiphysics Object-Oriented Simulation Environment) is a framework for solving computational engineering problems in a well-planned, managed, and coordinated way. By leveraging millions of lines of open source software packages, such as PETSc (a nonlinear solver library developed at Argonne National Laboratory) and libMesh (a finite element analysis package developed at the University of Texas), MOOSE significantly reduces the expense and time required to develop new applications. Numerical integration methods and mesh management for parallel computation are provided by MOOSE. Therefore RELAP-7 code developers only need to focus on physics and user experience. By using the MOOSE development environment, the RELAP-7 code is developed by following the same modern software design paradigms used for other MOOSE development efforts. There are currently over 20 different MOOSE-based applications, ranging from 3-D transient neutron transport and detailed 3-D transient fuel performance analysis to long-term material aging. Multi-physics and multi-dimensional analysis capabilities can be obtained by coupling RELAP-7 with other MOOSE-based applications and by leveraging capabilities developed by other DOE programs. This allows the focus of RELAP-7 to be restricted to systems analysis-type simulations, and gives priority to retaining and significantly extending RELAP5's capabilities.

  13. The revised APTA code of ethics for the physical therapist and standards of ethical conduct for the physical therapist assistant: theory, purpose, process, and significance.

    PubMed

    Swisher, Laura Lee; Hiller, Peggy

    2010-05-01

    In June 2009, the House of Delegates (HOD) of the American Physical Therapy Association (APTA) passed a major revision of the APTA Code of Ethics for physical therapists and the Standards of Ethical Conduct for the Physical Therapist Assistant. The revised documents will be effective July 1, 2010. The purposes of this article are: (1) to provide a historical, professional, and theoretical context for this important revision; (2) to describe the 4-year revision process; (3) to examine major features of the documents; and (4) to discuss the significance of the revisions from the perspective of the maturation of physical therapy as a doctoring profession. PROCESS OF REVISION: The process for revision is delineated within the context of history and the Bylaws of APTA. FORMAT, STRUCTURE, AND CONTENT OF REVISED CORE ETHICS DOCUMENTS: The revised documents represent a significant change in format, level of detail, and scope of application. Previous APTA Codes of Ethics and Standards of Ethical Conduct for the Physical Therapist Assistant have delineated very broad general principles, with specific obligations spelled out in the Ethics and Judicial Committee's Guide for Professional Conduct and Guide for Conduct of the Physical Therapist Assistant. In contrast to the current documents, the revised documents address all 5 roles of the physical therapist, delineate ethical obligations in organizational and business contexts, and align with the tenets of Vision 2020. The significance of this revision is discussed within historical parameters, the implications for physical therapists and physical therapist assistants, the maturation of the profession, societal accountability and moral community, potential regulatory implications, and the inclusive and deliberative process of moral dialogue by which changes were developed, revised, and approved.

  14. Chromaticity calculations and code comparisons for x-ray lithography source XLS and SXLS rings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parsa, Z.

    1988-06-16

    This note presents the chromaticity calculations and code comparison results for the (x-ray lithography source) XLS (Chasman Green, XUV Cosy lattice) and (2 magnet 4T) SXLS lattices, obtained with the standard beam-optics codes, including the programs SYNCH88.5, MAD6, PATRICIA88.4, PATPET88.2, DIMAD, BETA, and MARYLIE. This analysis is part of our ongoing accelerator physics code studies. 4 figs., 10 tabs.

  15. Development of a Space Radiation Monte-Carlo Computer Simulation Based on the FLUKA and ROOT Codes

    NASA Technical Reports Server (NTRS)

    Pinsky, L. S.; Wilson, T. L.; Ferrari, A.; Sala, Paola; Carminati, F.; Brun, R.

    2001-01-01

    The radiation environment in space is a complex problem to model. Trying to extrapolate the projections of that environment into all areas of the internal spacecraft geometry is even more daunting. With the support of our CERN colleagues, our research group in Houston is embarking on a project to develop a radiation transport tool that is tailored to the problem of taking the external radiation flux incident on any particular spacecraft and simulating the evolution of that flux through a geometrically accurate model of the spacecraft material. The output will be a prediction of the detailed nature of the resulting internal radiation environment within the spacecraft as well as its secondary albedo. Beyond doing the physics transport of the incident flux, the software tool we are developing will provide a self-contained stand-alone object-oriented analysis and visualization infrastructure. It will also include a graphical user interface and a set of input tools to facilitate the simulation of space missions in terms of nominal radiation models and mission trajectory profiles. The goal of this project is to produce a code that is considerably more accurate and user-friendly than existing Monte-Carlo-based tools for the evaluation of the space radiation environment. Furthermore, the code will be an essential complement to the currently existing analytic codes in the BRYNTRN/HZETRN family for the evaluation of radiation shielding. The code will be directly applicable to the simulation of environments in low earth orbit, on the lunar surface, on planetary surfaces (including the Earth) and in the interplanetary medium such as on a transit to Mars (and even in the interstellar medium). The software will include modules whose underlying physics base can continue to be enhanced and updated for physics content, as future data become available beyond the timeframe of the initial development now foreseen. 
This future maintenance will be available from the authors of FLUKA as part of their continuing efforts to support the users of the FLUKA code within the particle physics community. In keeping with the spirit of developing an evolving physics code, we are planning, as part of this project, to participate in the efforts to validate the core FLUKA physics in ground-based accelerator test runs. The emphasis of these test runs will be the physics of greatest interest in the simulation of the space radiation environment. Such a tool will be of great value to planners, designers and operators of future space missions, as well as for the design of the vehicles and habitats to be used on such missions. It will also be of aid to future experiments of various kinds that may be affected at some level by the ambient radiation environment, or in the analysis of hybrid experiment designs that have been discussed for space-based astronomy and astrophysics. The tool will be of value to the Life Sciences personnel involved in the prediction and measurement of radiation doses experienced by the crewmembers on such missions. In addition, the tool will be of great use to the planners of experiments to measure and evaluate the space radiation environment itself. It can likewise be useful in the analysis of safe havens and hazard mitigation plans, in NASA's call for new research in composites, and to NASA engineers modeling the radiation exposure of electronic circuits. This code will provide an important complementary check on the predictions of analytic codes such as BRYNTRN/HZETRN that are presently used for many similar applications, and which have shortcomings that are more easily overcome with Monte Carlo type simulations. Finally, it is acknowledged that there are similar efforts based around the use of the GEANT4 Monte-Carlo transport code currently under development at CERN.
It is our intention to make our software modular and sufficiently flexible to allow the parallel use of either FLUKA or GEANT4 as the physics transport engine.

  16. A generic framework for individual-based modelling and physical-biological interaction

    PubMed Central

    2018-01-01

    The increased availability of high-resolution ocean data globally has enabled more detailed analyses of physical-biological interactions and their consequences for the ecosystem. We present IBMlib, which is a versatile, portable and computationally efficient framework for conducting Lagrangian simulations in the marine environment. The purpose of the framework is to handle complex individual-level biological models of organisms, combined with a realistic 3D oceanographic model of physics and biogeochemistry describing the environment of the organisms, without assumptions about spatial or temporal scales. The open-source framework features a minimal robust interface to facilitate the coupling between individual-level biological models and oceanographic models, and we provide application examples including forward/backward simulations, habitat connectivity calculations, assessing ocean conditions, comparison of physical circulation models, model ensemble runs and, recently, posterior Eulerian simulations using the IBMlib framework. We present the code design ideas behind the longevity of the code, our implementation experiences, and code performance benchmarking. The framework may contribute substantially to progress in representing, understanding, predicting and eventually managing marine ecosystems. PMID:29351280
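The core of any such Lagrangian framework is an advection step that samples the ocean-model velocity field along each trajectory. A minimal midpoint-rule sketch follows; the rotating toy flow and step sizes are illustrative, not IBMlib's interface:

```python
def advect(pos, velocity, dt):
    """One midpoint (RK2) step of Lagrangian advection in a steady flow."""
    u1, v1 = velocity(pos[0], pos[1])
    mid = (pos[0] + 0.5 * dt * u1, pos[1] + 0.5 * dt * v1)
    u2, v2 = velocity(mid[0], mid[1])
    return (pos[0] + dt * u2, pos[1] + dt * v2)

def solid_body(x, y, omega=1.0):
    """Toy rotating flow; exact trajectories are circles about the origin."""
    return (-omega * y, omega * x)

pos = (1.0, 0.0)
for _ in range(1000):
    pos = advect(pos, solid_body, 0.001)
radius = (pos[0] ** 2 + pos[1] ** 2) ** 0.5   # should stay ~1 for this flow
```

A rotating flow is a standard test because any spurious radial drift of the particle exposes the integrator's error; forward-in-time and backward-in-time runs (dt negative) use the same step.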

  17. An approach for coupled-code multiphysics core simulations from a common input

    DOE PAGES

    Schmidt, Rodney; Belcourt, Kenneth; Hooper, Russell; ...

    2014-12-10

    This study describes an approach for coupled-code multiphysics reactor core simulations that is being developed by the Virtual Environment for Reactor Applications (VERA) project in the Consortium for Advanced Simulation of Light-Water Reactors (CASL). In this approach a user creates a single problem description, called the “VERAIn” common input file, to define and set up the desired coupled-code reactor core simulation. A preprocessing step accepts the VERAIn file and generates a set of fully consistent input files for the different physics codes being coupled. The problem is then solved using a single-executable coupled-code simulation tool applicable to the problem, which is built using VERA infrastructure software tools and the set of physics codes required for the problem of interest. The approach is demonstrated by performing an eigenvalue and power distribution calculation of a typical three-dimensional 17 × 17 assembly with thermal–hydraulic and fuel temperature feedback. All neutronics aspects of the problem (cross-section calculation, neutron transport, power release) are solved using the Insilico code suite and are fully coupled to a thermal–hydraulic analysis calculated by the Cobra-TF (CTF) code. The single-executable coupled-code (Insilico-CTF) simulation tool is created using several VERA tools, including LIME (Lightweight Integrating Multiphysics Environment for coupling codes), DTK (Data Transfer Kit), Trilinos, and TriBITS. Parallel calculations are performed on the Titan supercomputer at Oak Ridge National Laboratory using 1156 cores, and a synopsis of the solution results and code performance is presented. Finally, ongoing development of this approach is also briefly described.
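The preprocessing idea, one common description expanded into mutually consistent per-code inputs, can be sketched as follows. All keys, names, and values here are hypothetical illustrations, not the actual VERAIn schema:

```python
# One common problem description is expanded into mutually consistent
# inputs for a neutronics code and a thermal-hydraulics code.
common = {
    "assembly": "17x17",
    "power_MW": 18.0,
    "inlet_temp_K": 565.0,
}

def neutronics_input(c):
    return {"lattice": c["assembly"], "rated_power_MW": c["power_MW"]}

def thermal_hydraulics_input(c):
    return {"geometry": c["assembly"],
            "t_inlet_K": c["inlet_temp_K"],
            "heat_source_MW": c["power_MW"]}

n_in = neutronics_input(common)
th_in = thermal_hydraulics_input(common)

# The point of the single-source approach: generated inputs cannot
# disagree about shared quantities.
assert n_in["lattice"] == th_in["geometry"]
assert n_in["rated_power_MW"] == th_in["heat_source_MW"]
```

Generating both inputs from one source eliminates the classic failure mode of hand-maintained input decks drifting out of sync between coupled codes.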

  18. Implementation of a 3D halo neutral model in the TRANSP code and application to projected NSTX-U plasmas

    NASA Astrophysics Data System (ADS)

    Medley, S. S.; Liu, D.; Gorelenkova, M. V.; Heidbrink, W. W.; Stagner, L.

    2016-02-01

    A 3D halo neutral code developed at the Princeton Plasma Physics Laboratory and implemented for analysis using the TRANSP code is applied to projected National Spherical Torus eXperiment-Upgrade (NSTX-U) plasmas. The legacy TRANSP code did not handle halo neutrals properly since they were distributed over the plasma volume rather than remaining in the vicinity of the neutral beam footprint as is actually the case. The 3D halo neutral code uses a ‘beam-in-a-box’ model that encompasses both injected beam neutrals and resulting halo neutrals. Upon deposition by charge exchange, a subset of the full, one-half and one-third beam energy components produce first generation halo neutrals that are tracked through successive generations until an ionization event occurs or the descendant halos exit the box. The 3D halo neutral model and neutral particle analyzer (NPA) simulator in the TRANSP code have been benchmarked with the Fast-Ion D-Alpha simulation (FIDAsim) code, which provides Monte Carlo simulations of beam neutral injection, attenuation, halo generation, halo spatial diffusion, and photoemission processes. When using the same atomic physics database, TRANSP and FIDAsim simulations achieve excellent agreement on the spatial profile and magnitude of beam and halo neutral densities and the NPA energy spectrum. The simulations show that the halo neutral density can be comparable to the beam neutral density. These halo neutrals can double the NPA flux, but they have minor effects on the NPA energy spectrum shape. The TRANSP and FIDAsim simulations also suggest that the magnitudes of beam and halo neutral densities are relatively sensitive to the choice of the atomic physics databases.
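The generational halo-tracking loop described here can be illustrated with a toy Monte Carlo sketch; the step sizes, box dimension, and event probabilities below are invented for illustration and bear no relation to TRANSP's actual atomic physics.

```python
import random

# Toy sketch of tracking halo neutrals through successive generations.
# A neutral either ionizes (ending the chain), exits the box, or
# charge-exchanges into a next-generation halo neutral.
# All probabilities and the box size are illustrative, not TRANSP's physics.

random.seed(1)
BOX = 1.0  # box half-width (arbitrary units)

def follow(x, gen, counts):
    """Tally a neutral of generation `gen` at position x, then follow its fate."""
    counts[gen] = counts.get(gen, 0) + 1
    x += random.uniform(-0.3, 0.3)        # random-walk step (halo spatial diffusion)
    if abs(x) > BOX:
        return                             # descendant halo exits the box
    if random.random() < 0.4:
        return                             # ionization event ends the chain
    follow(x, gen + 1, counts)             # charge exchange: next halo generation

counts = {}
for _ in range(10000):
    follow(0.0, 0, counts)  # generation 0 = deposited beam neutral
print(counts)               # generation populations fall off geometrically
```

Each successive generation is a fixed fraction of the previous one, which is why the halo density can build up to a magnitude comparable to the beam neutral density near the footprint.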

  19. Implementation of a 3D halo neutral model in the TRANSP code and application to projected NSTX-U plasmas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Medley, S. S.; Liu, D.; Gorelenkova, M. V.

    2016-01-12

    A 3D halo neutral code developed at the Princeton Plasma Physics Laboratory and implemented for analysis using the TRANSP code is applied to projected National Spherical Torus eXperiment-Upgrade (NSTX-U) plasmas. The legacy TRANSP code did not handle halo neutrals properly since they were distributed over the plasma volume rather than remaining in the vicinity of the neutral beam footprint as is actually the case. The 3D halo neutral code uses a 'beam-in-a-box' model that encompasses both injected beam neutrals and resulting halo neutrals. Upon deposition by charge exchange, a subset of the full, one-half and one-third beam energy components produce first generation halo neutrals that are tracked through successive generations until an ionization event occurs or the descendant halos exit the box. The 3D halo neutral model and neutral particle analyzer (NPA) simulator in the TRANSP code have been benchmarked with the Fast-Ion D-Alpha simulation (FIDAsim) code, which provides Monte Carlo simulations of beam neutral injection, attenuation, halo generation, halo spatial diffusion, and photoemission processes. When using the same atomic physics database, TRANSP and FIDAsim simulations achieve excellent agreement on the spatial profile and magnitude of beam and halo neutral densities and the NPA energy spectrum. The simulations show that the halo neutral density can be comparable to the beam neutral density. These halo neutrals can double the NPA flux, but they have minor effects on the NPA energy spectrum shape. The TRANSP and FIDAsim simulations also suggest that the magnitudes of beam and halo neutral densities are relatively sensitive to the choice of the atomic physics databases.

  20. Electromagnetic plasma simulation in realistic geometries

    NASA Astrophysics Data System (ADS)

    Brandon, S.; Ambrosiano, J. J.; Nielsen, D.

    1991-08-01

    Particle-in-Cell (PIC) calculations have become an indispensable tool to model the nonlinear collective behavior of charged particle species in electromagnetic fields. Traditional finite difference codes, such as CONDOR (2-D) and ARGUS (3-D), are used extensively to design experiments and develop new concepts. A wide variety of physical processes can be modeled simply and efficiently by these codes. However, experiments have become more complex. Geometrical shapes and length scales are becoming increasingly more difficult to model. Spatial resolution requirements for the electromagnetic calculation force large grids and small time steps. Many hours of CRAY YMP time may be required to complete a 2-D calculation -- many more for 3-D calculations. In principle, the number of mesh points and particles need only be increased until all relevant physical processes are resolved. In practice, the size of a calculation is limited by the computer budget. As a result, experimental design is being limited by the ability to calculate, not by the experimenters' ingenuity or understanding of the physical processes involved. Several approaches to meet these computational demands are being pursued. Traditional PIC codes continue to be the major design tools. These codes are being actively maintained, optimized, and extended to handle larger and more complex problems. Two new formulations are being explored to relax the geometrical constraints of the finite difference codes. A modified finite volume test code, TALUS, uses a data structure compatible with that of standard finite difference meshes. This allows a basic conformal boundary/variable grid capability to be retrofitted to CONDOR. We are also pursuing an unstructured grid finite element code, MadMax. The unstructured mesh approach provides maximum flexibility in the geometrical model while also allowing local mesh refinement.
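The basic PIC cycle the record refers to (deposit charge to a grid, solve for the field, push the particles) can be sketched in normalized 1D electrostatic form; this is a generic toy model, not CONDOR or ARGUS, and it uses a simple Euler push where a production code would stagger velocities (leapfrog).

```python
import numpy as np

# Minimal 1D electrostatic PIC sketch (periodic box, normalized units):
# 1) deposit charge to a grid, 2) solve Poisson's equation with an FFT,
# 3) gather the field back to particles and push them. Illustrative only.

ng, npart, L, dt = 64, 10000, 2 * np.pi, 0.1
dx = L / ng
rng = np.random.default_rng(0)
x = rng.uniform(0, L, npart)        # particle positions
v = rng.normal(0, 1, npart)        # thermal velocities

for step in range(50):
    # 1) charge deposition (nearest-grid-point, uniform ion background of +1)
    rho = np.bincount((x / dx).astype(int) % ng, minlength=ng) / (npart / ng) - 1.0
    # 2) field solve: d2phi/dx2 = -rho, E = -dphi/dx, via FFT
    k = np.fft.fftfreq(ng, d=dx) * 2 * np.pi
    k[0] = 1.0                      # avoid divide-by-zero; the mean mode is zeroed next
    phi_k = np.fft.fft(rho) / k**2
    phi_k[0] = 0.0
    E = -np.real(np.fft.ifft(1j * k * phi_k))
    # 3) gather + push (electrons, charge -1; simple Euler step for brevity)
    Ep = E[(x / dx).astype(int) % ng]
    v -= Ep * dt
    x = (x + v * dt) % L

print(float(np.mean(v**2)))  # mean-square velocity stays of order 1
```

The resolution pressure described in the abstract is visible even here: the cost per step scales with both the particle count and the grid size, and the time step must resolve the fastest oscillation on the grid.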

  1. Supporting the Virtual Soldier With a Physics-Based Software Architecture

    DTIC Science & Technology

    2005-06-01


  2. From model conception to verification and validation, a global approach to multiphase Navier-Stoke models with an emphasis on volcanic explosive phenomenology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dartevelle, Sebastian

    2007-10-01

    Large-scale volcanic eruptions are hazardous events that cannot be described by detailed and accurate in situ measurements: hence, little to no real-time data exists to rigorously validate current computer models of these events. In addition, such phenomenology involves highly complex, nonlinear, and unsteady physical behaviors over many spatial and time scales. As a result, volcanic explosive phenomenology is poorly understood in terms of its physics, and inadequately constrained in terms of initial, boundary, and inflow conditions. Nevertheless, code verification and validation become even more critical because more and more volcanologists use numerical data for assessment and mitigation of volcanic hazards. In this report, we evaluate the process of model and code development in the context of geophysical multiphase flows. We describe: (1) the conception of a theoretical, multiphase, Navier-Stokes model, (2) its implementation into a numerical code, (3) the verification of the code, and (4) the validation of such a model within the context of turbulent and underexpanded jet physics. Within the validation framework, we suggest focusing on the key physics that control the volcanic clouds—namely, the momentum-driven supersonic jet and the buoyancy-driven turbulent plume. For instance, we propose to compare numerical results against a set of simple and well-constrained analog experiments, which uniquely and unambiguously represent each of the key phenomenologies.

  3. Chemical and physical characterization of the first stages of protoplanetary disk formation

    NASA Astrophysics Data System (ADS)

    Hincelin, Ugo

    2012-12-01

    Low mass stars, like our Sun, are born from the collapse of a molecular cloud. The matter falls toward the center of the cloud, creating a protoplanetary disk surrounding a protostar. Planets and other Solar System bodies will be formed in the disk. The chemical composition of the interstellar matter and its evolution during the formation of the disk are important for better understanding the formation process of these objects. I studied the chemical and physical evolution of this matter, from the cloud to the disk, using the chemical gas-grain code Nautilus. A sensitivity study of some parameters of the code (such as elemental abundances and parameters of grain surface chemistry) has been done. In particular, updates of the rate coefficients and branching ratios of the reactions in our chemical network proved important, affecting the abundances of some chemical species and the code's sensitivity to other parameters. Several physical models of a collapsing dense core have also been considered. The most comprehensive and robust approach has been to interface our chemical code with the radiation-magneto-hydrodynamic model of stellar formation RAMSES, in order to model in three dimensions the physical and chemical evolution of a young disk's formation. Our study showed that the disk keeps imprints of the past history of the matter, and so its chemical composition is sensitive to the initial conditions.

  4. Policy challenges in the fight against childhood obesity: low adherence in San Diego area schools to the California Education Code regulating physical education.

    PubMed

    Consiglieri, G; Leon-Chi, L; Newfield, R S

    2013-01-01

    We assessed adherence to the Physical Education (PE) requirements per the California Education Code in San Diego area schools. Surveys were administered anonymously to children and adolescents capable of physical activity who were visiting a specialty clinic at Rady Children's Hospital San Diego. The main questions asked were gender, grade, PE classes per week, and time spent doing PE. A total of 324 surveys were completed; 36 charter-school students, who are not required to abide by the state code, were excluded. We report on 288 students (59% female), mostly Hispanic (43%) or Caucasian (34%). In grades 1-6, 66.7% reported less than the 200 min per 10 school days required by the PE code. Only 20.7% had daily PE. The average number of PE days/week was 2.6. In grades 7-12, 42.2% reported less than the required 400 min per 10 school days. Daily PE was noted in 47.8%. The average number of PE days/week was 3.4. Almost 17% had no PE, most notably in the final two grades of high school (45.7%). There is low adherence to the California Physical Education mandate in the San Diego area, contributing to poor fitness and obesity. Lack of adequate PE is most evident in grades 1-6 and grades 11-12. Better resources, awareness, and enforcement are crucial.

  5. What do US and Canadian parents do to encourage or discourage physical activity among their 5-12 Year old children?

    PubMed

    Tu, Andrew W; O'Connor, Teresia M; Beauchamp, Mark R; Hughes, Sheryl O; Baranowski, Tom; Mâsse, Louise C

    2017-12-01

    Parents have the potential to substantively influence their child's physical activity. This study identified the parenting practices of US and Canadian parents that encourage or discourage their 5-12 year-old child's physical activity and examined differences in parenting practices by country, parental sex, age of child, and income. The sample consisted of 134 US and Canadian parents (54.5% US; 60.4% female) recruited from a web-based panel by a polling firm. The parents answered open-ended questions about what they and other parents do to encourage or discourage their child to be active. Responses were coded using a scheme previously developed to code items used in the published literature. Coded responses were summarized by domain and dimension, with differences in responses by country, parental sex, age of child, or household income assessed with a log-linear analysis. The 134 parents provided 649 and 397 responses to ways that parents encourage or discourage their child's physical activity, respectively. Over 70% of responses for practices that encourage physical activity were related to structure of the environment, parental encouragement, and co-participation. The most common response was co-participation in activity with the child. Of the practices that discourage physical activity, 67% were related to structure of the environment, lack of parental control, and modeling poor behaviors. The most common response was allowing screen time. There were no differences in response by country, parental sex, child age, or household income. Parents most often encouraged physical activity through structure and emotional support and discouraged physical activity through lack of structure and control. Understanding how parents influence their child's physical activity may help improve intervention strategies. The current results will inform the development of a physical activity parenting practices instrument.

  6. GPU acceleration of the Locally Selfconsistent Multiple Scattering code for first principles calculation of the ground state and statistical physics of materials

    NASA Astrophysics Data System (ADS)

    Eisenbach, Markus; Larkin, Jeff; Lutjens, Justin; Rennich, Steven; Rogers, James H.

    2017-02-01

    The Locally Self-consistent Multiple Scattering (LSMS) code solves the first principles Density Functional theory Kohn-Sham equation for a wide range of materials with a special focus on metals, alloys and metallic nano-structures. It has traditionally exhibited near perfect scalability on massively parallel high performance computer architectures. We present our efforts to exploit GPUs to accelerate the LSMS code to enable first principles calculations of O(100,000) atoms and statistical physics sampling of finite temperature properties. We reimplement the scattering matrix calculation for GPUs with a block matrix inversion algorithm that only uses accelerator memory. Using the Cray XK7 system Titan at the Oak Ridge Leadership Computing Facility we achieve a sustained performance of 14.5 PFlop/s and a speedup of 8.6 compared to the CPU-only code.
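The blocked inversion strategy mentioned above can be illustrated with a Schur-complement sketch in plain NumPy; this shows the algebra only, not the LSMS GPU implementation or its memory layout.

```python
import numpy as np

# Sketch of block matrix inversion via the Schur complement, the kind of
# blocked algorithm that maps well onto accelerator memory because it works
# on sub-blocks. Plain NumPy illustration, not the LSMS GPU code.

def block_inverse(M, k):
    """Invert M by partitioning it as [[A, B], [C, D]] with A of size k x k."""
    A, B = M[:k, :k], M[:k, k:]
    C, D = M[k:, :k], M[k:, k:]
    Ainv = np.linalg.inv(A)
    S = D - C @ Ainv @ B                 # Schur complement of A
    Sinv = np.linalg.inv(S)
    top_left = Ainv + Ainv @ B @ Sinv @ C @ Ainv
    top_right = -Ainv @ B @ Sinv
    bot_left = -Sinv @ C @ Ainv
    return np.block([[top_left, top_right], [bot_left, Sinv]])

rng = np.random.default_rng(0)
M = rng.standard_normal((6, 6)) + 6 * np.eye(6)   # well-conditioned test matrix
assert np.allclose(block_inverse(M, 3), np.linalg.inv(M))
```

The design appeal is that only block-sized operands (here A, B, C, D and their products) need to live in fast accelerator memory at any one time, with the two smaller inversions applied recursively in a real implementation.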

  7. GPU acceleration of the Locally Selfconsistent Multiple Scattering code for first principles calculation of the ground state and statistical physics of materials

    DOE PAGES

    Eisenbach, Markus; Larkin, Jeff; Lutjens, Justin; ...

    2016-07-12

    The Locally Self-consistent Multiple Scattering (LSMS) code solves the first principles Density Functional theory Kohn–Sham equation for a wide range of materials with a special focus on metals, alloys and metallic nano-structures. It has traditionally exhibited near perfect scalability on massively parallel high performance computer architectures. In this paper, we present our efforts to exploit GPUs to accelerate the LSMS code to enable first principles calculations of O(100,000) atoms and statistical physics sampling of finite temperature properties. We reimplement the scattering matrix calculation for GPUs with a block matrix inversion algorithm that only uses accelerator memory. Finally, using the Cray XK7 system Titan at the Oak Ridge Leadership Computing Facility we achieve a sustained performance of 14.5 PFlop/s and a speedup of 8.6 compared to the CPU-only code.

  8. Sandia National Laboratories analysis code data base

    NASA Astrophysics Data System (ADS)

    Peterson, C. W.

    1994-11-01

    Sandia National Laboratories' mission is to solve important problems in the areas of national defense, energy security, environmental integrity, and industrial technology. The laboratories' strategy for accomplishing this mission is to conduct research to provide an understanding of the important physical phenomena underlying any problem, and then to construct validated computational models of the phenomena which can be used as tools to solve the problem. In the course of implementing this strategy, Sandia's technical staff has produced a wide variety of numerical problem-solving tools which they use regularly in the design, analysis, performance prediction, and optimization of Sandia components, systems, and manufacturing processes. This report provides the relevant technical and accessibility data on the numerical codes used at Sandia, including information on the technical competency or capability area that each code addresses, code 'ownership' and release status, and references describing the physical models and numerical implementation.

  9. GBS: Global 3D simulation of tokamak edge region

    NASA Astrophysics Data System (ADS)

    Zhu, Ben; Fisher, Dustin; Rogers, Barrett; Ricci, Paolo

    2012-10-01

    A 3D two-fluid global code, the Global Braginskii Solver (GBS), is being developed to explore the physics of turbulent transport, confinement, self-consistent profile formation, pedestal scaling and related phenomena in the edge region of tokamaks. Aimed at solving the drift-reduced Braginskii equations [1] in complex magnetic geometry, GBS is used for turbulence simulation in the SOL region. In a recent upgrade, the simulation domain was expanded into the closed flux region with twist-shift boundary conditions. Hence, the new GBS code is able to explore global transport physics in an annular full-torus domain from the top of the pedestal into the far SOL. We are in the process of identifying and analyzing the linear and nonlinear instabilities in the system using the new GBS code. Preliminary results will be presented and compared with other codes where possible. [1] A. Zeiler, J. F. Drake and B. Rogers, Phys. Plasmas 4, 2134 (1997)

  10. Improvement of Modeling HTGR Neutron Physics by Uncertainty Analysis with the Use of Cross-Section Covariance Information

    NASA Astrophysics Data System (ADS)

    Boyarinov, V. F.; Grol, A. V.; Fomichenko, P. A.; Ternovykh, M. Yu

    2017-01-01

    This work is aimed at improvement of HTGR neutron physics design calculations by application of uncertainty analysis with the use of cross-section covariance information. Methodology and codes for preparation of multigroup libraries of covariance information for individual isotopes from the basic 44-group library of SCALE-6 code system were developed. A 69-group library of covariance information in a special format for main isotopes and elements typical for high temperature gas cooled reactors (HTGR) was generated. This library can be used for estimation of uncertainties, associated with nuclear data, in analysis of HTGR neutron physics with design codes. As an example, calculations of one-group cross-section uncertainties for fission and capture reactions for main isotopes of the MHTGR-350 benchmark, as well as uncertainties of the multiplication factor (k∞) for the MHTGR-350 fuel compact cell model and fuel block model were performed. These uncertainties were estimated by the developed technology with the use of WIMS-D code and modules of SCALE-6 code system, namely, by TSUNAMI, KENO-VI and SAMS. Eight most important reactions on isotopes for MHTGR-350 benchmark were identified, namely: 10B(capt), 238U(n,γ), ν5, 235U(n,γ), 238U(el), natC(el), 235U(fiss)-235U(n,γ), 235U(fiss).

  11. Maestro and Castro: Simulation Codes for Astrophysical Flows

    NASA Astrophysics Data System (ADS)

    Zingale, Michael; Almgren, Ann; Beckner, Vince; Bell, John; Friesen, Brian; Jacobs, Adam; Katz, Maximilian P.; Malone, Christopher; Nonaka, Andrew; Zhang, Weiqun

    2017-01-01

    Stellar explosions are multiphysics problems—modeling them requires the coordinated input of gravity solvers, reaction networks, radiation transport, and hydrodynamics together with microphysics recipes to describe the physics of matter under extreme conditions. Furthermore, these models involve following a wide range of spatial and temporal scales, which puts tough demands on simulation codes. We developed the codes Maestro and Castro to meet the computational challenges of these problems. Maestro uses a low Mach number formulation of the hydrodynamics to efficiently model convection. Castro solves the fully compressible radiation hydrodynamics equations to capture the explosive phases of stellar phenomena. Both codes are built upon the BoxLib adaptive mesh refinement library, which prepares them for next-generation exascale computers. Common microphysics shared between the codes allows us to transfer a problem from the low Mach number regime in Maestro to the explosive regime in Castro. Importantly, both codes are freely available (https://github.com/BoxLib-Codes). We will describe the design of the codes and some of their science applications, as well as future development directions.Support for development was provided by NSF award AST-1211563 and DOE/Office of Nuclear Physics grant DE-FG02-87ER40317 to Stony Brook and by the Applied Mathematics Program of the DOE Office of Advance Scientific Computing Research under US DOE contract DE-AC02-05CH11231 to LBNL.

  12. Extremely accurate sequential verification of RELAP5-3D

    DOE PAGES

    Mesina, George L.; Aumiller, David L.; Buschman, Francis X.

    2015-11-19

    Large computer programs like RELAP5-3D solve complex systems of governing, closure and special process equations to model the underlying physics of nuclear power plants. Further, these programs incorporate many other features for physics, input, output, data management, user interaction, and post-processing. For software quality assurance, the code must be verified and validated before being released to users. For RELAP5-3D, verification and validation are restricted to nuclear power plant applications. Verification means ensuring that the program is built right by checking that it meets its design specifications, comparing coding to algorithms and equations, and comparing calculations against analytical solutions and the method of manufactured solutions. Sequential verification performs these comparisons initially, but thereafter only compares code calculations between consecutive code versions to demonstrate that no unintended changes have been introduced. Recently, an automated, highly accurate sequential verification method has been developed for RELAP5-3D. The method also tests that no unintended consequences result from code development in the following code capabilities: repeating a timestep advancement, continuing a run from a restart file, multiple cases in a single code execution, and modes of coupled/uncoupled operation. In conclusion, mathematical analyses of the adequacy of the checks used in the comparisons are provided.
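The core of sequential verification (comparing a new version's calculations against the previous version's stored results) can be sketched as follows; the variable names, values, and run format are hypothetical, not RELAP5-3D's actual output.

```python
# Sketch of the sequential-verification idea: results from a new code
# version are compared against the previous version's stored results, so
# any unintended change shows up as a difference. The run format and the
# variable names are hypothetical, not RELAP5-3D's actual output.

def compare_runs(baseline, candidate, tol=0.0):
    """Return the names of variables that differ beyond tol between two runs."""
    diffs = []
    for name, ref in baseline.items():
        new = candidate.get(name)
        if new is None or abs(new - ref) > tol:
            diffs.append(name)
    return diffs

baseline  = {"pressure": 15.50e6, "void_fraction": 0.031, "mass_error": 1.2e-9}
candidate = {"pressure": 15.50e6, "void_fraction": 0.031, "mass_error": 1.3e-9}

print(compare_runs(baseline, candidate))            # strict: flags mass_error
print(compare_runs(baseline, candidate, tol=1e-8))  # tolerant: no differences
```

A strict (zero-tolerance) comparison is what makes the method "extremely accurate": intended changes must be explicitly re-baselined, so silent drift between consecutive versions cannot hide inside a loose tolerance.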

  13. Status of BOUT fluid turbulence code: improvements and verification

    NASA Astrophysics Data System (ADS)

    Umansky, M. V.; Lodestro, L. L.; Xu, X. Q.

    2006-10-01

    BOUT is an electromagnetic fluid turbulence code for tokamak edge plasma [1]. BOUT performs time integration of reduced Braginskii plasma fluid equations, using spatial discretization in realistic geometry and employing a standard ODE integration package, PVODE. BOUT has been applied to several tokamak experiments, and in some cases calculated spectra of turbulent fluctuations compared favorably to experimental data. On the other hand, the desire to understand the code results better and to gain more confidence in it motivated investing effort in rigorous verification of BOUT. In parallel with the testing, the code underwent substantial modification, mainly to improve its readability and the tractability of physical terms, with some algorithmic improvements as well. In the verification process, a series of linear and nonlinear test problems was applied to BOUT, targeting different subgroups of physical terms. The tests include reproducing basic electrostatic and electromagnetic plasma modes in simplified geometry, axisymmetric benchmarks against the 2D edge code UEDGE in real divertor geometry, and neutral fluid benchmarks against the hydrodynamic code LCPFCT. After completion of the testing, the new version of the code is being applied to actual tokamak edge turbulence problems, and the results will be presented. [1] X. Q. Xu et al., Contr. Plas. Phys., 36, 158 (1998). *Work performed for USDOE by Univ. Calif. LLNL under contract W-7405-ENG-48.

  14. Assessing Preschool Children's Physical Activity: The Observational System for Recording Physical Activity in Children-Preschool Version

    ERIC Educational Resources Information Center

    Brown, William H.; Pfeiffer, Karin A.; McIver, Kerry L.; Dowda, Marsha; Almeida, M. Joao C. A.; Pate, Russell R.

    2006-01-01

    In this paper we present initial information concerning a new direct observation system--the Observational System for Recording Physical Activity in Children-Preschool Version. The system will allow researchers to record young children's physical activity levels while also coding the topography of their physical activity, as well as detailed…

  15. Toward Supersonic Retropropulsion CFD Validation

    NASA Technical Reports Server (NTRS)

    Kleb, Bil; Schauerhamer, D. Guy; Trumble, Kerry; Sozer, Emre; Barnhardt, Michael; Carlson, Jan-Renee; Edquist, Karl

    2011-01-01

    This paper begins the process of verifying and validating computational fluid dynamics (CFD) codes for supersonic retropropulsive flows. Four CFD codes (DPLR, FUN3D, OVERFLOW, and US3D) are used to perform various numerical and physical modeling studies toward the goal of comparing predictions with a wind tunnel experiment specifically designed to support CFD validation. Numerical studies run the gamut in rigor from code-to-code comparisons to observed order-of-accuracy tests. Results indicate that for this complex flowfield, involving time-dependent shocks and vortex shedding, the design order of accuracy is not clearly evident. Also explored is the extent of physical modeling necessary to predict the salient flowfield features found in high-speed Schlieren images and surface pressure measurements taken during the validation experiment. Physical modeling studies include geometric items such as wind tunnel wall and sting mount interference, as well as turbulence modeling that ranges from a RANS (Reynolds-Averaged Navier-Stokes) 2-equation model to DES (Detached Eddy Simulation) models. These studies indicate that tunnel wall interference is minimal for the cases investigated; model mounting hardware effects are confined to the aft end of the model; and sparse grid resolution and turbulence modeling can damp or entirely dissipate the unsteadiness of this self-excited flow.
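An observed order-of-accuracy test of the kind mentioned above computes the convergence rate from solutions on three systematically refined grids (Richardson-style); the numbers below are made up to show the arithmetic, not taken from the study.

```python
import math

# Sketch of an observed order-of-accuracy computation from three grid levels,
# as used in code-verification studies. f1 is the finest-grid value, f3 the
# coarsest, and r the constant refinement ratio. Values here are invented.

def observed_order(f1, f2, f3, r):
    """p = log((f3 - f2) / (f2 - f1)) / log(r), for uniform refinement ratio r."""
    return math.log((f3 - f2) / (f2 - f1)) / math.log(r)

p = observed_order(f1=1.0001, f2=1.0004, f3=1.0016, r=2.0)
print(round(p, 2))  # -> 2.0 for this second-order-like sequence
```

When the computed p matches the scheme's design order, the asymptotic range has been reached; for unsteady, self-excited flows like the one above, the sequence of grid values often fails to behave this cleanly, which is what the paper reports.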

  16. MO-E-18C-04: Advanced Computer Simulation and Visualization Tools for Enhanced Understanding of Core Medical Physics Concepts

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Naqvi, S

    2014-06-15

    Purpose: Most medical physics programs emphasize proficiency in routine clinical calculations and QA. The formulaic aspect of these calculations and the prescriptive nature of measurement protocols obviate the need to frequently apply basic physical principles, which therefore gradually decay from memory. E.g., few students appreciate the role of electron transport in photon dose, making it difficult to understand key concepts such as dose buildup, electronic disequilibrium effects and Bragg-Gray theory. These conceptual deficiencies manifest when the physicist encounters a new system requiring knowledge beyond routine activities. Methods: Two interactive computer simulation tools are developed to facilitate deeper learning of physical principles. One is a Monte Carlo code written with a strong educational aspect. The code can “label” regions and interactions to highlight specific aspects of the physics, e.g., certain regions can be designated as “starters” or “crossers,” and any interaction type can be turned on and off. Full 3D tracks with specific portions highlighted further enhance the visualization of radiation transport problems. The second code calculates and displays trajectories of a collection of electrons under an arbitrary space/time-dependent Lorentz force using relativistic kinematics. Results: Using the Monte Carlo code, the student can interactively study photon and electron transport through visualization of dose components, particle tracks, and interaction types. The code can, for instance, be used to study the kerma-dose relationship, explore electronic disequilibrium near interfaces, or visualize kernels by using interaction forcing. The electromagnetic simulator enables the student to explore accelerating mechanisms and particle optics in devices such as cyclotrons and linacs. Conclusion: The proposed tools are designed to enhance understanding of abstract concepts by highlighting various aspects of the physics. The simulations serve as virtual experiments that give deeper and longer-lasting understanding of core principles. The student can then make sound judgements in novel situations encountered beyond routine clinical activities.
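The "labeled region" idea can be mimicked with a toy 1D transport sketch; the geometry, attenuation coefficient, and "starter"/"crosser" labels below are invented for illustration and are not the described code's actual design.

```python
import random

# Toy Monte Carlo transport sketch with "labeled" regions, in the spirit of
# the teaching code described above: particles originating in a designated
# region are tagged "starters", and particles that traverse it without
# interacting are tagged "crossers". All numbers here are invented.

random.seed(2)
MU = 1.0                 # attenuation coefficient (1/cm), illustrative
REGION = (2.0, 3.0)      # the labeled slab region (cm)

def fly(x0):
    """Transport one particle from x0 along +x; return the labels it earned."""
    labels = set()
    if REGION[0] <= x0 < REGION[1]:
        labels.add("starter")
    x_end = x0 + random.expovariate(MU)   # free path to the next interaction
    if x0 < REGION[0] and x_end >= REGION[1]:
        labels.add("crosser")             # traversed the region uncollided
    return labels

tallies = {"starter": 0, "crosser": 0}
for _ in range(20000):
    for label in fly(random.uniform(0.0, 5.0)):
        tallies[label] += 1
print(tallies)
```

Separating the tallies by label is exactly what lets a student see, e.g., which part of a dose signal comes from particles born in a region versus those merely passing through it.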

  17. Conference Proceedings on Ionospheric Modification and Its Potential to Enhance or Degrade the Performance of Military Systems Held in Bergen, Norway on 28-31 May 1990 (La Modification de l’Ionosphere et son Potentiel d’Amelioration ou de Degradation des Performances des Systemes Militaires)

    DTIC Science & Technology

    1990-05-31


  18. General linear codes for fault-tolerant matrix operations on processor arrays

    NASA Technical Reports Server (NTRS)

    Nair, V. S. S.; Abraham, J. A.

    1988-01-01

    Various checksum codes have been suggested for fault-tolerant matrix computations on processor arrays. Use of these codes is limited due to potential roundoff and overflow errors. Numerical errors may also be misconstrued as errors due to physical faults in the system. In this paper, a set of linear codes is identified which can be used for fault-tolerant matrix operations such as matrix addition, multiplication, transposition, and LU decomposition, with minimum numerical error. Encoding schemes are given for some example codes which fall under the general set of codes. With the help of experiments, a rule of thumb for the selection of a particular code for a given application is derived.
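The classic row/column checksum scheme that underlies such codes can be sketched as follows: append a row of column sums to A and a column of row sums to B, and the product then carries checksums that locate a single corrupted entry at the intersection of the failing row and column checks. This is the generic textbook construction, not the specific codes proposed in the paper.

```python
import numpy as np

# Illustration of checksum-coded matrix multiplication for fault tolerance:
# the full-checksum product of A and B preserves row and column checksums,
# so a single corrupted entry is located by the failing row and column.

def column_checksum(A):
    """Append a row of column sums to A."""
    return np.vstack([A, A.sum(axis=0)])

def row_checksum(B):
    """Append a column of row sums to B."""
    return np.hstack([B, B.sum(axis=1, keepdims=True)])

A = np.array([[1., 2.], [3., 4.]])
B = np.array([[5., 6.], [7., 8.]])
C = column_checksum(A) @ row_checksum(B)   # full-checksum product

C_faulty = C.copy()
C_faulty[0, 0] += 2.5                      # inject a fault into one data entry

# Checksum test: each data row/column must sum to its checksum entry.
row_err = C_faulty[:-1, :-1].sum(axis=1) - C_faulty[:-1, -1]
col_err = C_faulty[:-1, :-1].sum(axis=0) - C_faulty[-1, :-1]
print(int(np.argmax(np.abs(row_err))), int(np.argmax(np.abs(col_err))))  # -> 0 0
```

The roundoff issue the abstract raises is visible here too: in floating point, the checksum residuals are rarely exactly zero, so a detection threshold must separate numerical error from genuine faults, which is what motivates codes designed for minimum numerical error.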

  19. Code-to-Code Comparison, and Material Response Modeling of Stardust and MSL using PATO and FIAT

    NASA Technical Reports Server (NTRS)

    Omidy, Ali D.; Panerai, Francesco; Martin, Alexandre; Lachaud, Jean R.; Cozmuta, Ioana; Mansour, Nagi N.

    2015-01-01

    This report provides a code-to-code comparison between PATO, a recently developed high-fidelity material response code, and FIAT, NASA's legacy code for ablation response modeling. The goal is to demonstrate that FIAT and PATO generate the same results when using the same models. Test cases of increasing complexity are used, from both arc-jet testing and a flight experiment. When using the exact same physical models, material properties and boundary conditions, the two codes give results that are within 2% of each other. The minor discrepancy is attributed to the inclusion of the gas phase heat capacity (cp) in the energy equation in PATO, but not in FIAT.

  20. A preliminary Monte Carlo study for the treatment head of a carbon-ion radiotherapy facility using TOPAS

    NASA Astrophysics Data System (ADS)

    Liu, Hongdong; Zhang, Lian; Chen, Zhi; Liu, Xinguo; Dai, Zhongying; Li, Qiang; Xu, Xie George

    2017-09-01

    In medical physics it is desirable to have a Monte Carlo code that is reliable and flexible, yet not overly complex, for dose verification, optimization, and component design. TOPAS is a newly developed Monte Carlo simulation tool which combines the extensive radiation physics libraries available in the Geant4 code with easy-to-use geometry and support for visualization. Although TOPAS has been widely tested and verified in simulations of proton therapy, there has been no reported application to carbon ion therapy. To evaluate the feasibility and accuracy of TOPAS simulations for carbon ion therapy, a licensed TOPAS code (version 3_0_p1) was used to carry out a dosimetric study of therapeutic carbon ions. Depth-dose profiles based on different physics models were obtained and compared with measurements. It is found that the G4QMD model is at least as accurate as the TOPAS default BIC physics model for carbon ions, and when the energy is increased to relatively high levels such as 400 MeV/u, the G4QMD model shows preferable performance. Also, simulations of special components used in the treatment head at the Institute of Modern Physics facility were conducted to investigate the spread-out Bragg peak (SOBP) dose distribution in water. The physical dose in water of the SOBP was found to be consistent with the aim of the 6 cm ridge filter.

  1. electromagnetics, eddy current, computer codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gartling, David

    TORO Version 4 is designed for finite element analysis of steady, transient and time-harmonic, multi-dimensional, quasi-static problems in electromagnetics. The code allows simulation of electrostatic fields, steady current flows, magnetostatics and eddy current problems in plane or axisymmetric, two-dimensional geometries. TORO is easily coupled to heat conduction and solid mechanics codes to allow multi-physics simulations to be performed.

  2. RELAP-7 Closure Correlations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zou, Ling; Berry, R. A.; Martineau, R. C.

    The RELAP-7 code is the next-generation nuclear reactor system safety analysis code being developed at the Idaho National Laboratory (INL). The code is based on INL's modern scientific software development framework, MOOSE (Multi-Physics Object Oriented Simulation Environment). The overall design goal of RELAP-7 is to take advantage of the previous thirty years of advancements in computer architecture, software design, numerical integration methods, and physical models. The end result will be a reactor systems analysis capability that retains and improves upon RELAP5's and TRACE's capabilities and extends their analysis capabilities to all reactor system simulation scenarios. The RELAP-7 code utilizes the well-posed seven-equation two-phase flow model for compressible two-phase flow. Closure models used in the TRACE code have been reviewed and selected to reflect the progress made during the past decades and to provide a basis for the closure correlations implemented in the RELAP-7 code. This document provides a summary of the closure correlations that are currently implemented in the RELAP-7 code. The closure correlations include sub-grid models that describe interactions between the fluids and the flow channel, and interactions between the two phases.

  3. The EGS5 Code System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hirayama, Hideo; Namito, Yoshihito; /KEK, Tsukuba

    2005-12-20

    In the nineteen years since EGS4 was released, it has been used in a wide variety of applications, particularly in medical physics, radiation measurement studies, and industrial development. Every new user and every new application brings new challenges for Monte Carlo code designers, and code refinements and bug fixes eventually result in a code that becomes difficult to maintain. Several of the code modifications represented significant advances in electron and photon transport physics, and required a more substantial intervention than code patching. Moreover, the arcane MORTRAN3[48] computer language of EGS4 was highest on the complaint list of the users of EGS4. The size of the EGS4 user base is difficult to measure, as there never existed a formal user registration process. However, some idea of the numbers may be gleaned from the number of EGS4 manuals that were produced and distributed at SLAC: almost three thousand. Consequently, the EGS5 project was undertaken. It was decided to employ the FORTRAN 77 compiler, yet include, as much as possible, the structural beauty and power of MORTRAN3. This report consists of four chapters and several appendices. Chapter 1 is an introduction to EGS5 and to this report in general. We suggest that you read it. Chapter 2 is a major update of similar chapters in the old EGS4 report[126] (SLAC-265) and the old EGS3 report[61] (SLAC-210), in which all the details of the old physics (i.e., models which were carried over from EGS4) and the new physics are gathered together. The descriptions of the new physics are extensive, and not for the faint of heart. Detailed knowledge of the contents of Chapter 2 is not essential in order to use EGS, but sophisticated users should be aware of its contents. In particular, details of the restrictions on the range of applicability of EGS are dispersed throughout the chapter. First-time users of EGS should skip Chapter 2 and come back to it later if necessary. 
With the release of the EGS4 version, a deliberate attempt was made to present example problems in order to help the user "get started", and we follow that spirit in this report. A series of elementary tutorial user codes are presented in Chapter 3, with more sophisticated sample user codes described in Chapter 4. Novice EGS users will find it helpful to read through the initial sections of the EGS5 User Manual (provided in Appendix B of this report), proceeding then to work through the tutorials in Chapter 3. The User Manuals and other materials found in the appendices contain detailed flow charts, variable lists, and subprogram descriptions of EGS5 and PEGS. Included are step-by-step instructions for developing basic EGS5 user codes and for accessing all of the physics options available in EGS5 and PEGS. Once acquainted with the basic structure of EGS5, users should find the appendices the most frequently consulted sections of this report.

  4. Evaluation of CFETR as a Fusion Nuclear Science Facility using multiple system codes

    NASA Astrophysics Data System (ADS)

    Chan, V. S.; Costley, A. E.; Wan, B. N.; Garofalo, A. M.; Leuer, J. A.

    2015-02-01

    This paper presents the results of a multi-system codes benchmarking study of the recently published China Fusion Engineering Test Reactor (CFETR) pre-conceptual design (Wan et al 2014 IEEE Trans. Plasma Sci. 42 495). Two system codes, the General Atomics System Code (GASC) and the Tokamak Energy System Code (TESC), using different methodologies to arrive at CFETR performance parameters under the same CFETR constraints, show that the correlation between the physics performance and the fusion performance is consistent, and the computed parameters are in good agreement. Optimization of the first wall surface for tritium breeding and the minimization of the machine size are highly compatible. Variations of the plasma currents and profiles lead to changes in the required normalized physics performance; however, they do not significantly affect the optimized size of the machine. GASC and TESC have also been used to explore a lower aspect ratio, larger volume plasma taking advantage of the engineering flexibility in the CFETR design. Assuming ITER steady-state scenario physics, the larger plasma together with a moderately higher BT and Ip can result in a high-gain Qfus ~ 12, Pfus ~ 1 GW machine approaching DEMO-like performance. It is concluded that the CFETR baseline mode can meet the minimum goal of the Fusion Nuclear Science Facility (FNSF) mission and that advanced physics will enable it to address comprehensively the outstanding critical technology gaps on the path to a demonstration reactor (DEMO). Before proceeding with CFETR construction, steady-state operation has to be demonstrated, further development is needed to solve the divertor heat load issue, and blankets have to be designed with a tritium breeding ratio (TBR) >1 as a target.

  5. The AGORA High-resolution Galaxy Simulations Comparison Project II: Isolated disk test

    DOE PAGES

    Kim, Ji-hoon; Agertz, Oscar; Teyssier, Romain; ...

    2016-12-20

    Using an isolated Milky Way-mass galaxy simulation, we compare results from 9 state-of-the-art gravito-hydrodynamics codes widely used in the numerical community. We utilize the infrastructure we have built for the AGORA High-resolution Galaxy Simulations Comparison Project. This includes the common disk initial conditions, common physics models (e.g., radiative cooling and UV background by the standardized package Grackle) and common analysis toolkit yt, all of which are publicly available. Subgrid physics models such as Jeans pressure floor, star formation, supernova feedback energy, and metal production are carefully constrained across code platforms. With numerical accuracy that resolves the disk scale height, we find that the codes overall agree well with one another in many dimensions including: gas and stellar surface densities, rotation curves, velocity dispersions, density and temperature distribution functions, disk vertical heights, stellar clumps, star formation rates, and Kennicutt-Schmidt relations. Quantities such as velocity dispersions are very robust (agreement within a few tens of percent at all radii) while measures like newly-formed stellar clump mass functions show more significant variation (difference by up to a factor of ~3). Systematic differences exist, for example, between mesh-based and particle-based codes in the low density region, and between more diffusive and less diffusive schemes in the high density tail of the density distribution. Yet intrinsic code differences are generally small compared to the variations in numerical implementations of the common subgrid physics such as supernova feedback. Lastly, our experiment reassures that, if adequately designed in accordance with our proposed common parameters, results of a modern high-resolution galaxy formation simulation are more sensitive to input physics than to intrinsic differences in numerical schemes.

  6. The AGORA High-resolution Galaxy Simulations Comparison Project II: Isolated disk test

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Ji-hoon; Agertz, Oscar; Teyssier, Romain

    Using an isolated Milky Way-mass galaxy simulation, we compare results from 9 state-of-the-art gravito-hydrodynamics codes widely used in the numerical community. We utilize the infrastructure we have built for the AGORA High-resolution Galaxy Simulations Comparison Project. This includes the common disk initial conditions, common physics models (e.g., radiative cooling and UV background by the standardized package Grackle) and common analysis toolkit yt, all of which are publicly available. Subgrid physics models such as Jeans pressure floor, star formation, supernova feedback energy, and metal production are carefully constrained across code platforms. With numerical accuracy that resolves the disk scale height, we find that the codes overall agree well with one another in many dimensions including: gas and stellar surface densities, rotation curves, velocity dispersions, density and temperature distribution functions, disk vertical heights, stellar clumps, star formation rates, and Kennicutt-Schmidt relations. Quantities such as velocity dispersions are very robust (agreement within a few tens of percent at all radii) while measures like newly-formed stellar clump mass functions show more significant variation (difference by up to a factor of ~3). Systematic differences exist, for example, between mesh-based and particle-based codes in the low density region, and between more diffusive and less diffusive schemes in the high density tail of the density distribution. Yet intrinsic code differences are generally small compared to the variations in numerical implementations of the common subgrid physics such as supernova feedback. Lastly, our experiment reassures that, if adequately designed in accordance with our proposed common parameters, results of a modern high-resolution galaxy formation simulation are more sensitive to input physics than to intrinsic differences in numerical schemes.

  7. THE AGORA HIGH-RESOLUTION GALAXY SIMULATIONS COMPARISON PROJECT. II. ISOLATED DISK TEST

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Ji-hoon; Agertz, Oscar; Teyssier, Romain

    Using an isolated Milky Way-mass galaxy simulation, we compare results from nine state-of-the-art gravito-hydrodynamics codes widely used in the numerical community. We utilize the infrastructure we have built for the AGORA High-resolution Galaxy Simulations Comparison Project. This includes the common disk initial conditions, common physics models (e.g., radiative cooling and UV background by the standardized package Grackle) and common analysis toolkit yt, all of which are publicly available. Subgrid physics models such as Jeans pressure floor, star formation, supernova feedback energy, and metal production are carefully constrained across code platforms. With numerical accuracy that resolves the disk scale height, we find that the codes overall agree well with one another in many dimensions including: gas and stellar surface densities, rotation curves, velocity dispersions, density and temperature distribution functions, disk vertical heights, stellar clumps, star formation rates, and Kennicutt–Schmidt relations. Quantities such as velocity dispersions are very robust (agreement within a few tens of percent at all radii) while measures like newly formed stellar clump mass functions show more significant variation (difference by up to a factor of ∼3). Systematic differences exist, for example, between mesh-based and particle-based codes in the low-density region, and between more diffusive and less diffusive schemes in the high-density tail of the density distribution. Yet intrinsic code differences are generally small compared to the variations in numerical implementations of the common subgrid physics such as supernova feedback. Our experiment reassures that, if adequately designed in accordance with our proposed common parameters, results of a modern high-resolution galaxy formation simulation are more sensitive to input physics than to intrinsic differences in numerical schemes.

  8. Particle-gas dynamics in the protoplanetary nebula

    NASA Technical Reports Server (NTRS)

    Cuzzi, Jeffrey N.; Champney, Joelle M.; Dobrovolskis, Anthony R.

    1991-01-01

    In the past year we made significant progress in improving our fundamental understanding of the physics of particle-gas dynamics in the protoplanetary nebula. Having brought our code to a state of fairly robust functionality, we devoted significant effort to optimizing it for running long cases. We optimized the code for vectorization to the extent that it now runs eight times faster than before. The following subject areas are covered: physical improvements to the model; numerical results; Reynolds averaging of fluid equations; and modeling of turbulence and viscosity.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carpenter, John H.; Belcourt, Kenneth Noel

    Completion of the CASL L3 milestone THM.CFD.P6.03 provides a tabular material properties capability to the Hydra code. A tabular interpolation package used in Sandia codes was modified to support the needs of multi-phase solvers in Hydra. Use of the interface is described. The package was released to Hydra under a government use license. A dummy physics was created in Hydra to prototype use of the interpolation routines. Finally, a test using the dummy physics verifies the correct behavior of the interpolation for a test water table.
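
    The Sandia package and its Hydra interface are not public, but the kind of lookup being verified can be sketched as a bilinear interpolation over a small property table. All grid values and densities below are made up for illustration:

```python
import numpy as np

def bilinear(Tgrid, Pgrid, table, T, P):
    """Interpolate table[i, j] = prop(Tgrid[i], Pgrid[j]) at the point (T, P)."""
    i = min(np.searchsorted(Tgrid, T, side="right") - 1, len(Tgrid) - 2)
    j = min(np.searchsorted(Pgrid, P, side="right") - 1, len(Pgrid) - 2)
    t = (T - Tgrid[i]) / (Tgrid[i + 1] - Tgrid[i])
    p = (P - Pgrid[j]) / (Pgrid[j + 1] - Pgrid[j])
    return ((1 - t) * (1 - p) * table[i, j] + t * (1 - p) * table[i + 1, j]
            + (1 - t) * p * table[i, j + 1] + t * p * table[i + 1, j + 1])

Tgrid = np.array([300.0, 350.0, 400.0])   # temperature nodes, K (hypothetical)
Pgrid = np.array([1.0e5, 5.0e5])          # pressure nodes, Pa (hypothetical)
rho = np.array([[996.0, 996.2],           # fabricated water densities, kg/m^3
                [973.0, 973.3],
                [937.0, 937.5]])

# On a grid point the interpolant reproduces the tabulated value exactly:
assert np.isclose(bilinear(Tgrid, Pgrid, rho, 350.0, 1.0e5), 973.0)
# Between points it blends the four surrounding entries:
val = bilinear(Tgrid, Pgrid, rho, 325.0, 3.0e5)
assert 937.0 < val < 996.5
```

    Reproducing tabulated values exactly at the nodes, as tested here, is the same correctness check the milestone's dummy-physics test performs against the water table.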

  10. Physical-Layer Network Coding for VPN in TDM-PON

    NASA Astrophysics Data System (ADS)

    Wang, Qike; Tse, Kam-Hon; Chen, Lian-Kuan; Liew, Soung-Chang

    2012-12-01

    We experimentally demonstrate a novel optical physical-layer network coding (PNC) scheme over a time-division multiplexing (TDM) passive optical network (PON). Full-duplex error-free communications between optical network units (ONUs) at 2.5 Gb/s are shown for all-optical virtual private network (VPN) applications. Compared to the conventional half-duplex communications setup, our scheme can increase the capacity by 100% with a power penalty of less than 3 dB. Synchronization of the two ONUs is not required for the proposed VPN scheme.
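
    The capacity doubling rests on the basic network-coding identity: the relay forwards the XOR of the two upstream frames in a single downstream slot, and each ONU cancels its own contribution. The paper realizes this optically; the frame-level logic can be sketched in a few lines (frame contents hypothetical):

```python
def relay(frame_a: bytes, frame_b: bytes) -> bytes:
    """The relay broadcasts the XOR of the two upstream frames."""
    return bytes(a ^ b for a, b in zip(frame_a, frame_b))

def recover(coded: bytes, own: bytes) -> bytes:
    """Each ONU XORs the broadcast with its own frame to obtain the other's."""
    return bytes(c ^ o for c, o in zip(coded, own))

onu1 = b"hello from ONU1!"
onu2 = b"reply from ONU2!"
coded = relay(onu1, onu2)            # one downstream slot carries both frames
assert recover(coded, onu1) == onu2  # ONU1 recovers ONU2's frame
assert recover(coded, onu2) == onu1  # and vice versa: capacity doubled
```

    Because XOR is its own inverse, one coded broadcast replaces two separate downstream transmissions, which is the 100% capacity gain reported above.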

  11. Progress in The Semantic Analysis of Scientific Code

    NASA Technical Reports Server (NTRS)

    Stewart, Mark

    2000-01-01

    This paper concerns a procedure that analyzes aspects of the meaning or semantics of scientific and engineering code. This procedure involves taking a user's existing code, adding semantic declarations for some primitive variables, and parsing this annotated code using multiple, independent expert parsers. These semantic parsers encode domain knowledge and recognize formulae in different disciplines including physics, numerical methods, mathematics, and geometry. The parsers will automatically recognize and document some static, semantic concepts and help locate some program semantic errors. These techniques may apply to a wider range of scientific codes. If so, the techniques could reduce the time, risk, and effort required to develop and modify scientific codes.
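
    A toy stand-in for one of the semantic checks such parsers can perform is dimensional analysis over annotated primitives: declare the dimensions of a few variables, then verify that expressions combine them consistently. The variable names and dimension encoding below are hypothetical, not the paper's annotation syntax:

```python
# Dimensions as {unit: exponent} maps for a few annotated primitives.
DIMS = {"mass": {"kg": 1}, "accel": {"m": 1, "s": -2}, "vel": {"m": 1, "s": -1}}

def mul(d1, d2):
    """Dimension of a product: unit exponents add."""
    out = dict(d1)
    for unit, exp in d2.items():
        out[unit] = out.get(unit, 0) + exp
    return {u: e for u, e in out.items() if e != 0}

def addable(d1, d2):
    """Addition is only meaningful between quantities of identical dimension."""
    return d1 == d2

force = mul(DIMS["mass"], DIMS["accel"])        # kg * m / s^2
assert force == {"kg": 1, "m": 1, "s": -2}
assert not addable(force, DIMS["vel"])          # 'force + vel' is a semantic error
```

    A real semantic parser walks the program's expression tree applying rules like these, which is how it can flag a statically detectable physics error without running the code.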

  12. CPIC: a curvilinear Particle-In-Cell code for plasma-material interaction studies

    NASA Astrophysics Data System (ADS)

    Delzanno, G.; Camporeale, E.; Moulton, J. D.; Borovsky, J. E.; MacDonald, E.; Thomsen, M. F.

    2012-12-01

    We present a recently developed Particle-In-Cell (PIC) code in curvilinear geometry called CPIC (Curvilinear PIC) [1], where the standard PIC algorithm is coupled with a grid generation/adaptation strategy. Through the grid generator, which maps the physical domain to a logical domain where the grid is uniform and Cartesian, the code can simulate domains of arbitrary complexity, including the interaction of complex objects with a plasma. At present the code is electrostatic. Poisson's equation (in logical space) can be solved with either an iterative method based on the Conjugate Gradient (CG) or the Generalized Minimal Residual (GMRES) coupled with a multigrid solver used as a preconditioner, or directly with multigrid. The multigrid strategy is critical for the solver to perform optimally or nearly optimally as the dimension of the problem increases. CPIC also features a hybrid particle mover, where the computational particles are characterized by position in logical space and velocity in physical space. The advantage of a hybrid mover, as opposed to more conventional movers that move particles directly in physical space, is that the interpolation of the particles in logical space is straightforward and computationally inexpensive, since one does not have to search for the position of the particle. We will present our latest progress on the development of the code and document the code performance on standard plasma-physics tests. Then we will present the (preliminary) application of the code to a basic dynamic-charging problem, namely the charging and shielding of a spherical spacecraft in a magnetized plasma for various levels of magnetization and including the pulsed emission of an electron beam from the spacecraft. The dynamical evolution of the sheath and the time-dependent current collection will be described. 
This study is in support of the ConnEx mission concept to use an electron beam from a magnetospheric spacecraft to trace magnetic field lines from the magnetosphere to the ionosphere [2]. [1] G.L. Delzanno, E. Camporeale, "CPIC: a new Particle-in-Cell code for plasma-material interaction studies", in preparation (2012). [2] J.E. Borovsky, D.J. McComas, M.F. Thomsen, J.L. Burch, J. Cravens, C.J. Pollock, T.E. Moore, and S.B. Mende, "Magnetosphere-Ionosphere Observatory (MIO): A multisatellite mission designed to solve the problem of what generates auroral arcs," Eos. Trans. Amer. Geophys. Union 79 (45), F744 (2000).
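
    The advantage of the hybrid mover can be seen in a minimal 1D sketch: if particles carry a logical coordinate xi in [0, N], the cell index and interpolation weight fall out of the coordinate itself, even though the physical grid x = g(xi) is nonuniform. The mapping g below is a hypothetical stretched grid, not CPIC's grid generator:

```python
import numpy as np

N = 8                                   # number of cells in logical space

def g(xi):
    """Logical -> physical map; this example clusters cells near x = 0."""
    return (xi / N) ** 2

def deposit(xi_particles):
    """Linear (CIC) charge deposition done entirely in logical space."""
    rho = np.zeros(N + 1)
    for xi in xi_particles:
        i = int(xi)                     # cell index: no search required
        w = xi - i                      # interpolation weight is just the fraction
        rho[i] += 1.0 - w
        rho[i + 1] += w
    return rho

xi_p = np.array([0.25, 3.5, 6.9])       # particle positions in logical space
rho = deposit(xi_p)
assert np.isclose(rho.sum(), len(xi_p))  # deposition conserves total charge
x_p = g(xi_p)                            # physical positions only when needed
```

    In a conventional mover on a curvilinear grid, the `int(xi)` line would instead be a search through nonuniform cells; moving that cost out of the inner loop is the point of the hybrid scheme.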

  13. Transversal Clifford gates on folded surface codes

    DOE PAGES

    Moussa, Jonathan E.

    2016-10-12

    Surface and color codes are two forms of topological quantum error correction in two spatial dimensions with complementary properties. Surface codes have lower-depth error detection circuits and well-developed decoders to interpret and correct errors, while color codes have transversal Clifford gates and better code efficiency in the number of physical qubits needed to achieve a given code distance. A formal equivalence exists between color codes and folded surface codes, but it does not guarantee the transferability of any of these favorable properties. However, the equivalence does imply the existence of constant-depth circuit implementations of logical Clifford gates on folded surface codes. We achieve and improve this result by constructing two families of folded surface codes with transversal Clifford gates. This construction is presented generally for qudits of any dimension. Lastly, the specific application of these codes to universal quantum computation based on qubit fusion is also discussed.

  14. Availability of physical activity-related facilities and neighborhood demographic and socioeconomic characteristics: a national study.

    PubMed

    Powell, Lisa M; Slater, Sandy; Chaloupka, Frank J; Harper, Deborah

    2006-09-01

    We examined associations between neighborhood demographic characteristics and the availability of commercial physical activity-related outlets by zip code across the United States. Multivariate analyses were conducted to assess the availability of 4 types of outlets: (1) physical fitness facilities, (2) membership sports and recreation clubs, (3) dance facilities, and (4) public golf courses. Commercial outlet data were linked by zip code to US Census Bureau population and socioeconomic data. Results showed that commercial physical activity-related facilities were less likely to be present in lower-income neighborhoods and in neighborhoods with higher proportions of African American residents, residents with Hispanic ethnicity, and residents of other racial minority backgrounds. In addition, these neighborhoods had fewer such facilities available. Lack of availability of facilities that enable and promote physical activity may, in part, underpin the lower levels of activity observed among populations of low socioeconomic status and minority backgrounds.

  15. Towards Realistic Implementations of a Majorana Surface Code.

    PubMed

    Landau, L A; Plugge, S; Sela, E; Altland, A; Albrecht, S M; Egger, R

    2016-02-05

    Surface codes have emerged as promising candidates for quantum information processing. Building on the previous idea to realize the physical qubits of such systems in terms of Majorana bound states supported by topological semiconductor nanowires, we show that the basic code operations, namely projective stabilizer measurements and qubit manipulations, can be implemented by conventional tunnel conductance probes and charge pumping via single-electron transistors, respectively. The simplicity of the access scheme suggests that a functional code might be in close experimental reach.

  16. A Multi-Scale, Multi-Physics Optimization Framework for Additively Manufactured Structural Components

    NASA Astrophysics Data System (ADS)

    El-Wardany, Tahany; Lynch, Mathew; Gu, Wenjiong; Hsu, Arthur; Klecka, Michael; Nardi, Aaron; Viens, Daniel

    This paper proposes an optimization framework enabling the integration of multi-scale / multi-physics simulation codes to perform structural optimization design for additively manufactured components. Cold spray was selected as the additive manufacturing (AM) process and its constraints were identified and included in the optimization scheme. The developed framework first utilizes topology optimization to maximize stiffness for conceptual design. The subsequent step applies shape optimization to refine the design for stress-life fatigue. The component weight was reduced by 20% while stresses were reduced by 75% and the rigidity was improved by 37%. The framework and analysis codes were implemented using Altair software as well as an in-house loading code. The optimized design was subsequently produced by the cold spray process.

  17. OPTIMASS: a package for the minimization of kinematic mass functions with constraints

    NASA Astrophysics Data System (ADS)

    Cho, Won Sang; Gainer, James S.; Kim, Doojin; Lim, Sung Hak; Matchev, Konstantin T.; Moortgat, Filip; Pape, Luc; Park, Myeonghun

    2016-01-01

    Reconstructed mass variables, such as M2, M2C, MT*, and MT2W, play an essential role in searches for new physics at hadron colliders. The calculation of these variables generally involves constrained minimization in a large parameter space, which is numerically challenging. We provide a C++ code, OPTIMASS, which interfaces with the Minuit library to perform this constrained minimization using the Augmented Lagrangian Method. The code can be applied to arbitrarily general event topologies, thus allowing the user to significantly extend the existing set of kinematic variables. We describe this code, explain its physics motivation, and demonstrate its use in the analysis of the fully leptonic decay of pair-produced top quarks using M2 variables.
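
    The Augmented Lagrangian Method itself is easy to illustrate on a toy problem. OPTIMASS is a C++ package built on Minuit; the sketch below is an independent NumPy reimplementation of the method on a made-up quadratic objective with one linear constraint:

```python
import numpy as np

# Minimize f(x, y) = (x - 2)^2 + (y - 1)^2  subject to  c(x, y) = x + y - 1 = 0.
z0 = np.array([2.0, 1.0])               # unconstrained minimum of f
a = np.array([1.0, 1.0])                # constraint gradient: c(z) = a.z - 1
c = lambda z: a @ z - 1.0

z, lam, mu = np.zeros(2), 0.0, 1.0
for _ in range(15):
    # Minimize L(z) = f(z) + lam*c(z) + (mu/2)*c(z)^2; since f is quadratic
    # and c is linear, one Newton step gives the exact subproblem minimizer.
    H = 2.0 * np.eye(2) + mu * np.outer(a, a)       # Hessian of L
    g = 2.0 * (z - z0) + (lam + mu * c(z)) * a      # gradient of L at z
    z = z - np.linalg.solve(H, g)
    lam += mu * c(z)                    # first-order multiplier update
    mu *= 2.0                           # tighten the quadratic penalty

# Lagrange conditions place the constrained minimum at (1, 0):
assert np.allclose(z, [1.0, 0.0], atol=1e-6)
```

    The pattern, an unconstrained subproblem per outer iteration plus a multiplier update, is the same one OPTIMASS applies, with Minuit playing the role of the subproblem solver over the invisible-particle momenta.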

  18. Fast decoder for local quantum codes using Groebner basis

    NASA Astrophysics Data System (ADS)

    Haah, Jeongwan

    2013-03-01

    Based on arXiv:1204.1063. A local translation-invariant quantum code has a description in terms of Laurent polynomials. As an application of this observation, we present a fast decoding algorithm for translation-invariant local quantum codes in any spatial dimension using the straightforward division algorithm for multivariate polynomials. The running time is O(n log n) on average, or O(n^2 log n) in the worst case, where n is the number of physical qubits. The algorithm improves a subroutine of the renormalization-group decoder by Bravyi and Haah (arXiv:1112.3252) in the translation-invariant case. This work is supported in part by the Institute for Quantum Information and Matter, an NSF Physics Frontier Center, and the Korea Foundation for Advanced Studies.

  19. Status of thermalhydraulic modelling and assessment: Open issues

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bestion, D.; Barre, F.

    1997-07-01

    This paper presents the status of the physical modelling in present codes used for Nuclear Reactor Thermalhydraulics (TRAC, RELAP 5, CATHARE, ATHLET,...) and attempts to list the unresolved or partially resolved issues. First, the capabilities and limitations of present codes are presented. They are mainly known from a synthesis of the assessment calculations performed for both separate effect tests and integral effect tests. It is also interesting to list all the assumptions and simplifications which were made in the establishment of the system of equations and of the constitutive relations. Many of the present limitations are associated with physical situations where these assumptions are not valid. Then, recommendations are proposed to extend the capabilities of these codes.

  20. Physical and numerical sources of computational inefficiency in integration of chemical kinetic rate equations: Etiology, treatment and prognosis

    NASA Technical Reports Server (NTRS)

    Pratt, D. T.; Radhakrishnan, K.

    1986-01-01

    The design of a very fast, automatic black-box code for homogeneous, gas-phase chemical kinetics problems requires an understanding of the physical and numerical sources of computational inefficiency. Some major sources reviewed in this report are stiffness of the governing ordinary differential equations (ODE's) and its detection, choice of appropriate method (i.e., integration algorithm plus step-size control strategy), nonphysical initial conditions, and too frequent evaluation of thermochemical and kinetic properties. Specific techniques are recommended (and some advised against) for improving or overcoming the identified problem areas. It is argued that, because reactive species increase exponentially with time during induction, and all species exhibit asymptotic, exponential decay with time during equilibration, exponential-fitted integration algorithms are inherently more accurate for kinetics modeling than classical, polynomial-interpolant methods for the same computational work. But current codes using the exponential-fitted method lack the sophisticated stepsize-control logic of existing black-box ODE solver codes, such as EPISODE and LSODE. The ultimate chemical kinetics code does not exist yet, but the general characteristics of such a code are becoming apparent.
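
    The stability problem that makes stiffness detection worthwhile can be demonstrated in a few lines. For the model problem y' = -50y (not an example from the report), explicit Euler at a step size comfortable for the slow dynamics diverges, while backward (implicit) Euler decays stably:

```python
# Model stiff problem: y' = -50*y, y(0) = 1, exact solution e^(-50*t).
# With step h = 0.1, explicit Euler multiplies y by (1 - 50h) = -4 each step
# and blows up; backward Euler divides by (1 + 50h) = 6 and decays stably.
h, steps = 0.1, 40
y_exp, y_imp = 1.0, 1.0
for _ in range(steps):
    y_exp = y_exp + h * (-50.0 * y_exp)   # explicit (forward) Euler
    y_imp = y_imp / (1.0 + 50.0 * h)      # implicit (backward) Euler

assert abs(y_exp) > 1e20                  # explicit Euler has diverged
assert 0.0 < y_imp < 1e-20                # implicit Euler decays toward zero
```

    This is why black-box solvers such as LSODE switch to implicit (or, as argued above, exponential-fitted) methods once stiffness is detected, rather than shrinking the step size to the stability limit of an explicit scheme.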

  1. POPCORN: A comparison of binary population synthesis codes

    NASA Astrophysics Data System (ADS)

    Claeys, J. S. W.; Toonen, S.; Mennekens, N.

    2013-01-01

    We compare the results of three binary population synthesis codes to understand the differences in their results. As a first result, we find that when the assumptions are equalized, the results are similar; the main differences arise from differing physical input.

  2. RELAP-7 Theory Manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berry, Ray Alden; Zou, Ling; Zhao, Haihua

    This document summarizes the physical models and mathematical formulations used in the RELAP-7 code. The MOOSE-based RELAP-7 code development is an ongoing effort; the MOOSE framework enables rapid development, and the developmental efforts and results demonstrate that the RELAP-7 project is on a path to success. This theory manual documents the main features implemented in the RELAP-7 code. Because the code is under active development, this RELAP-7 Theory Manual will evolve with periodic updates to keep it current with the state of the development, implementation, and model additions/revisions.

  3. Monte Carlo Modeling of the Initial Radiation Emitted by a Nuclear Device in the National Capital Region

    DTIC Science & Technology

    2013-07-01

    also simulated in the models. Data was derived from calculations using the three-dimensional Monte Carlo radiation transport code MCNP (Monte Carlo N-Particle). MCNP is a general-purpose code designed to simulate neutron, photon, and electron transport.

  4. Physical Education. Secondary

    ERIC Educational Resources Information Center

    Molosky, Gerald; And Others

    GRADES OR AGES: Grades 7-10. SUBJECT MATTER: Physical education. ORGANIZATION AND PHYSICAL APPEARANCE: The guide is divided into six color-coded units, one each for athletic skills and games, fitness testing and body mechanics, rhythmical activities, simple games and recreational activities, tumbling and apparatus, and swimming. It is mimeographed…

  5. SU-A-210-04: Panel Discussion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stanford, J.

    The purpose of this student annual meeting is to address topics that are becoming more relevant to medical physicists, but are not frequently addressed, especially for students and trainees just entering the field. The talk is divided into two parts: medical billing and regulations. Hsinshun Wu – Why should we learn radiation oncology billing? Many medical physicists do not like to be involved with medical billing or coding during their career. They believe billing is not their responsibility and sometimes they even refuse to participate in the billing process if given the chance. This presentation will talk about a physicist’s long career and share his own experience that knowing medical billing is not only important and necessary for every young medical physicist, but that good billing knowledge could provide a valuable contribution to his/her medical physics development. Learning Objectives: The audience will learn the basic definition of Current Procedural Terminology (CPT) codes performed in a Radiation Oncology Department. Understand the differences between hospital coding and physician-based or freestanding coding. Apply proper CPT coding for each Radiation Oncology procedure. Each procedure with its specific CPT code will be discussed in detail. The talk will focus on the process of care and use of actual workflow to understand each CPT code. Example coding of a typical Radiation Oncology procedure. Special procedure coding such as brachytherapy, proton therapy, radiosurgery, and SBRT. Maryann Abogunde – Medical physics opportunities at the Nuclear Regulatory Commission (NRC) The NRC’s responsibilities include the regulation of medical uses of byproduct (radioactive) materials and oversight of medical use end-users (licensees) through a combination of regulatory requirements, licensing, safety oversight including inspection and enforcement, operational experience evaluation, and regulatory support activities. 
This presentation will explore the career options for medical physicists in the NRC, how the NRC interacts with clinical medical physicists, and a physicist’s experience as a regulator. Learning Objectives: Explore non-clinical career pathways for medical physics students and trainees at the Nuclear Regulatory Commission. Overview of NRC medical applications and medical use regulations. Understand the skills needed for physicists as regulators. Abogunde is funded to attend the meeting by her employer, the NRC.

  6. SU-A-210-03: Panel Discussion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rodrigues, A.

    The purpose of this student annual meeting is to address topics that are becoming more relevant to medical physicists, but are not frequently addressed, especially for students and trainees just entering the field. The talk is divided into two parts: medical billing and regulations. Hsinshun Wu – Why should we learn radiation oncology billing? Many medical physicists do not like to be involved with medical billing or coding during their career. They believe billing is not their responsibility and sometimes they even refuse to participate in the billing process if given the chance. This presentation will talk about a physicist’s long career and share his own experience that knowing medical billing is not only important and necessary for every young medical physicist, but that good billing knowledge could provide a valuable contribution to his/her medical physics development. Learning Objectives: The audience will learn the basic definition of Current Procedural Terminology (CPT) codes performed in a Radiation Oncology Department. Understand the differences between hospital coding and physician-based or freestanding coding. Apply proper CPT coding for each Radiation Oncology procedure. Each procedure with its specific CPT code will be discussed in detail. The talk will focus on the process of care and use of actual workflow to understand each CPT code. Example coding of a typical Radiation Oncology procedure. Special procedure coding such as brachytherapy, proton therapy, radiosurgery, and SBRT. Maryann Abogunde – Medical physics opportunities at the Nuclear Regulatory Commission (NRC) The NRC’s responsibilities include the regulation of medical uses of byproduct (radioactive) materials and oversight of medical use end-users (licensees) through a combination of regulatory requirements, licensing, safety oversight including inspection and enforcement, operational experience evaluation, and regulatory support activities. 
This presentation will explore the career options for medical physicists in the NRC, how the NRC interacts with clinical medical physicists, and a physicist’s experience as a regulator. Learning Objectives: Explore non-clinical career pathways for medical physics students and trainees at the Nuclear Regulatory Commission. Overview of NRC medical applications and medical use regulations. Understand the skills needed for physicists as regulators. Abogunde is funded to attend the meeting by her employer, the NRC.

  7. Generation of ramp waves using variable areal density flyers

    NASA Astrophysics Data System (ADS)

    Winter, R. E.; Cotton, M.; Harris, E. J.; Chapman, D. J.; Eakins, D.

    2016-07-01

    Ramp loading using graded density impactors as flyers in gas-gun-driven plate impact experiments can yield new and useful information about the equation of state and the strength properties of the loaded material. Selective Laser Melting, an additive manufacturing technique, was used to manufacture a graded density flyer, termed the "bed-of-nails" (BON). A 2.5-mm-thick × 99.4-mm-diameter solid disc of stainless steel formed a base for an array of tapered spikes of length 5.5 mm and spaced 1 mm apart. The two experiments to test the concept were performed at impact velocities of 900 and 1100 m/s using the 100-mm gas gun at the Institute of Shock Physics at Imperial College London. In each experiment, a BON flyer was impacted onto a copper buffer plate which helped to smooth out perturbations in the wave profile. The ramp delivered to the copper buffer was in turn transmitted to three tantalum targets of thicknesses 3, 5 and 7 mm, which were mounted in contact with the back face of the copper. Heterodyne velocimetry (Het-V) was used to measure the velocity-time history at the back faces of the tantalum discs. The wave profiles display a smooth increase in velocity over a period of ~2.5 μs, with no indication of a shock jump. The measured profiles have been analysed to generate a stress vs. volume curve for tantalum. The results have been compared with the predictions of the Sandia National Laboratories hydrocode, CTH.
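    The step from measured ramp-wave profiles to a stress-volume curve is conventionally done with Lagrangian analysis: the wave speed of each particle-velocity level follows from its transit time between targets of different thickness, and the characteristic relations are then integrated. The sketch below is illustrative only, not the authors' analysis; it uses synthetic profiles built from a hypothetical linear wave speed c(u):

```python
# Illustrative Lagrangian ramp-wave analysis with SYNTHETIC data standing in
# for the Het-V records from the 3 mm and 5 mm tantalum discs.
import numpy as np

rho0 = 16650.0          # kg/m^3, initial density of tantalum
h1, h2 = 3e-3, 5e-3     # m, the two Lagrangian target thicknesses

# Hypothetical linear wave speed c(u) = c0 + s*u used to generate arrival
# times of each particle-velocity level u at the two depths.
c0, s = 3400.0, 1.2
u = np.linspace(0.0, 400.0, 200)        # m/s, particle-velocity levels
t1 = h1 / (c0 + s * u)                  # s, arrival times at depth h1
t2 = h2 / (c0 + s * u)                  # s, arrival times at depth h2

# Lagrangian sound speed of each level from its transit time between gauges.
cL = (h2 - h1) / (t2 - t1)

# Integrate the characteristic relations along the ramp:
#   d(sigma) = rho0 * cL * du    and    d(V/V0) = -du / cL
sigma = np.concatenate(([0.0], np.cumsum(rho0 * cL[:-1] * np.diff(u))))  # Pa
vv0 = 1.0 - np.concatenate(([0.0], np.cumsum(np.diff(u) / cL[:-1])))     # V/V0
```

With real data, `t1` and `t2` would be read off the measured velocity-time profiles; here they are generated, so `cL` recovers the assumed c(u) exactly.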

  8. An Experiment in Scientific Code Semantic Analysis

    NASA Technical Reports Server (NTRS)

    Stewart, Mark E. M.

    1998-01-01

    This paper concerns a procedure that analyzes aspects of the meaning or semantics of scientific and engineering code. This procedure involves taking a user's existing code, adding semantic declarations for some primitive variables, and parsing this annotated code using multiple, distributed expert parsers. These semantic parsers are designed to recognize formulae in different disciplines including physical and mathematical formulae and geometrical position in a numerical scheme. The parsers will automatically recognize and document some static, semantic concepts and locate some program semantic errors. Results are shown for a subroutine test case and a collection of combustion code routines. This ability to locate some semantic errors and document semantic concepts in scientific and engineering code should reduce the time, risk, and effort of developing and using these codes.
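    The record describes annotating primitive variables with semantic declarations so that parsers can flag semantic errors statically. As an illustration only (not the paper's parser), the sketch below shows the simplest such check, dimensional consistency, using hypothetical unit tags:

```python
# Illustrative sketch: a value tagged with base-unit exponents, so that
# unit-inconsistent operations can be caught as static semantic errors.
from collections import Counter

class Quantity:
    """A value with base-unit exponents, e.g. {'m': 1, 's': -1} for m/s."""
    def __init__(self, value, units):
        self.value = value
        self.units = dict(units)

    def __mul__(self, other):
        # Multiplication adds unit exponents; drop zero exponents.
        u = Counter(self.units)
        u.update(other.units)
        return Quantity(self.value * other.value,
                        {k: v for k, v in u.items() if v})

    def __add__(self, other):
        # The kind of semantic error an analyzer can flag: adding
        # quantities whose units do not match.
        if self.units != other.units:
            raise TypeError(f"unit mismatch: {self.units} vs {other.units}")
        return Quantity(self.value + other.value, self.units)

velocity = Quantity(3.0, {'m': 1, 's': -1})
time = Quantity(2.0, {'s': 1})
distance = velocity * time   # units combine to {'m': 1}
```

Multiplying quantities combines exponents, while addition of mismatched units fails immediately; this mirrors the class of static semantic errors the paper's distributed parsers aim to locate.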

  9. Using Long-term Satellite Observations to Identify Sensitive Regimes and Active Regions of Aerosol Indirect Effects for Liquid Clouds over Global Oceans

    DOE PAGES

    Zhao, Xuepeng; Liu, Yangang; Yu, Fangquan; ...

    2017-11-16

    Long-term (1981-2011) satellite climate data records (CDRs) of clouds and aerosols are used to investigate the aerosol-cloud interaction of marine water cloud from a climatology perspective. Our focus is on identifying the regimes and regions where the aerosol indirect effect (AIE) is evident in long-term averages over the global oceans through analyzing the correlation features between aerosol loading and the key cloud variables including cloud droplet effective radius (CDER), cloud optical depth (COD), cloud water path (CWP), cloud top height (CTH), and cloud top temperature (CTT). An aerosol optical thickness (AOT) range of 0.13 < AOT < 0.3 is identified as the sensitive regime of the conventional first AIE where CDER is more susceptible to AOT than the other cloud variables. The first AIE that manifests as the change of long-term averaged CDER appears only in limited oceanic regions. The signature of aerosol invigoration of water clouds as revealed by the increase of cloud cover fraction (CCF) and CTH with increasing AOT at the middle/high latitudes of both hemispheres is identified for a pristine atmosphere (AOT < 0.08). The aerosol invigoration signature is also revealed by the concurrent increase of CDER, COD, and CWP with increasing AOT for a polluted marine atmosphere (AOT > 0.3) in the tropical convergence zones. The regions where the second AIE is likely to manifest in the CCF change are limited to several oceanic areas with high CCF of the warm water clouds near the western coasts of continents. The second AIE signature, represented by the reduction of the precipitation efficiency with increasing AOT, is more likely to be observed in the AOT regime of 0.08 < AOT < 0.4. The corresponding active AIE regions, manifested as a decline of the precipitation efficiency, are mainly limited to the oceanic areas downwind of continental aerosols. 
Furthermore, the sensitive regime of the conventional AIE identified in this observational study is likely associated with the transitional regime from the aerosol-limited regime to the updraft-limited regime identified for aerosol-cloud interaction in cloud model simulations.

  10. Modeling momentum transfer by the DART spacecraft into the moon of Didymos

    NASA Astrophysics Data System (ADS)

    Stickle, Angela M.; Atchison, Justin A.; Barnouin, Olivier S.; Cheng, Andy F.; Ernst, Carolyn M.; Richardson, Derek C.; Rivkin, Andy S.

    2015-11-01

    The Asteroid Impact and Deflection Assessment (AIDA) mission is a joint concept between NASA and ESA designed to test the effectiveness of a kinetic impactor in deflecting an asteroid. The mission is composed of two independent, but mutually supportive, components: the NASA-led Double Asteroid Redirect Test (DART), and the ESA-led Asteroid Impact Monitoring (AIM) mission. The spacecraft will be sent to the near-Earth binary asteroid 65803 Didymos, which makes unusually close approaches to Earth in 2022 and 2024. These close approaches make it an ideal target for a kinetic impactor asteroid deflection demonstration, as it will be easily observable from Earth-based observatories. The ~2 m³, 300 kg DART spacecraft will impact the moon of the binary system at 6.25 km/s. The deflection of the moon will then be determined by the orbiting AIM spacecraft and from ground-based observations by measuring the change in the moon’s orbital period. A modeling study supporting this mission concept was performed to determine the expected momentum transfer to the moon following impact. The combination of CTH hydrocode models, analytical scaling predictions, and N-body pkdgrav simulations helps to constrain the expected results of the kinetic impactor experiment. To better understand the large parameter space (including material strength, porosity, impact location and angle), simulations of the DART impact were performed using the CTH hydrocode. The resultant crater size, velocity imparted to the moon, and momentum transfer were calculated for all cases. For “realistic” asteroid types, simulated DART impacts produce craters with diameters on the order of 10 m, an imparted Δv of 0.5-2 mm/s and a dimensionless momentum enhancement (“beta factor”) of 1.07-5 for targets ranging from a highly porous aggregate to a fully dense rock. These results generally agree with predictions from theoretical and analytical studies. 
Following impact, pkdgrav simulations of the system evolution track changes in the orbital period of the moon and examine the effects of the shapes of Didymos and its moon on the deflection. These simulations indicate that the shapes of the bodies can influence the subsequent dynamics of the moon.
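    The quoted Δv and "beta factor" ranges can be connected by simple momentum bookkeeping: M·Δv = β·m·U, with all ejecta momentum folded into β. The sketch below is a back-of-envelope check, not part of the CTH study, and the moon mass is an assumed, illustrative value not given in the record:

```python
# Back-of-envelope momentum-transfer check for a DART-like kinetic impact.
m_sc = 300.0        # kg, DART spacecraft mass (from the record)
u_sc = 6250.0       # m/s, impact speed (from the record)
m_moon = 4.8e9      # kg, ASSUMED moon mass, for illustration only

def delta_v_mm_per_s(beta):
    """Moon speed change from M * dv = beta * m * U, returned in mm/s."""
    return beta * m_sc * u_sc / m_moon * 1e3

low = delta_v_mm_per_s(1.07)   # porous-aggregate end of the simulated range
high = delta_v_mm_per_s(5.0)   # fully-dense-rock end of the range
```

With this assumed moon mass, β = 1.07-5 maps to roughly 0.4-2 mm/s, consistent with the 0.5-2 mm/s range reported from the simulations.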

  11. Government control over health-related not-for-profit organisations: Agency for International Development v. Alliance for Open Society International Inc 570 US_(2013).

    PubMed

    Vines, Tim; Donohoo, Angus M; Faunce, Thomas

    2013-12-01

    The relationship between government and the not-for-profit (NFP) sector has important implications for society, especially in relation to the delivery of public health measures and the protection of the environment. In key health-related areas such as provision of medical services, welfare, foreign aid and education, governments have traditionally preferred for the NFP sector to act as service partners, with the relationship mediated through grants or funding agreements. This service delivery arrangement is intended to provide a diversity of voices, and encourage volunteerism and altruism, in conjunction with the purposes and objectives of the relevant NGO. Under the pretence of "accountability", however, governments increasingly are seeking to impose intrusive conditions on grantees, which limit their ability to fulfil their mission and advocate on behalf of their constituents. This column examines the United States Supreme Court decision, Agency for International Development v Alliance for Open Society International Inc 570 US_(2013), and compares it to the removal of gag clauses in Australian federal funding rules. Recent national changes to the health-related NFP sector in Australia are then discussed, such as those found in the Charities Act 2013 (Cth) and the Not-for-Profit Sector Freedom to Advocate Act 2013 (Cth). These respectively include the establishment of the Australian Charities and Not-For-Profit Commission, the modernising of the definition of "charity" and statutory blocks on "gag" clauses. This analysis concludes with a survey of recent moves by Australian States to impose new restrictions on the ability of health-related NFPs to lobby against harmful government policy. Among the responses considered is the protection afforded by s 51(xxiiiA) of the Australian Constitution. 
This constitutional guarantee appears to have been focused historically on preventing medical and dental practitioners and related small businesses being practically coerced into government or large-scale private corporate operations. As such, it may prohibit civil conscription arising not only from "gag clauses" in managed care contracts, but also from "gag clauses" in governmental ideological controls over taxpayer-funded, health-related NFPs.

  12. EPIC/DSCOVR's Oxygen Absorption Channels: A Cloud Profiling Information Content Analysis

    NASA Astrophysics Data System (ADS)

    Davis, A. B.; Merlin, G.; Labonnote, L. C.; Cornet, C.; Dubuisson, P.; Ferlay, N.; Parol, F.; Riedi, J.; Yang, Y.

    2016-12-01

    EPIC/DSCOVR has several spectral channels dedicated to cloud characterization, most notably O2 A- and B-band. Differential optical absorption spectroscopy (DOAS) ratios of in-band and reference channels are less prone to calibration error than the 4 individual signals. Using these ratios, we have replicated for mono-directional (quasi-backscattering) EPIC observations the recent cloud information content analysis by Merlin et al. (AMT-D,8:12709-12758,2015) that was focused on A-band-only but multi-angle observations by POLDER in the past, by AirMSPI in the present, and by 3MI and MAIA in the future. The methodology is based on extensive forward 1D radiative transfer (RT) computations using the ARTDECO model that implements a k-distribution technique for the absorbing (in-band) channels. These synthetic signals are combined into a Bayesian Rodgers-type framework for estimating posterior uncertainty on retrieved quantities. Recall that this formalism calls explicitly for: (1) estimates of instrument error, and (2) prior uncertainty on the retrieved quantities, to which we add (3) reasonable estimates of uncertainty in the non- or otherwise-retrieved properties. Wide ranges of cloud top heights (CTHs) and cloud geometrical thicknesses (CGTs) are examined for a representative selection of cloud optical thicknesses (COTs), solar angles, and surface reflectances. We found that CTH should be reliably retrieved from EPIC data under most circumstances as long as COT can be inferred from non-absorbing channels, and the bias from in-cloud absorption is removed. However, CGT will be hard to determine unless CTH is constrained by independent means. EPIC has several UV channels that could be brought to bear. These findings conflict with those of Yang et al. (JQSRT,122:141-149,2013), so we also revisit that more preliminary study that did not account for a realistic level of residual instrument noise in the DOAS ratios. 
In conclusion, we believe that the present information content analysis will inform the EPIC/DSCOVR Level 2 algorithm development team about what cloud properties to target using the A/B-band channels, depending on the availability of other cloud information.
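    The Rodgers-type posterior-uncertainty estimate mentioned above has a compact standard form: the posterior covariance is (Kᵀ Sₑ⁻¹ K + Sₐ⁻¹)⁻¹ for Jacobian K, measurement-noise covariance Sₑ, and prior covariance Sₐ. The numbers below are invented solely to mirror the qualitative conclusion (CTH well constrained, CGT not), and are not from the study:

```python
# Minimal Rodgers-type posterior-uncertainty sketch with INVENTED numbers.
import numpy as np

# State vector x = (CTH, CGT) in km; two DOAS-ratio "measurements".
K = np.array([[0.05, 0.005],       # d(ratio_A)/dx, hypothetical Jacobian row
              [0.04, 0.004]])      # d(ratio_B)/dx, hypothetical Jacobian row
S_e = (0.005 ** 2) * np.eye(2)     # measurement-noise covariance
S_a = (2.0 ** 2) * np.eye(2)       # prior covariance: 2 km std on each state

# Posterior covariance: S_hat = (K^T S_e^-1 K + S_a^-1)^-1
S_hat = np.linalg.inv(K.T @ np.linalg.inv(S_e) @ K + np.linalg.inv(S_a))
post_std = np.sqrt(np.diag(S_hat))   # [std(CTH), std(CGT)] in km
```

Because the two rows of this Jacobian are nearly proportional, the retrieval sharpens the CTH uncertainty well below its prior while the CGT uncertainty stays close to the prior, the same qualitative behaviour the abstract reports.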

  13. Development of a core Clostridium thermocellum kinetic metabolic model consistent with multiple genetic perturbations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dash, Satyakam; Khodayari, Ali; Zhou, Jilai

    Background. Clostridium thermocellum is a Gram-positive anaerobe with the ability to hydrolyze and metabolize cellulose into biofuels such as ethanol, making it an attractive candidate for consolidated bioprocessing (CBP). At present, metabolic engineering in C. thermocellum is hindered due to the incomplete description of its metabolic repertoire and regulation within a predictive metabolic model. Genome-scale metabolic (GSM) models augmented with kinetic models of metabolism have been shown to be effective at recapitulating perturbed metabolic phenotypes. Results. In this effort, we first update a second-generation genome-scale metabolic model (iCth446) for C. thermocellum by correcting cofactor dependencies, restoring elemental and charge balances, and updating GAM and NGAM values to improve phenotype predictions. The iCth446 model is next used as a scaffold to develop a core kinetic model (k-ctherm118) of the C. thermocellum central metabolism using the Ensemble Modeling (EM) paradigm. Model parameterization is carried out by simultaneously imposing fermentation yield data in lactate, malate, acetate, and hydrogen production pathways for 19 measured metabolites spanning a library of 19 distinct single and multiple gene knockout mutants along with 18 intracellular metabolite concentration data for a Δgldh mutant and ten experimentally measured Michaelis–Menten kinetic parameters. Conclusions. The k-ctherm118 model captures significant metabolic changes caused by (1) nitrogen limitation leading to increased yields for lactate, pyruvate, and amino acids, and (2) ethanol stress causing an increase in intracellular sugar phosphate concentrations (~1.5-fold) due to upregulation of cofactor pools. Robustness analysis of k-ctherm118 alludes to the presence of a secondary activity of ketol-acid reductoisomerase and possible regulation by valine and/or leucine pool levels. 
In addition, cross-validation and robustness analysis allude to missing elements in k-ctherm118 and suggest additional experiments to improve kinetic model prediction fidelity. Overall, the study quantitatively assesses the advantages of EM-based kinetic modeling towards improved prediction of C. thermocellum metabolism and develops a predictive kinetic model which can be used to design biofuel-overproducing strains.

  14. Using Long-Term Satellite Observations to Identify Sensitive Regimes and Active Regions of Aerosol Indirect Effects for Liquid Clouds Over Global Oceans

    NASA Astrophysics Data System (ADS)

    Zhao, Xuepeng; Liu, Yangang; Yu, Fangquan; Heidinger, Andrew K.

    2018-01-01

    Long-term (1981-2011) satellite climate data records of clouds and aerosols are used to investigate the aerosol-cloud interaction of marine water cloud from a climatology perspective. Our focus is on identifying the regimes and regions where the aerosol indirect effects (AIEs) are evident in long-term averages over the global oceans through analyzing the correlation features between aerosol loading and the key cloud variables including cloud droplet effective radius (CDER), cloud optical depth (COD), cloud water path (CWP), cloud top height (CTH), and cloud top temperature (CTT). An aerosol optical thickness (AOT) range of 0.13 < AOT < 0.3 is identified as the sensitive regime of the conventional first AIE where CDER is more susceptible to AOT than the other cloud variables. The first AIE that manifests as the change of long-term averaged CDER appears only in limited oceanic regions. The signature of aerosol invigoration of water clouds as revealed by the increase of cloud cover fraction (CCF) and CTH with increasing AOT at the middle/high latitudes of both hemispheres is identified for a pristine atmosphere (AOT < 0.08). The aerosol invigoration signature is also revealed by the concurrent increase of CDER, COD, and CWP with increasing AOT for a polluted marine atmosphere (AOT > 0.3) in the tropical convergence zones. The regions where the second AIE is likely to manifest in the CCF change are limited to several oceanic areas with high CCF of the warm water clouds near the western coasts of continents. The second AIE signature, represented by the reduction of the precipitation efficiency with increasing AOT, is more likely to be observed in the AOT regime of 0.08 < AOT < 0.4. The corresponding active AIE regions, manifested as a decline of the precipitation efficiency, are mainly limited to the oceanic areas downwind of continental aerosols. 
The sensitive regime of the conventional AIE identified in this observational study is likely associated with the transitional regime from the aerosol-limited regime to the updraft-limited regime identified for aerosol-cloud interaction in cloud model simulations.

  15. Using Long-Term Satellite Observations to Identify Sensitive Regimes and Active Regions of Aerosol Indirect Effects for Liquid Clouds Over Global Oceans.

    PubMed

    Zhao, Xuepeng; Liu, Yangang; Yu, Fangquan; Heidinger, Andrew K

    2018-01-16

    Long-term (1981-2011) satellite climate data records of clouds and aerosols are used to investigate the aerosol-cloud interaction of marine water cloud from a climatology perspective. Our focus is on identifying the regimes and regions where the aerosol indirect effects (AIEs) are evident in long-term averages over the global oceans through analyzing the correlation features between aerosol loading and the key cloud variables including cloud droplet effective radius (CDER), cloud optical depth (COD), cloud water path (CWP), cloud top height (CTH), and cloud top temperature (CTT). An aerosol optical thickness (AOT) range of 0.13 < AOT < 0.3 is identified as the sensitive regime of the conventional first AIE where CDER is more susceptible to AOT than the other cloud variables. The first AIE that manifests as the change of long-term averaged CDER appears only in limited oceanic regions. The signature of aerosol invigoration of water clouds as revealed by the increase of cloud cover fraction (CCF) and CTH with increasing AOT at the middle/high latitudes of both hemispheres is identified for a pristine atmosphere (AOT < 0.08). The aerosol invigoration signature is also revealed by the concurrent increase of CDER, COD, and CWP with increasing AOT for a polluted marine atmosphere (AOT > 0.3) in the tropical convergence zones. The regions where the second AIE is likely to manifest in the CCF change are limited to several oceanic areas with high CCF of the warm water clouds near the western coasts of continents. The second AIE signature, represented by the reduction of the precipitation efficiency with increasing AOT, is more likely to be observed in the AOT regime of 0.08 < AOT < 0.4. The corresponding active AIE regions, manifested as a decline of the precipitation efficiency, are mainly limited to the oceanic areas downwind of continental aerosols. 
The sensitive regime of the conventional AIE identified in this observational study is likely associated with the transitional regime from the aerosol-limited regime to the updraft-limited regime identified for aerosol-cloud interaction in cloud model simulations.

  16. Using Long-term Satellite Observations to Identify Sensitive Regimes and Active Regions of Aerosol Indirect Effects for Liquid Clouds over Global Oceans

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao, Xuepeng; Liu, Yangang; Yu, Fangquan

    Long-term (1981-2011) satellite climate data records (CDRs) of clouds and aerosols are used to investigate the aerosol-cloud interaction of marine water cloud from a climatology perspective. Our focus is on identifying the regimes and regions where the aerosol indirect effect (AIE) is evident in long-term averages over the global oceans through analyzing the correlation features between aerosol loading and the key cloud variables including cloud droplet effective radius (CDER), cloud optical depth (COD), cloud water path (CWP), cloud top height (CTH), and cloud top temperature (CTT). An aerosol optical thickness (AOT) range of 0.13 < AOT < 0.3 is identified as the sensitive regime of the conventional first AIE where CDER is more susceptible to AOT than the other cloud variables. The first AIE that manifests as the change of long-term averaged CDER appears only in limited oceanic regions. The signature of aerosol invigoration of water clouds as revealed by the increase of cloud cover fraction (CCF) and CTH with increasing AOT at the middle/high latitudes of both hemispheres is identified for a pristine atmosphere (AOT < 0.08). The aerosol invigoration signature is also revealed by the concurrent increase of CDER, COD, and CWP with increasing AOT for a polluted marine atmosphere (AOT > 0.3) in the tropical convergence zones. The regions where the second AIE is likely to manifest in the CCF change are limited to several oceanic areas with high CCF of the warm water clouds near the western coasts of continents. The second AIE signature, represented by the reduction of the precipitation efficiency with increasing AOT, is more likely to be observed in the AOT regime of 0.08 < AOT < 0.4. The corresponding active AIE regions, manifested as a decline of the precipitation efficiency, are mainly limited to the oceanic areas downwind of continental aerosols. 
Furthermore, the sensitive regime of the conventional AIE identified in this observational study is likely associated with the transitional regime from the aerosol-limited regime to the updraft-limited regime identified for aerosol-cloud interaction in cloud model simulations.

  17. Development of a core Clostridium thermocellum kinetic metabolic model consistent with multiple genetic perturbations

    DOE PAGES

    Dash, Satyakam; Khodayari, Ali; Zhou, Jilai; ...

    2017-05-02

    Background. Clostridium thermocellum is a Gram-positive anaerobe with the ability to hydrolyze and metabolize cellulose into biofuels such as ethanol, making it an attractive candidate for consolidated bioprocessing (CBP). At present, metabolic engineering in C. thermocellum is hindered due to the incomplete description of its metabolic repertoire and regulation within a predictive metabolic model. Genome-scale metabolic (GSM) models augmented with kinetic models of metabolism have been shown to be effective at recapitulating perturbed metabolic phenotypes. Results. In this effort, we first update a second-generation genome-scale metabolic model (iCth446) for C. thermocellum by correcting cofactor dependencies, restoring elemental and charge balances, and updating GAM and NGAM values to improve phenotype predictions. The iCth446 model is next used as a scaffold to develop a core kinetic model (k-ctherm118) of the C. thermocellum central metabolism using the Ensemble Modeling (EM) paradigm. Model parameterization is carried out by simultaneously imposing fermentation yield data in lactate, malate, acetate, and hydrogen production pathways for 19 measured metabolites spanning a library of 19 distinct single and multiple gene knockout mutants along with 18 intracellular metabolite concentration data for a Δgldh mutant and ten experimentally measured Michaelis–Menten kinetic parameters. Conclusions. The k-ctherm118 model captures significant metabolic changes caused by (1) nitrogen limitation leading to increased yields for lactate, pyruvate, and amino acids, and (2) ethanol stress causing an increase in intracellular sugar phosphate concentrations (~1.5-fold) due to upregulation of cofactor pools. Robustness analysis of k-ctherm118 alludes to the presence of a secondary activity of ketol-acid reductoisomerase and possible regulation by valine and/or leucine pool levels. 
In addition, cross-validation and robustness analysis allude to missing elements in k-ctherm118 and suggest additional experiments to improve kinetic model prediction fidelity. Overall, the study quantitatively assesses the advantages of EM-based kinetic modeling towards improved prediction of C. thermocellum metabolism and develops a predictive kinetic model which can be used to design biofuel-overproducing strains.
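    For readers unfamiliar with the Michaelis-Menten kinetic parameters the model is fit against, the irreversible rate law itself is one line. The sketch below uses hypothetical parameter values and is not drawn from k-ctherm118:

```python
# Illustrative irreversible Michaelis-Menten rate law with HYPOTHETICAL
# parameters: v = Vmax * S / (Km + S).
def mm_rate(s, vmax=10.0, km=0.5):
    """Reaction rate for substrate concentration s (same units as km)."""
    return vmax * s / (km + s)

half = mm_rate(0.5)       # at S = Km the rate is exactly Vmax / 2
sat = mm_rate(500.0)      # far above Km the rate saturates near Vmax
```

Fitting Vmax and Km to measured rates at a few substrate concentrations is what "experimentally measured Michaelis-Menten kinetic parameters" refers to; ensemble modeling samples many such parameter sets consistent with the data.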

  18. Relaunch of the Interactive Plasma Physics Educational Experience (IPPEX)

    NASA Astrophysics Data System (ADS)

    Dominguez, A.; Rusaitis, L.; Zwicker, A.; Stotler, D. P.

    2015-11-01

    In the late 1990s, PPPL's Science Education Department developed an innovative online site called the Interactive Plasma Physics Educational Experience (IPPEX). It featured (among other modules) two Java-based applications that simulated tokamak physics: a steady-state tokamak (SST) and a time-dependent tokamak (TDT). The physics underlying the SST and the TDT is based on the ASPECT code, a global power balance code developed to evaluate the performance of fusion reactor designs. We have relaunched the IPPEX site with updated modules and functionality: the site itself is now dynamic on all platforms; the graphic design of the site has been brought up to current standards; the virtual tokamak has been reimplemented in JavaScript, taking advantage of the speed and compactness of the code; and the GUI of the tokamak has been completely redesigned, including more intuitive representations of changes in the plasma, e.g., particles moving along magnetic field lines. The use of GPU-accelerated computation provides accurate and smooth visual representations of the plasma. We will present the current version of IPPEX as well as near-term plans for incorporating real-time NSTX-U data into the simulation.
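    The record does not give ASPECT's equations, but the kind of steady-state global power balance such a code evaluates can be sketched in a few lines: heating power balances stored energy lost over the confinement time, P_heat = W / tau_E, with W = 3 n k T V for an equal-temperature electron-ion plasma. All plasma parameters below are assumed, illustrative values:

```python
# Hedged sketch of a steady-state global power balance; NOT ASPECT itself.
KEV = 1.602e-16          # J per keV

n = 1.0e20               # m^-3, plasma density (assumed)
T_keV = 10.0             # keV, plasma temperature (assumed)
V = 100.0                # m^3, plasma volume (assumed)
P_heat = 50.0e6          # W, total heating power (assumed)

# Stored thermal energy of an equal-temperature electron-ion plasma:
# W = 3 * n * k * T * V  (ideal-gas 3/2 nkT per species, two species)
W = 3.0 * n * T_keV * KEV * V

# In steady state P_heat = W / tau_E, so the implied confinement time is:
tau_E = W / P_heat       # s
```

Solving the balance the other way (given tau_E, find the heating power or the achievable temperature) is the basic iteration a reactor-performance evaluation performs.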

  19. [INVITED] Luminescent QR codes for smart labelling and sensing

    NASA Astrophysics Data System (ADS)

    Ramalho, João F. C. B.; António, L. C. F.; Correia, S. F. H.; Fu, L. S.; Pinho, A. S.; Brites, C. D. S.; Carlos, L. D.; André, P. S.; Ferreira, R. A. S.

    2018-05-01

    QR (Quick Response) codes are two-dimensional barcodes composed of special geometric patterns of black modules on a white square background that can encode different types of information with high density and robustness, correct errors and physical damage, and thus keep the stored information protected. Recently, these codes have gained increased attention as they offer a simple physical tool for quick access to Web sites for advertising and social interaction. Challenges encompass increasing the storage capacity limit, even though they can store approximately 350 times more information than common barcodes and encode different types of characters (e.g., numeric, alphanumeric, kanji and kana). In this work, we fabricate luminescent QR codes based on a poly(methyl methacrylate) substrate coated with organic-inorganic hybrid materials doped with trivalent terbium (Tb3+) and europium (Eu3+) ions, demonstrating a twofold increase of storage capacity per unit area through colour multiplexing, when compared to conventional QR codes. A novel methodology to decode the multiplexed QR codes is developed based on a colour separation threshold, where a decision level is calculated through a maximum-likelihood criterion to minimize the error probability of the demultiplexed modules, maximizing the foreseen total storage capacity. Moreover, the thermal dependence of the emission colour coordinates of the Eu3+/Tb3+-based hybrids enables simultaneous QR code colour multiplexing and temperature sensing (reproducibility higher than 93%), opening new fields of application for QR codes as smart labels for sensing.
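    As a simplified stand-in for the paper's decision-level method, colour demultiplexing can be sketched as assigning each module's measured colour to the nearest reference colour, which is the maximum-likelihood choice under isotropic Gaussian noise. The reference colours below are hypothetical, not the measured Eu3+/Tb3+ emission coordinates:

```python
# Illustrative nearest-colour (ML under isotropic Gaussian noise) demultiplexer
# for two QR data layers encoded in colour. Reference RGB values are made up.
import numpy as np

refs = {
    (0, 0): np.array([255, 255, 255]),  # neither layer set: background
    (1, 0): np.array([220, 40, 40]),    # only layer A set (e.g. red emitter)
    (0, 1): np.array([40, 220, 40]),    # only layer B set (e.g. green emitter)
    (1, 1): np.array([220, 220, 40]),   # both layers set: mixed colour
}

def demux(pixel):
    """Return the (bit_A, bit_B) pair whose reference colour is closest."""
    return min(refs, key=lambda bits: np.linalg.norm(pixel - refs[bits]))
```

Each classified module contributes one bit to each of the two multiplexed QR layers, which is where the factor-of-two capacity gain comes from.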

  20. Benchmarking the Multidimensional Stellar Implicit Code MUSIC

    NASA Astrophysics Data System (ADS)

    Goffrey, T.; Pratt, J.; Viallet, M.; Baraffe, I.; Popov, M. V.; Walder, R.; Folini, D.; Geroux, C.; Constantino, T.

    2017-04-01

    We present the results of a numerical benchmark study for the MUltidimensional Stellar Implicit Code (MUSIC) based on widely applicable two- and three-dimensional compressible hydrodynamics problems relevant to stellar interiors. MUSIC is an implicit large eddy simulation code that uses implicit time integration, implemented as a Jacobian-free Newton-Krylov method. A physics-based preconditioning technique, which can be adjusted to target varying physics, is used to improve the performance of the solver. The problems used for this benchmark study include the Rayleigh-Taylor and Kelvin-Helmholtz instabilities and the decay of the Taylor-Green vortex. Additionally, we show a test of hydrostatic equilibrium in a stellar environment dominated by radiative effects; in this setting the flexibility of the preconditioning technique is demonstrated. This work aims to bridge the gap between the hydrodynamic test problems typically used during the development of numerical methods and the complex flows of stellar interiors. A series of multidimensional tests were performed, and each test case was analysed with a simple scalar diagnostic, with the aim of enabling direct code comparisons. As the tests performed do not have analytic solutions, we verify MUSIC by comparing it to established codes including ATHENA and the PENCIL code. MUSIC is able both to reproduce the behaviour of established and widely used codes and to recover results expected from theoretical predictions. This benchmarking study concludes a series of papers describing the development of the MUSIC code and provides confidence in future applications.
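
One common example of such a simple scalar diagnostic is the domain-averaged kinetic energy, whose decay in time is a standard single-number summary for comparing codes on problems like the Taylor-Green vortex. The sketch below is my own minimal illustration on a uniform 2D grid, not MUSIC's actual diagnostic routine:

```python
def mean_kinetic_energy(rho, u, v):
    """Domain-averaged kinetic energy 0.5*rho*(u^2 + v^2) on a uniform 2D grid.

    rho, u, v are equally shaped 2D lists (density and velocity components);
    on a uniform mesh the volume average reduces to a plain arithmetic mean,
    giving one scalar per snapshot that different codes can compare directly.
    """
    total, count = 0.0, 0
    for r_row, u_row, v_row in zip(rho, u, v):
        for r, uu, vv in zip(r_row, u_row, v_row):
            total += 0.5 * r * (uu * uu + vv * vv)
            count += 1
    return total / count

# Uniform unit density and uniform velocity (1, 0): mean KE is exactly 0.5.
ke = mean_kinetic_energy([[1.0] * 4] * 4, [[1.0] * 4] * 4, [[0.0] * 4] * 4)
```

Tracking this scalar over a sequence of snapshots produces the kind of decay curve that makes cross-code comparison possible without sharing full field data.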

  1. Student Use of Physics to Make Sense of Incomplete but Functional VPython Programs in a Lab Setting

    NASA Astrophysics Data System (ADS)

    Weatherford, Shawn A.

    2011-12-01

    Computational activities in Matter & Interactions, an introductory calculus-based physics course, have the instructional goal of giving students experience in applying a small set of fundamental principles to model a wide range of physical systems. However, building computer programs under limited time constraints poses significant instructional challenges, especially for students who are unfamiliar with programming languages and concepts. Prior attempts at designing effective computational activities succeeded in having students ultimately build working VPython programs under the tutelage of experienced teaching assistants in a studio lab setting. A pilot study revealed that students who completed these computational activities had significant difficulty repeating the same tasks and, further, had difficulty predicting the animation that an example program would produce after interpreting its code. This study explores the interpretation and prediction tasks as part of an instructional sequence in which students are asked to read and comprehend a functional but incomplete program. Rather than asking students to begin their computational tasks by modifying program code, we explicitly ask students to interpret an existing program that is missing key lines of code. The missing lines correspond to the algebraic form of fundamental physics principles or to the calculation of forces that would exist between analogous physical objects in the natural world. Students are then asked to draw a prediction of what they would see in the simulation produced by the VPython program and ultimately run the program to evaluate their prediction. This study specifically looks at how the participants use physics while interpreting the program code and creating a whiteboard prediction. 
This study also examines how students evaluate their understanding of the program and modification goals at the beginning of the modification task. While working in groups over the course of a semester, study participants were recorded while they completed three activities using these incomplete programs. Analysis of the video data showed that study participants had little difficulty interpreting physics quantities, generating a prediction, or determining how to modify the incomplete program. Participants did not base their predictions solely on the information in the incomplete program. When participants tried to predict the motion of the objects in the simulation, many turned to their knowledge of how the system would evolve if it represented an analogous real-world physical system. For example, participants attributed the real-world behavior of springs to helix objects even though the program did not include calculations for the spring to exert a force when stretched. Participants rarely interpreted lines of code in the computational loop during the first computational activity, but this changed during later computational activities, with most participants using their physics knowledge to interpret the computational loop. Computational activities in the Matter & Interactions curriculum were revised in light of these findings to include an instructional sequence of tasks to build comprehension of the example program. The modified activities also ask students to create an additional whiteboard prediction of the time evolution of the real-world phenomenon that the example program will eventually model. 
This thesis shows how comprehension tasks identified by Palincsar and Brown (1984) as effective in improving reading comprehension are also effective in helping students apply their physics knowledge to interpret a computer program that attempts to model a real-world phenomenon and to identify errors in their understanding of the use, or omission, of fundamental physics principles in a computational model.

  2. A Methodology for Optimizing the Training and Utilization of Physical Therapy Personnel.

    ERIC Educational Resources Information Center

    Dumas, Neil S.; Muthard, John E.

    A method for analyzing the work in a department of physical therapy was devised and applied in a teaching hospital. Physical therapists, trained as observer-investigators, helped refine the coding system and were able to reliably record job behavior in the physical therapy department. The nature of the therapist's and aide's job was described and…

  3. 7 CFR 3560.625 - Maintaining the physical asset.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 15 2014-01-01 2014-01-01 false Maintaining the physical asset. 3560.625 Section 3560.625 Agriculture Regulations of the Department of Agriculture (Continued) RURAL HOUSING SERVICE... Maintaining the physical asset. On-farm labor housing must meet state and local building and occupancy codes. ...

  4. 7 CFR 3560.625 - Maintaining the physical asset.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 15 2011-01-01 2011-01-01 false Maintaining the physical asset. 3560.625 Section 3560.625 Agriculture Regulations of the Department of Agriculture (Continued) RURAL HOUSING SERVICE... Maintaining the physical asset. On-farm labor housing must meet state and local building and occupancy codes. ...

  5. 7 CFR 3560.625 - Maintaining the physical asset.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 7 Agriculture 15 2013-01-01 2013-01-01 false Maintaining the physical asset. 3560.625 Section 3560.625 Agriculture Regulations of the Department of Agriculture (Continued) RURAL HOUSING SERVICE... Maintaining the physical asset. On-farm labor housing must meet state and local building and occupancy codes. ...

  6. 7 CFR 3560.625 - Maintaining the physical asset.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 7 Agriculture 15 2012-01-01 2012-01-01 false Maintaining the physical asset. 3560.625 Section 3560.625 Agriculture Regulations of the Department of Agriculture (Continued) RURAL HOUSING SERVICE... Maintaining the physical asset. On-farm labor housing must meet state and local building and occupancy codes. ...

  7. 7 CFR 3560.625 - Maintaining the physical asset.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 15 2010-01-01 2010-01-01 false Maintaining the physical asset. 3560.625 Section 3560.625 Agriculture Regulations of the Department of Agriculture (Continued) RURAL HOUSING SERVICE... Maintaining the physical asset. On-farm labor housing must meet state and local building and occupancy codes. ...

  8. Cyclotron resonant scattering feature simulations. II. Description of the CRSF simulation process

    NASA Astrophysics Data System (ADS)

    Schwarm, F.-W.; Ballhausen, R.; Falkner, S.; Schönherr, G.; Pottschmidt, K.; Wolff, M. T.; Becker, P. A.; Fürst, F.; Marcu-Cheatham, D. M.; Hemphill, P. B.; Sokolova-Lapa, E.; Dauser, T.; Klochkov, D.; Ferrigno, C.; Wilms, J.

    2017-05-01

    Context. Cyclotron resonant scattering features (CRSFs) are formed by scattering of X-ray photons off quantized plasma electrons in the strong magnetic field (of the order of 10^12 G) close to the surface of an accreting X-ray pulsar. Due to the complex scattering cross-sections, the line profiles of CRSFs cannot be described by an analytic expression. Numerical methods, such as Monte Carlo (MC) simulations of the scattering processes, are required in order to predict precise line shapes for a given physical setup, which can be compared to observations to gain information about the underlying physics in these systems. Aims: A versatile simulation code is needed for the generation of synthetic cyclotron lines, one that makes it possible to investigate sophisticated geometries for the first time. Methods: The simulation utilizes the mean free path tables described in the first paper of this series for the fast interpolation of propagation lengths. The code is parallelized to make the very time-consuming simulations possible on convenient time scales. Furthermore, it can generate responses to monoenergetic photon injections, producing Green's functions, which can be used later to generate spectra for arbitrary continua. Results: We develop a new simulation code to generate synthetic cyclotron lines for complex scenarios, allowing for unprecedented physical interpretation of the observed data. An associated XSPEC model implementation is used to fit synthetic line profiles to NuSTAR data of Cep X-4. The code has been developed with the main goal of overcoming previous geometrical constraints in MC simulations of CRSFs. By applying this code also to simpler, classic geometries used in previous works, we furthermore address issues of code verification and cross-comparison of various models. The XSPEC model and the Green's function tables are available online (see link in footnote, page 1).
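
The Green's-function approach described above amounts to linear superposition: once the response to monoenergetic injections is tabulated, the emergent spectrum for any continuum is a weighted sum over injection energies. A minimal discrete sketch (the response matrix below is a made-up placeholder, not the published tables):

```python
def synthesize_spectrum(green, continuum):
    """Emergent spectrum from tabulated monoenergetic responses.

    green[i][j] is the response in output energy bin j to a unit
    injection in input bin i; continuum[i] is the injected photon
    count in bin i. The result is the matrix-vector product
    sum_i continuum[i] * green[i], i.e. plain linear superposition.
    """
    n_out = len(green[0])
    out = [0.0] * n_out
    for weight, response in zip(continuum, green):
        for j in range(n_out):
            out[j] += weight * response[j]
    return out

# Two injection bins, three output bins; a flat continuum sums the rows:
G = [[0.7, 0.2, 0.1],
     [0.1, 0.3, 0.6]]
spectrum = synthesize_spectrum(G, [1.0, 1.0])  # approx. [0.8, 0.5, 0.7]
```

This is why tabulated Green's functions pay off: the expensive MC step is done once per geometry, and any continuum model can then be folded through at negligible cost, e.g. inside an XSPEC fit loop.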

  9. A Web 2.0 Interface to Ion Stopping Power and Other Physics Routines for High Energy Density Physics Applications

    NASA Astrophysics Data System (ADS)

    Stoltz, Peter; Veitzer, Seth

    2008-04-01

    We present a new Web 2.0-based interface to physics routines for High Energy Density Physics applications. These routines include models for ion stopping power, sputtering, secondary electron yields and energies, impact ionization cross sections, and atomic radiated power. The Web 2.0 interface allows users to easily explore the results of the models before using the routines within other codes or to analyze experimental results. We discuss how we used various Web 2.0 tools, including Python 2.5, Django, and the Yahoo User Interface library. Finally, we demonstrate the interface by showing as an example the stopping power algorithms researchers are currently using within the Hydra code to analyze warm, dense matter experiments underway at the Neutralized Drift Compression Experiment facility at Lawrence Berkeley National Laboratory.

  10. Barriers to success: physical separation optimizes event-file retrieval in shared workspaces.

    PubMed

    Klempova, Bibiana; Liepelt, Roman

    2017-07-08

    Sharing tasks with other persons can simplify our work and life, but seeing and hearing other people's actions may also be very distracting. The joint Simon effect (JSE) is a standard measure of referential response coding when two persons share a Simon task. Sequential modulations of the joint Simon effect (smJSE) are interpreted as a measure of event-file processing containing stimulus information, response information, and information about the currently relevant control state in a given social situation. This study tested effects of physical (Experiment 1) and virtual (Experiment 2) separation of shared workspaces on referential coding and event-file processing using a joint Simon task. In Experiment 1, participants performed this task in individual (go-nogo), joint, and standard Simon task conditions with and without a transparent curtain (physical separation) placed along the imagined vertical midline of the monitor. In Experiment 2, participants performed the same tasks with and without background music (virtual separation). For response times, physical separation enhanced event-file retrieval, indicated by an smJSE that was larger in the joint Simon task with the curtain than without it (Experiment 1), but did not change referential response coding. In line with this, we also found evidence in the error rates for enhanced event-file processing through physical separation in the joint Simon task. Virtual separation impacted neither event-file processing nor referential coding, but generally slowed down response times in the joint Simon task. For errors, virtual separation hampered event-file processing in the joint Simon task. For the cognitively more demanding standard two-choice Simon task, we found music to have a degrading effect on event-file retrieval for response times. 
Our findings suggest that adding a physical separation optimizes event-file processing in shared workspaces, while music seems to lead to a more relaxed task-processing mode under shared task conditions. In addition, music had an interfering impact on joint error processing and, more generally, when dealing with a more complex task in isolation.

  11. Assessing an Effort to Promote Safe Parks, Streets and Schools in Washington Heights/Inwood: Assessing Urban Infrastructure Conditions as Determinants of Physical Activity. Program Results

    ERIC Educational Resources Information Center

    Nakashian, Mary

    2008-01-01

    Researchers from the Mailman School of Public Health at Columbia University prepared a case study of CODES (Community Outreach and Development Efforts Save). CODES is a coalition of 35 people and organizations in northern Manhattan committed to promoting safe streets, parks and schools. The case study analyzed the factors that prompted CODES'…

  12. Rapid Prediction of Unsteady Three-Dimensional Viscous Flows in Turbopump Geometries

    NASA Technical Reports Server (NTRS)

    Dorney, Daniel J.

    1998-01-01

    A program is underway to improve the efficiency of a three-dimensional Navier-Stokes code and generalize it for nozzle and turbopump geometries. Code modifications have included the implementation of parallel processing software, incorporation of new physical models and generalization of the multiblock capability. The final report contains details of code modifications, numerical results for several nozzle and turbopump geometries, and the implementation of the parallelization software.

  13. Nonambipolar Transport and Torque in Perturbed Equilibria

    NASA Astrophysics Data System (ADS)

    Logan, N. C.; Park, J.-K.; Wang, Z. R.; Berkery, J. W.; Kim, K.; Menard, J. E.

    2013-10-01

    A new Perturbed Equilibrium Nonambipolar Transport (PENT) code has been developed to calculate the neoclassical toroidal torque from radial current composed of both passing and trapped particles in perturbed equilibria. This presentation outlines the physics approach used in the development of the PENT code, with emphasis on retaining general aspect-ratio geometric effects. First, nonambipolar transport coefficients and the corresponding neoclassical toroidal viscous (NTV) torque in perturbed equilibria are re-derived from the first-order gyro-drift-kinetic equation in the "combined-NTV" PENT formalism. The equivalence of NTV torque and the change in potential energy due to kinetic effects [J-K. Park, Phys. Plas., 2011] is then used to showcase computational challenges shared between PENT and the stability codes MISK and MARS-K. Extensive comparisons to a reduced model, which makes numerous large aspect-ratio approximations, are used throughout to emphasize geometry-dependent physics such as pitch angle resonances. These applications make extensive use of the PENT code's native interfacing with the Ideal Perturbed Equilibrium Code (IPEC), and the combination of these codes is a key step towards an iterative solver for self-consistent perturbed equilibrium torque. Supported by US DOE contract #DE-AC02-09CH11466 and the DOE Office of Science Graduate Fellowship administered by the Oak Ridge Institute for Science & Education under contract #DE-AC05-06OR23100.

  14. Spacecraft-plasma interaction codes: NASCAP/GEO, NASCAP/LEO, POLAR, DynaPAC, and EPSAT

    NASA Technical Reports Server (NTRS)

    Mandell, M. J.; Jongeward, G. A.; Cooke, D. L.

    1992-01-01

    Development of a computer code to simulate interactions between the surfaces of a geometrically complex spacecraft and the space plasma environment involves: (1) defining the relevant physical phenomena and formulating them in appropriate levels of approximation; (2) defining a representation for the 3-D space external to the spacecraft and a means for defining the spacecraft surface geometry and embedding it in the surrounding space; (3) packaging the code so that it is easy and practical to use, interpret, and present the results; and (4) validating the code by continual comparison with theoretical models, ground test data, and spaceflight experiments. The physical content, geometrical capabilities, and application of five S-CUBED developed spacecraft plasma interaction codes are discussed. The NASA Charging Analyzer Program/geosynchronous earth orbit (NASCAP/GEO) is used to illustrate the role of electrostatic barrier formation in daylight spacecraft charging. NASCAP/low Earth orbit (LEO) applications to the CHARGE-2 and Space Power Experiment Aboard Rockets (SPEAR)-1 rocket payloads are shown. DynaPAC application to the SPEAR-2 rocket payloads is described. Environment Power System Analysis Tool (EPSAT) is illustrated by application to Tethered Satellite System 1 (TSS-1), SPEAR-3, and Sundance. A detailed description and application of the Potentials of Large Objects in the Auroral Region (POLAR) Code are presented.

  15. Edge-diffraction effects in RCS predictions and their importance in systems analysis

    NASA Astrophysics Data System (ADS)

    Friess, W. F.; Klement, D.; Ruppel, M.; Stein, Volker

    1996-06-01

    In developing RCS prediction codes, a variety of physical effects, such as edge diffraction, have to be considered, with the consequence that the computational effort increases considerably. This fact limits the field of application of such codes, especially if the RCS data serve as input parameters for system simulators, which very often need these data for a large number of observation angles and/or frequencies. Conversely, the results of a system analysis can be used to estimate the relevance of physical effects from a systems viewpoint and to rank them according to their magnitude. This paper tries to evaluate the importance of RCS predictions containing an edge-diffracted field for systems analysis. A double dihedral with a strongly depolarizing behavior and a generic airplane design containing many arbitrarily oriented edges are used as test structures. Data of the scattered field are generated by the RCS computer code SIGMA with and without edge diffraction effects. These data are submitted to the code DORA to determine radar range and radar detectability, and to a SAR simulator code to generate SAR imagery. In both cases special scenarios are assumed. The essential features of the computer codes in their current state are described, and the results are presented and discussed from a systems viewpoint.

  16. Physical-layer network coding for passive optical interconnect in datacenter networks.

    PubMed

    Lin, Rui; Cheng, Yuxin; Guan, Xun; Tang, Ming; Liu, Deming; Chan, Chun-Kit; Chen, Jiajia

    2017-07-24

    We introduce the physical-layer network coding (PLNC) technique in a passive optical interconnect (POI) architecture for datacenter networks. The implementation of PLNC in the POI at 2.5 Gb/s and 10 Gb/s has been experimentally validated, while the gains in terms of network-layer performance have been investigated by simulation. The results reveal that wavelength usage can be reduced by half while keeping packet drop negligible, and that a significant improvement in packet delay, especially under high traffic load, can be achieved by employing PLNC over the POI.
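
The core idea of physical-layer network coding can be sketched at the bit level (a generic two-way-relay illustration, not the authors' optical implementation): the relay forwards the XOR of two simultaneously received packets, and each endpoint recovers its partner's packet by XOR-ing with its own copy, halving the number of relay transmissions:

```python
def xor_bytes(a, b):
    """Bitwise XOR of two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

# Endpoints A and B each send one packet toward the relay in the same slot;
# the relay broadcasts a single network-coded packet instead of two.
pkt_a, pkt_b = b"hello", b"world"
coded = xor_bytes(pkt_a, pkt_b)

# Each endpoint cancels its own contribution to recover the other's packet,
# since (A xor B) xor A == B and (A xor B) xor B == A.
recovered_at_a = xor_bytes(coded, pkt_a)  # == pkt_b
recovered_at_b = xor_bytes(coded, pkt_b)  # == pkt_a
```

The wavelength saving reported above follows the same arithmetic: one coded broadcast replaces two separate relay transmissions, so the shared medium carries half the relayed traffic.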

  17. A predictive transport modeling code for ICRF-heated tokamaks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Phillips, C.K.; Hwang, D.Q.; Houlberg, W.

    In this report, a detailed description of the physics included in the WHIST/RAZE package, as well as a few illustrative examples of the capabilities of the package, is presented. An in-depth analysis of ICRF heating experiments using WHIST/RAZE will be discussed in a forthcoming report. A general overview of the philosophy behind the structure of the WHIST/RAZE package, a summary of the features of the WHIST code, and a description of the interface to the RAZE subroutines are presented in section 2 of this report. Details of the physics contained in the RAZE code are examined in section 3. Sample results from the package follow in section 4, with concluding remarks and a discussion of possible improvements to the package in section 5.

  18. TRIQS: A toolbox for research on interacting quantum systems

    NASA Astrophysics Data System (ADS)

    Parcollet, Olivier; Ferrero, Michel; Ayral, Thomas; Hafermann, Hartmut; Krivenko, Igor; Messio, Laura; Seth, Priyanka

    2015-11-01

    We present the TRIQS library, a Toolbox for Research on Interacting Quantum Systems. It is an open-source, computational physics library providing a framework for the quick development of applications in the field of many-body quantum physics, and in particular, strongly-correlated electronic systems. It supplies components to develop codes in a modern, concise and efficient way: e.g. Green's function containers, a generic Monte Carlo class, and simple interfaces to HDF5. TRIQS is a C++/Python library that can be used from either language. It is distributed under the GNU General Public License (GPLv3). State-of-the-art applications based on the library, such as modern quantum many-body solvers and interfaces between density-functional-theory codes and dynamical mean-field theory (DMFT) codes are distributed along with it.

  19. LAURA Users Manual: 5.3-48528

    NASA Technical Reports Server (NTRS)

    Mazaheri, Alireza; Gnoffo, Peter A.; Johnston, Chirstopher O.; Kleb, Bil

    2010-01-01

    This users manual provides in-depth information concerning installation and execution of LAURA, version 5. LAURA is a structured, multi-block, computational aerothermodynamic simulation code. Version 5 represents a major refactoring of the original Fortran 77 LAURA code toward a modular structure afforded by Fortran 95. The refactoring improved usability and maintainability by eliminating the requirement for problem-dependent re-compilations, providing more intuitive distribution of functionality, and simplifying interfaces required for multi-physics coupling. As a result, LAURA now shares gas-physics modules, MPI modules, and other low-level modules with the FUN3D unstructured-grid code. In addition to internal refactoring, several new features and capabilities have been added, e.g., a GNU-standard installation process, parallel load balancing, automatic trajectory point sequencing, free-energy minimization, and coupled ablation and flowfield radiation.

  20. LAURA Users Manual: 5.5-64987

    NASA Technical Reports Server (NTRS)

    Mazaheri, Alireza; Gnoffo, Peter A.; Johnston, Christopher O.; Kleb, William L.

    2013-01-01

    This users manual provides in-depth information concerning installation and execution of LAURA, version 5. LAURA is a structured, multi-block, computational aerothermodynamic simulation code. Version 5 represents a major refactoring of the original Fortran 77 LAURA code toward a modular structure afforded by Fortran 95. The refactoring improved usability and maintainability by eliminating the requirement for problem-dependent re-compilations, providing more intuitive distribution of functionality, and simplifying interfaces required for multi-physics coupling. As a result, LAURA now shares gas-physics modules, MPI modules, and other low-level modules with the FUN3D unstructured-grid code. In addition to internal refactoring, several new features and capabilities have been added, e.g., a GNU-standard installation process, parallel load balancing, automatic trajectory point sequencing, free-energy minimization, and coupled ablation and flowfield radiation.

  1. LAURA Users Manual: 5.4-54166

    NASA Technical Reports Server (NTRS)

    Mazaheri, Alireza; Gnoffo, Peter A.; Johnston, Christopher O.; Kleb, Bil

    2011-01-01

    This users manual provides in-depth information concerning installation and execution of LAURA, version 5. LAURA is a structured, multi-block, computational aerothermodynamic simulation code. Version 5 represents a major refactoring of the original Fortran 77 LAURA code toward a modular structure afforded by Fortran 95. The refactoring improved usability and maintainability by eliminating the requirement for problem-dependent re-compilations, providing more intuitive distribution of functionality, and simplifying interfaces required for multi-physics coupling. As a result, LAURA now shares gas-physics modules, MPI modules, and other low-level modules with the FUN3D unstructured-grid code. In addition to internal refactoring, several new features and capabilities have been added, e.g., a GNU-standard installation process, parallel load balancing, automatic trajectory point sequencing, free-energy minimization, and coupled ablation and flowfield radiation.

  2. A predictive transport modeling code for ICRF-heated tokamaks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Phillips, C.K.; Hwang, D.Q.; Houlberg, W.

    1992-02-01

    In this report, a detailed description of the physics included in the WHIST/RAZE package, as well as a few illustrative examples of the capabilities of the package, is presented. An in-depth analysis of ICRF heating experiments using WHIST/RAZE will be discussed in a forthcoming report. A general overview of the philosophy behind the structure of the WHIST/RAZE package, a summary of the features of the WHIST code, and a description of the interface to the RAZE subroutines are presented in section 2 of this report. Details of the physics contained in the RAZE code are examined in section 3. Sample results from the package follow in section 4, with concluding remarks and a discussion of possible improvements to the package in section 5.

  3. Efficient modeling of laser-plasma accelerator staging experiments using INF&RNO

    NASA Astrophysics Data System (ADS)

    Benedetti, C.; Schroeder, C. B.; Geddes, C. G. R.; Esarey, E.; Leemans, W. P.

    2017-03-01

    The computational framework INF&RNO (INtegrated Fluid & paRticle simulatioN cOde) allows for fast and accurate modeling, in 2D cylindrical geometry, of several aspects of laser-plasma accelerator physics. In this paper, we present some of the new features of the code, including the quasistatic Particle-In-Cell (PIC)/fluid modality, and describe using different computational grids and time steps for the laser envelope and the plasma wake. These and other features allow for a speedup of several orders of magnitude compared to standard full 3D PIC simulations while still retaining physical fidelity. INF&RNO is used to support the experimental activity at the BELLA Center, and we will present an example of the application of the code to the laser-plasma accelerator staging experiment.

  4. An integrated radiation physics computer code system.

    NASA Technical Reports Server (NTRS)

    Steyn, J. J.; Harris, D. W.

    1972-01-01

    An integrated computer code system for the semi-automatic and rapid analysis of experimental and analytic problems in gamma photon and fast neutron radiation physics is presented. Such problems as the design of optimum radiation shields and radioisotope power source configurations may be studied. The system codes allow for the unfolding of complex neutron and gamma photon experimental spectra. Monte Carlo and analytic techniques are used for the theoretical prediction of radiation transport. The system includes a multichannel pulse-height analyzer scintillation and semiconductor spectrometer coupled to an on-line digital computer with appropriate peripheral equipment. The system is geometry generalized as well as self-contained with respect to material nuclear cross sections and the determination of the spectrometer response functions. Input data may be either analytic or experimental.

  5. OPTIMASS: A package for the minimization of kinematic mass functions with constraints

    DOE PAGES

    Cho, Won Sang; Gainer, James S.; Kim, Doojin; ...

    2016-01-07

    Reconstructed mass variables, such as M2, M2C, M*T, and MT2W, play an essential role in searches for new physics at hadron colliders. The calculation of these variables generally involves constrained minimization in a large parameter space, which is numerically challenging. We provide a C++ code, Optimass, which interfaces with the Minuit library to perform this constrained minimization using the Augmented Lagrangian Method. The code can be applied to arbitrarily general event topologies, thus allowing the user to significantly extend the existing set of kinematic variables. Here, we describe this code, explain its physics motivation, and demonstrate its use in the analysis of the fully leptonic decay of pair-produced top quarks using M2 variables.
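
The Augmented Lagrangian Method named above can be illustrated on a toy equality-constrained problem. This is a hand-rolled gradient-descent sketch of the general technique, not Optimass itself (which delegates the inner minimization to Minuit): minimize f(x, y) = x^2 + y^2 subject to x + y = 1, whose solution is (1/2, 1/2):

```python
def augmented_lagrangian_min(outer_steps=20, inner_steps=500,
                             mu=10.0, lr=0.01):
    """Minimize x^2 + y^2 subject to x + y = 1 via an augmented Lagrangian.

    Inner loop: gradient descent on L = f + lam*c + (mu/2)*c^2, where
    c = x + y - 1 is the constraint residual. Outer loop: multiplier
    update lam += mu*c, which drives the residual to zero without
    needing mu to grow unboundedly (unlike a pure penalty method).
    """
    x = y = 0.0
    lam = 0.0
    for _ in range(outer_steps):
        for _ in range(inner_steps):
            c = x + y - 1.0
            gx = 2.0 * x + lam + mu * c   # dL/dx
            gy = 2.0 * y + lam + mu * c   # dL/dy
            x -= lr * gx
            y -= lr * gy
        lam += mu * (x + y - 1.0)         # multiplier update
    return x, y

x, y = augmented_lagrangian_min()  # both approach 0.5
```

The same structure scales up to the event-topology fits in the abstract: f becomes the squared invariant-mass objective, and c collects the mass-shell and momentum-conservation constraints.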

  6. Filter-fluorescer measurement of low-voltage simulator x-ray energy spectra

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baldwin, G.T.; Craven, R.E.

    X-ray energy spectra of the Maxwell Laboratories MBS and the Physics International Pulserad 737 were measured using an eight-channel filter-fluorescer array. The PHOSCAT computer code was used to calculate channel response functions, and the UFO code to unfold the spectra.

  7. 7 CFR 4274.337 - Other regulatory requirements.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ....337 Agriculture Regulations of the Department of Agriculture (Continued) RURAL BUSINESS-COOPERATIVE... recipient on the basis of sex, marital status, race, color, religion, national origin, age, physical or... of one of the following model building codes or the latest edition of that code providing an...

  8. Global Coordinates and Exact Aberration Calculations Applied to Physical Optics Modeling of Complex Optical Systems

    NASA Astrophysics Data System (ADS)

    Lawrence, G.; Barnard, C.; Viswanathan, V.

    1986-11-01

    Historically, wave optics computer codes have been paraxial in nature. Folded systems could be modeled by "unfolding" the optical system. Calculation of optical aberrations is, in general, left for the analyst to do with off-line codes. While such paraxial codes were adequate for the simpler systems being studied 10 years ago, current problems such as phased arrays, ring resonators, coupled resonators, and grazing incidence optics require a major advance in analytical capability. This paper describes extension of the physical optics codes GLAD and GLAD V to include a global coordinate system and exact ray aberration calculations. The global coordinate system allows components to be positioned and rotated arbitrarily. Exact aberrations are calculated for components in aligned or misaligned configurations by using ray tracing to compute optical path differences and diffraction propagation. Optical path lengths between components and beam rotations in complex mirror systems are calculated accurately so that coherent interactions in phased arrays and coupled devices may be treated correctly.

  9. High-Fidelity Coupled Monte-Carlo/Thermal-Hydraulics Calculations

    NASA Astrophysics Data System (ADS)

    Ivanov, Aleksandar; Sanchez, Victor; Ivanov, Kostadin

    2014-06-01

    Monte Carlo methods have been used as reference reactor physics calculation tools worldwide. The advance in computer technology allows the calculation of detailed flux distributions in both space and energy. In most cases, however, those calculations are done under the assumption of homogeneous material density and temperature distributions. The aim of this work is to develop a consistent methodology for providing realistic three-dimensional thermal-hydraulic distributions by coupling the in-house developed sub-channel code SUBCHANFLOW with the standard Monte-Carlo transport code MCNP. In addition to the innovative technique of on-the-fly material definition, a flux-based weight-window technique has been introduced to improve both the magnitude and the distribution of the relative errors. Finally, a coupled code system for the simulation of steady-state reactor physics problems has been developed. Besides the problem of effective feedback data interchange between the codes, the treatment of the temperature dependence of the continuous-energy nuclear data has been investigated.
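    The flux-based weight-window idea can be illustrated with a generic global variance reduction heuristic (a sketch in the spirit of Cooper and Larsen's method, not the actual MCNP/SUBCHANFLOW coupling): setting the lower weight bounds proportional to a previously tallied forward flux flattens the Monte Carlo particle population, evening out relative errors across the mesh.

```python
import numpy as np

# Illustrative flux-based weight-window heuristic: particles entering
# low-flux cells fall below the lower bound and get split, boosting the
# local population; high-weight particles in high-flux cells get rouletted.
def weight_window_lower_bounds(flux):
    flux = np.asarray(flux, dtype=float)
    return flux / flux.max()   # lower bounds proportional to forward flux

flux = np.array([1.0, 0.5, 0.1, 0.01])   # illustrative cell-averaged flux tally
wl = weight_window_lower_bounds(flux)
```

    Iterating this with successive flux tallies is what lets the technique improve both the magnitude and the spatial distribution of the relative errors.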

  10. Method for transition prediction in high-speed boundary layers, phase 2

    NASA Astrophysics Data System (ADS)

    Herbert, T.; Stuckert, G. K.; Lin, N.

    1993-09-01

    The parabolized stability equations (PSE) are a new and more reliable approach to analyzing the stability of streamwise-varying flows such as boundary layers. This approach has been previously validated for idealized incompressible flows. Here, the PSE are formulated for highly compressible flows in general curvilinear coordinates to permit the analysis of high-speed boundary-layer flows over fairly general bodies. Rigorous numerical studies are carried out to study the convergence and accuracy of the linear-stability code LSH and the linear/nonlinear PSE code PSH. Physical interfaces are set up to analyze the M = 8 boundary layer over a blunt cone calculated by using a thin-layer Navier-Stokes (TLNS) code, and the flow over a sharp cone at angle of attack calculated using the AFWAL parabolized Navier-Stokes (PNS) code. While stability and transition studies at high speeds are far from routine, the method developed here is the best tool available to research the physical processes in high-speed boundary layers.

  11. Object-Oriented/Data-Oriented Design of a Direct Simulation Monte Carlo Algorithm

    NASA Technical Reports Server (NTRS)

    Liechty, Derek S.

    2014-01-01

    Over the past decade, there has been much progress towards improved phenomenological modeling and algorithmic updates for the direct simulation Monte Carlo (DSMC) method, which provides a probabilistic physical simulation of gas flows. These improvements have largely been based on the work of the originator of the DSMC method, Graeme Bird. Of primary importance are improved chemistry, internal energy, and physics modeling and a reduction in time to solution. These allow for an expanded range of possible solutions in altitude and velocity space. NASA's current production code, the DSMC Analysis Code (DAC), is well-established and based on Bird's 1994 algorithms written in Fortran 77, and has proven difficult to upgrade. A new DSMC code is being developed in the C++ programming language using object-oriented and data-oriented design paradigms to facilitate the inclusion of the recent improvements and future development activities. The development efforts on the new code, the Multiphysics Algorithm with Particles (MAP), are described, and performance comparisons are made with DAC.

  12. Developing Discontinuous Galerkin Methods for Solving Multiphysics Problems in General Relativity

    NASA Astrophysics Data System (ADS)

    Kidder, Lawrence; Field, Scott; Teukolsky, Saul; Foucart, Francois; SXS Collaboration

    2016-03-01

    Multi-messenger observations of the merger of black hole-neutron star and neutron star-neutron star binaries, and of supernova explosions will probe fundamental physics inaccessible to terrestrial experiments. Modeling these systems requires a relativistic treatment of hydrodynamics, including magnetic fields, as well as neutrino transport and nuclear reactions. The accuracy, efficiency, and robustness of current codes that treat all of these problems is not sufficient to keep up with the observational needs. We are building a new numerical code that uses the Discontinuous Galerkin method with a task-based parallelization strategy, a promising combination that will allow multiphysics applications to be treated both accurately and efficiently on petascale and exascale machines. The code will scale to more than 100,000 cores for efficient exploration of the parameter space of potential sources and allowed physics, and the high-fidelity predictions needed to realize the promise of multi-messenger astronomy. I will discuss the current status of the development of this new code.

  13. Studies of Planet Formation Using a Hybrid N-Body + Planetesimal Code

    NASA Technical Reports Server (NTRS)

    Kenyon, Scott J.

    2004-01-01

    The goal of our proposal was to use a hybrid multi-annulus planetesimal/n-body code to examine the planetesimal theory, one of the two main theories of planet formation. We developed this code to follow the evolution of numerous 1 m to 1 km planetesimals as they collide, merge, and grow into full-fledged planets. Our goal was to apply the code to several well-posed, topical problems in planet formation and to derive observational consequences of the models. We planned to construct detailed models to address two fundamental issues: (1) icy planets: models for icy planet formation will demonstrate how the physical properties of debris disks - including the Kuiper Belt in our solar system - depend on initial conditions and input physics; and (2) terrestrial planets: calculations following the evolution of 1-10 km planetesimals into Earth-mass planets and rings of dust will provide a better understanding of how terrestrial planets form and interact with their environment.

  14. Physics Based Model for Cryogenic Chilldown and Loading. Part IV: Code Structure

    NASA Technical Reports Server (NTRS)

    Luchinsky, D. G.; Smelyanskiy, V. N.; Brown, B.

    2014-01-01

    This is the fourth report in a series of technical reports that describe the application of a separated two-phase flow model to the cryogenic loading operation. In this report we present the structure of the code. The code consists of five major modules: (1) geometry module; (2) solver; (3) material properties; (4) correlations; and finally (5) stability control module. The two key modules, solver and correlations, are further divided into a number of submodules. Most of the physics and knowledge databases related to the properties of cryogenic two-phase flow are included in the cryogenic correlations module. The functional form of those correlations is not well established and is a subject of extensive research. Multiple parametric forms for various correlations are currently available. Some of them are included in the correlations module, as will be described in detail in a separate technical report. Here we describe the overall structure of the code and focus on the details of the solver and stability control modules.

  15. An X-Ray Analysis Database of Photoionization Cross Sections Including Variable Ionization

    NASA Technical Reports Server (NTRS)

    Wang, Ping; Cohen, David H.; MacFarlane, Joseph J.; Cassinelli, Joseph P.

    1997-01-01

    Results of research efforts in the following areas are discussed: review of the major theoretical and experimental data on subshell photoionization cross sections and ionization edges of atomic ions, to assess the accuracy of the data and to compile the most reliable of these data in our own database; detailed atomic physics calculations to complement the database for all ions of 17 cosmically abundant elements; reconciling the data from various sources and our own calculations; and fitting cross sections with functional approximations and incorporating these functions into a compact computer code. Also, efforts included adapting an ionization equilibrium code, tabulating results, incorporating them into the overall program, and testing the code (both ionization equilibrium and opacity codes) with existing observational data. The background and scientific applications of this work are discussed. Atomic physics cross section models and calculations are described. Calculation results are compared with available experimental data and other theoretical data. The functional approximations used for fitting cross sections are outlined, and applications of the database are discussed.

  16. SPIN: An Inversion Code for the Photospheric Spectral Line

    NASA Astrophysics Data System (ADS)

    Yadav, Rahul; Mathew, Shibu K.; Tiwary, Alok Ranjan

    2017-08-01

    Inversion codes are the most useful tools to infer the physical properties of the solar atmosphere from the interpretation of Stokes profiles. In this paper, we present the details of a new Stokes Profile INversion code (SPIN) developed specifically to invert the spectro-polarimetric data of the Multi-Application Solar Telescope (MAST) at Udaipur Solar Observatory. The SPIN code adopts the Milne-Eddington approximation to solve the polarized radiative transfer equation (RTE), and a modified Levenberg-Marquardt algorithm has been employed for the fitting. We describe the details and utilization of the SPIN code to invert spectro-polarimetric data. We also present the details of tests performed to validate the inversion code by comparing its results with those from other widely used inversion codes (VFISV and SIR). The inverted results of the SPIN code after its application to Hinode/SP data have been compared with the inverted results from other inversion codes.

  17. SYMTRAN - A Time-dependent Symmetric Tandem Mirror Transport Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hua, D; Fowler, T

    2004-06-15

    A time-dependent version of the steady-state radial transport model in symmetric tandem mirrors in Ref. [1] has been coded up and first tests performed. Our code, named SYMTRAN, is an adaptation of the earlier SPHERE code for spheromaks, now modified for tandem mirror physics. Motivated by Post's new concept of kinetic stabilization of symmetric mirrors, it is an extension of the earlier TAMRAC rate-equation code omitting radial transport [2], which successfully accounted for experimental results in TMX. The SYMTRAN code differs from the earlier tandem mirror radial transport code TMT in that our code is focused on axisymmetric tandem mirrors and classical diffusion, whereas TMT emphasized non-ambipolar transport in TMX and MFTF-B due to yin-yang plugs and non-symmetric transitions between the plugs and the axisymmetric center cell. Both codes exhibit interesting but different non-linear behavior.

  18. Concerns of the Novice Physical Education Teacher

    ERIC Educational Resources Information Center

    Gordon, Evelyn J.

    2016-01-01

    The purpose of this case study was to examine novice physical education teachers in the first and second year of teaching. Participants included two novice physical education teachers, John in Year 1 and Mark in Year 2. Methodology included observations, semistructured interviews, and documents. Data were analyzed using open coding and constant…

  19. Error suppression via complementary gauge choices in Reed-Muller codes

    NASA Astrophysics Data System (ADS)

    Chamberland, Christopher; Jochym-O'Connor, Tomas

    2017-09-01

    Concatenation of two quantum error-correcting codes with complementary sets of transversal gates can provide a means toward universal fault-tolerant quantum computation. We first show that it is generally preferable to choose the inner code with the higher pseudo-threshold to achieve lower logical failure rates. We then explore the threshold properties of a wide range of concatenation schemes. Notably, we demonstrate that the concatenation of complementary sets of Reed-Muller codes can increase the code capacity threshold under depolarizing noise when compared to extensions of previously proposed concatenation models. We also analyze the properties of logical errors under circuit-level noise, showing that smaller codes perform better for all sampled physical error rates. Our work provides new insights into the performance of universal concatenated quantum codes for both code capacity and circuit-level noise.

  20. Variable weight spectral amplitude coding for multiservice OCDMA networks

    NASA Astrophysics Data System (ADS)

    Seyedzadeh, Saleh; Rahimian, Farzad Pour; Glesk, Ivan; Kakaee, Majid H.

    2017-09-01

    The emergence of heterogeneous data traffic such as voice over IP, video streaming, and online gaming has created demand for networks capable of supporting quality of service (QoS) at the physical layer with traffic prioritisation. This paper proposes a new variable-weight code based on spectral amplitude coding for optical code-division multiple-access (OCDMA) networks to support QoS differentiation. The proposed variable-weight multi-service (VW-MS) code relies on a basic matrix construction. A mathematical model is developed for performance evaluation of VW-MS OCDMA networks. It is shown that the proposed code provides an optimal code length with minimum cross-correlation value when compared to other codes. Numerical results for a VW-MS OCDMA network designed for triple-play services operating at 0.622 Gb/s, 1.25 Gb/s, and 2.5 Gb/s are considered.

  1. Photoionization and High Density Gas

    NASA Technical Reports Server (NTRS)

    Kallman, T.; Bautista, M.; White, Nicholas E. (Technical Monitor)

    2002-01-01

    We present results of calculations using the XSTAR version 2 computer code. This code is loosely based on the XSTAR v.1 code, which has been available for public use for some time. However, it represents an improvement and update in several major respects, including atomic data, code structure, user interface, and an improved physical description of ionization/excitation. In particular, it is now applicable to high-density situations in which significant excited atomic level populations are likely to occur. We describe the computational techniques and assumptions, and present sample runs with particular emphasis on high-density situations.

  2. Decomposition of the optical transfer function: wavefront coding imaging systems

    NASA Astrophysics Data System (ADS)

    Muyo, Gonzalo; Harvey, Andy R.

    2005-10-01

    We describe the mapping of the optical transfer function (OTF) of an incoherent imaging system into a geometrical representation. We show that for defocused traditional and wavefront-coded systems the OTF can be represented as a generalized Cornu spiral. This representation provides a physical insight into the way in which wavefront coding can increase the depth of field of an imaging system and permits analytical quantification of salient OTF parameters, such as the depth of focus, the location of nulls, and amplitude and phase modulation of the wavefront-coding OTF.
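    The Cornu-spiral representation can be checked numerically: the running integral of exp(iπt²/2) traces the spiral whose coordinates are the Fresnel integrals C(t) and S(t), and in the geometrical picture above an OTF value corresponds to a chord between two points of such a spiral. A minimal sketch (generic Fresnel-integral mathematics, not the authors' code):

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid
from scipy.special import fresnel

# Running integral of exp(i*pi*t^2/2): its partial sums trace the Cornu
# spiral in the complex plane.
t = np.linspace(0.0, 5.0, 20001)
integrand = np.exp(1j * np.pi * t**2 / 2.0)
spiral = cumulative_trapezoid(integrand, t, initial=0.0)

S, C = fresnel(t)                              # note: scipy returns (S, C)
err = np.max(np.abs(spiral - (C + 1j * S)))    # agreement with C(t) + i*S(t)
```

    Defocus stretches the parameter range swept along the spiral, which is how nulls and the phase modulation of the wavefront-coded OTF acquire a simple geometric reading.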

  3. Benchmarking atomic physics models for magnetically confined fusion plasma physics experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    May, M.J.; Finkenthal, M.; Soukhanovskii, V.

    In present magnetically confined fusion devices, high and intermediate Z impurities are either puffed into the plasma for divertor radiative cooling experiments or are sputtered from the high Z plasma facing armor. The beneficial cooling of the edge as well as the detrimental radiative losses from the core of these impurities can be properly understood only if the atomic physics used in the modeling of the cooling curves is very accurate. To this end, a comprehensive experimental and theoretical analysis of some relevant impurities is undertaken. Gases (Ne, Ar, Kr, and Xe) are puffed and nongases are introduced through laser ablation into the FTU tokamak plasma. The charge state distributions and total density of these impurities are determined from spatial scans of several photometrically calibrated vacuum ultraviolet and x-ray spectrographs (3–1600 Å), the multiple ionization state transport code (MIST), and a collisional radiative model. The radiative power losses are measured with bolometry, and the emissivity profiles were measured by a visible bremsstrahlung array. The ionization balance, excitation physics, and the radiative cooling curves are computed from the Hebrew University Lawrence Livermore atomic code (HULLAC) and are benchmarked by these experiments. (Supported by U.S. DOE Grant No. DE-FG02-86ER53214 at JHU and Contract No. W-7405-ENG-48 at LLNL.) © 1999 American Institute of Physics.

  4. New perspectives on the theory of justice: implications for physical therapy ethics and clinical practice.

    PubMed

    Edwards, Ian; Delany, Clare M; Townsend, Anne F; Swisher, Laura Lee

    2011-11-01

    Recent revisions of physical therapy codes of ethics have included a new emphasis concerning health inequities and social injustice. This emphasis reflects the growing evidence regarding the importance of social determinants of health, epidemiological trends for health service delivery, and the enhanced participation of physical therapists in shaping health care reform in a number of international contexts. This perspective article suggests that there is a "disconnect" between the societal obligations and aspirations expressed in the revised codes and the individualist ethical frameworks that predominantly underpin them. Primary health care is an approach to health care arising from an understanding of the nexus between health and social disadvantage that considers the health needs of patients as expressive of the health needs of the communities of which they are members. It is proposed that re-thinking ethical frameworks expressed in codes of ethics can both inform and underpin practical strategies for working in primary health care. This perspective article provides a new focus on the ethical principle of justice: the ethical principle that arguably remains the least consensually understood and developed in the ethics literature of physical therapy. A relatively recent theory of justice known as the "capability approach to justice" is discussed, along with its potential to assist physical therapy practitioners to further develop moral agency in order to address situations of health inequity and social injustice in clinical practice.

  5. Guidelines for Coding and Entering Ground-Water Data into the Ground-Water Site Inventory Data Base, Version 4.6, U.S. Geological Survey, Washington Water Science Center

    DTIC Science & Technology

    2006-01-01

    collected, code both. Code / Type of Analysis: A – Physical properties; B – Common ions; I – Common ions/trace elements; J – Sanitary analysis and... (1) A ground-water site is coded as if it is a single point, not a geographic area or property. (2) Latitude and longitude should be determined at a... terrace from an adjacent upland on one side, and a lowland coast or valley on the other. Due to the effects of erosion, the terrace surface may not be as

  6. Statistical mechanics of broadcast channels using low-density parity-check codes.

    PubMed

    Nakamura, Kazutaka; Kabashima, Yoshiyuki; Morelos-Zaragoza, Robert; Saad, David

    2003-03-01

    We investigate the use of Gallager's low-density parity-check (LDPC) codes in a degraded broadcast channel, one of the fundamental models in network information theory. Combining linear codes is a standard technique in practical network communication schemes and is known to provide better performance than simple time sharing methods when algebraic codes are used. The statistical physics based analysis shows that the practical performance of the suggested method, achieved by employing the belief propagation algorithm, is superior to that of LDPC based time sharing codes while the best performance, when received transmissions are optimally decoded, is bounded by the time sharing limit.

  7. An Experiment in Scientific Program Understanding

    NASA Technical Reports Server (NTRS)

    Stewart, Mark E. M.; Owen, Karl (Technical Monitor)

    2000-01-01

    This paper concerns a procedure that analyzes aspects of the meaning or semantics of scientific and engineering code. This procedure involves taking a user's existing code, adding semantic declarations for some primitive variables, and parsing this annotated code using multiple, independent expert parsers. These semantic parsers encode domain knowledge and recognize formulae in different disciplines including physics, numerical methods, mathematics, and geometry. The parsers will automatically recognize and document some static, semantic concepts and help locate some program semantic errors. Results are shown for three intensively studied codes and seven blind test cases; all test cases are state of the art scientific codes. These techniques may apply to a wider range of scientific codes. If so, the techniques could reduce the time, risk, and effort required to develop and modify scientific codes.

  8. Photoneutron Reaction Data for Nuclear Physics and Astrophysics

    NASA Astrophysics Data System (ADS)

    Utsunomiya, Hiroaki; Renstrøm, Therese; Tveten, Gry Merete; Gheorghe, Ioana; Filipescu, Dan Mihai; Belyshev, Sergey; Stopani, Konstantin; Wang, Hongwei; Fan, Gongtao; Lui, Yiu-Wing; Symochko, Dmytro; Goriely, Stephane; Larsen, Ann-Cecilie; Siem, Sunniva; Varlamov, Vladimir; Ishkhanov, Boris; Glodariu, Tudor; Krzysiek, Mateusz; Takenaka, Daiki; Ari-izumi, Takashi; Amano, Sho; Miyamoto, Shuji

    2018-05-01

    We discuss the role of photoneutron reaction data in nuclear physics and astrophysics in conjunction with the Coordinated Research Project of the International Atomic Energy Agency with the code F41032 (IAEA-CRP F41032).

  9. The LDCE Particle Impact Experiment as flown on STS-46. [limited duration space environment candidate materials exposure (LDCE)

    NASA Technical Reports Server (NTRS)

    Maag, Carl R.; Tanner, William G.; Borg, Janet; Bibring, Jean-Pierre; Alexander, W. Merle; Maag, Andrew J.

    1992-01-01

    Many materials and techniques have been developed by the authors to sample the flux of particles in Low Earth Orbit (LEO). Through regular in-situ sampling of the flux in LEO, these materials and techniques have produced data which complement the data now being amassed by the Long Duration Exposure Facility (LDEF) research activities. Orbital debris models have not been able to describe the flux of particles with d_p ≤ 0.05 cm because of the lack of data. Even though LDEF will provide a much needed baseline flux measurement, the continuous monitoring of micron and sub-micron size particles must be carried out. A flight experiment was conducted on the Space Shuttle as part of the LDCE payload to develop an understanding of the spatial density (concentration) as a function of size (mass) for particle sizes 1 x 10(exp 6) cm and larger. In addition to the enumeration of particle impacts, it is the intent of the experiment that hypervelocity particles be captured and returned intact. Measurements will be performed post flight to determine the flux density, diameters, and subsequent effects on various optical, thermal control, and structural materials. In addition to these principal measurements, the Particle Impact Experiment (PIE) also provides a structure and sample holders for the exposure of passive material samples to the space environment, e.g., thermal cycling and atomic oxygen. The experiment will measure the optical property changes of mirrors and will provide the fluence of the ambient atomic oxygen environment to other payload experimenters. In order to augment the amount of material returned in a form which can be analyzed, the survivability of the experiment as well as of the captured particles will be assessed. Using Sandia National Laboratory's hydrodynamic computer code CTH, hypervelocity impacts on the materials which comprise the experiments have been investigated, and the progress of these studies is reported.

  10. Computation of Thermodynamic Equilibria Pertinent to Nuclear Materials in Multi-Physics Codes

    NASA Astrophysics Data System (ADS)

    Piro, Markus Hans Alexander

    Nuclear energy plays a vital role in supporting electrical needs and fulfilling commitments to reduce greenhouse gas emissions. Research is a continuing necessity to improve the predictive capabilities of fuel behaviour in order to reduce costs and to meet increasingly stringent safety requirements by the regulator. Moreover, a renewed interest in nuclear energy has given rise to a "nuclear renaissance" and the necessity to design the next generation of reactors. In support of this goal, significant research efforts have been dedicated to the advancement of numerical modelling and computational tools in simulating various physical and chemical phenomena associated with nuclear fuel behaviour. This undertaking in effect is collecting the experience and observations of a past generation of nuclear engineers and scientists in a meaningful way for future design purposes. There is an increasing desire to integrate thermodynamic computations directly into multi-physics nuclear fuel performance and safety codes. A new equilibrium thermodynamic solver is being developed with this matter as a primary objective. This solver is intended to provide thermodynamic material properties and boundary conditions for continuum transport calculations. There are several concerns with the use of existing commercial thermodynamic codes: computational performance; limited capabilities in handling large multi-component systems of interest to the nuclear industry; convenient incorporation into other codes with quality assurance considerations; and, licensing entanglements associated with code distribution. The development of this software in this research is aimed at addressing all of these concerns. The approach taken in this work exploits fundamental principles of equilibrium thermodynamics to simplify the numerical optimization equations. 
In brief, the chemical potentials of all species and phases in the system are constrained by estimates of the chemical potentials of the system components at each iterative step, and the objective is to minimize the residuals of the mass balance equations. Several numerical advantages are achieved through this simplification. In particular, computational expense is reduced and the rate of convergence is enhanced. Furthermore, the software has demonstrated the ability to solve systems involving as many as 118 component elements. An early version of the code has already been integrated into the Advanced Multi-Physics (AMP) code under development by the Oak Ridge National Laboratory, Los Alamos National Laboratory, Idaho National Laboratory and Argonne National Laboratory. Keywords: Engineering, Nuclear -- 0552, Engineering, Material Science -- 0794, Chemistry, Mathematics -- 0405, Computer Science -- 0984
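    The constrained minimization described above can be illustrated on a toy ideal-mixture system: species amounts n are adjusted to minimize the total Gibbs energy while the element (mass) balances A·n = b are enforced. The species set, the dimensionless standard-state potentials, and the SLSQP solver below are illustrative assumptions, not details of the thesis code:

```python
import numpy as np
from scipy.optimize import minimize

species = ["H2", "O2", "H2O"]
mu0 = np.array([0.0, 0.0, -10.0])        # illustrative mu0/RT values
A = np.array([[2.0, 0.0, 2.0],           # H atoms per mole of each species
              [0.0, 2.0, 1.0]])          # O atoms per mole of each species
b = np.array([2.0, 1.0])                 # total H and O in the system

def gibbs(n):
    """Dimensionless ideal-mixture Gibbs energy G/RT."""
    N = n.sum()
    return float(n @ (mu0 + np.log(n / N)))

res = minimize(gibbs, x0=np.array([0.9, 0.45, 0.1]),   # feasible start
               method="SLSQP",
               bounds=[(1e-10, None)] * 3,
               constraints=[{"type": "eq", "fun": lambda n: A @ n - b}])
n_eq = res.x
```

    At the minimum the stoichiometric H/O mixture is almost entirely H2O and the mass-balance residuals are satisfied to solver tolerance; the solver described above instead works through component chemical-potential estimates, which reduces the dimensionality and accelerates convergence.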

  11. 33 CFR 45.1 - Enlistment of personnel.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... to the Uniform Code of Military Justice. (b) Any person desiring to enlist in the Coast Guard should... references, employers, school authorities and physical and mental examinations. Concealment of any fact... enlistment may subject the applicant to criminal penalties under the Uniform Code of Military Justice and/or...

  12. 33 CFR 45.1 - Enlistment of personnel.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... to the Uniform Code of Military Justice. (b) Any person desiring to enlist in the Coast Guard should... references, employers, school authorities and physical and mental examinations. Concealment of any fact... enlistment may subject the applicant to criminal penalties under the Uniform Code of Military Justice and/or...

  13. 33 CFR 45.1 - Enlistment of personnel.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... to the Uniform Code of Military Justice. (b) Any person desiring to enlist in the Coast Guard should... references, employers, school authorities and physical and mental examinations. Concealment of any fact... enlistment may subject the applicant to criminal penalties under the Uniform Code of Military Justice and/or...

  14. 33 CFR 45.1 - Enlistment of personnel.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... to the Uniform Code of Military Justice. (b) Any person desiring to enlist in the Coast Guard should... references, employers, school authorities and physical and mental examinations. Concealment of any fact... enlistment may subject the applicant to criminal penalties under the Uniform Code of Military Justice and/or...

  15. 33 CFR 45.1 - Enlistment of personnel.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... to the Uniform Code of Military Justice. (b) Any person desiring to enlist in the Coast Guard should... references, employers, school authorities and physical and mental examinations. Concealment of any fact... enlistment may subject the applicant to criminal penalties under the Uniform Code of Military Justice and/or...

  16. Cratering at the Icy Satellites: Experimental Insights

    NASA Astrophysics Data System (ADS)

    Bruck Syal, M.; Schultz, P. H.

    2013-12-01

    Impact cratering processes play a central role in shaping the evolution of icy satellites and in guiding interpretations of various geologic features at these bodies. Accurate reconstruction of icy satellite histories depends in large part upon observed impact crater size-frequency distributions. Determining the extent of impact-induced thermal processing and the retention rates for impact-delivered materials of interest, e.g. organics, at these outer solar system moons is of fundamental importance for assessing their habitability and explaining differing geophysical histories. Hence, knowledge of how the impact process operates in ices or ice-rich materials is critically important. Recent progress in the development of water equations of state, coupled with increasingly efficient 3-D hydrocode calculations, has been used to construct careful numerical studies of melt and vapor generation for water ice targets. Complementary to this approach is experimental work to constrain the effects of differing ice target conditions, including porosity, rock mass fraction, and impact angle. Here we report on results from hypervelocity impact experiments (v~5.5 km/s) into water ice targets, performed at the NASA Ames Vertical Gun Range (AVGR). The setup at the AVGR allows for the use of particulate targets, which is useful for examining the effects of target porosity. Photometry and geophysical modeling both suggest that regolith porosity at the icy satellites is significant. We use a combination of half-space and quarter-space geometries, enabling analysis of the impact-generated vapor plume (half-space geometry), along with shock wave and transient crater growth tracking in a cross-sectional view (quarter-space geometry). Evaluating the impact-generated vapor from porous (φ = 0.5) and non-porous water ice targets provides an extension to previously published vapor production results for dolomite and CO2 ice targets. 
For the case of a 90 degree impact into porous ice, we calculate that 0.6% of the initial kinetic energy of the impactor is partitioned into the internal energy of the vapor plume. This is slightly higher than values determined in prior studies for non-porous CO2 ice (0.2%) [Schultz, 1996]. As CO2 ice possesses a lower vaporization temperature than water ice, this effect strongly suggests a role for porosity in enhancing vaporization. This is expected, as the compaction of porous materials performs additional, irreversible PdV work on the target, causing enhanced partitioning of kinetic energy into internal energy. At oblique impact angles, plume morphology changes dramatically while vaporization is enhanced. Comparing shock wave velocity attenuation in porous materials, including mixes of materials (e.g., quartz sand and porous ice), to numerical results obtained from shock physics codes such as CTH, provides insight into how impacts into porous ice-rich materials can be most accurately numerically modeled.
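
    The quoted energy partitioning is easy to reproduce as a back-of-the-envelope check. The sketch below uses the partitioning fractions from the abstract, but the projectile mass is a hypothetical, AVGR-typical value, not the authors' actual experimental parameters:

```python
# Back-of-the-envelope energy partitioning for a hypervelocity impact.
# The impact speed (5.5 km/s) and fractions (0.6% porous H2O ice,
# 0.2% non-porous CO2 ice) come from the abstract; the projectile
# mass is an illustrative assumption.

def vapor_internal_energy(m_kg, v_ms, fraction):
    """Internal energy (J) of the vapor plume, given the fraction of
    impactor kinetic energy partitioned into it."""
    ke = 0.5 * m_kg * v_ms**2
    return fraction * ke

m = 0.3e-3           # kg, hypothetical projectile mass
v = 5.5e3            # m/s, impact speed from the abstract
ke = 0.5 * m * v**2  # ~4.5 kJ of impact kinetic energy

e_porous_ice = vapor_internal_energy(m, v, 0.006)  # porous H2O ice, 0.6%
e_co2_ice    = vapor_internal_energy(m, v, 0.002)  # non-porous CO2 ice, 0.2%

print(f"KE = {ke:.1f} J, porous ice vapor = {e_porous_ice:.2f} J, "
      f"CO2 ice vapor = {e_co2_ice:.2f} J")
```

    Even at fixed impact speed, the factor-of-three difference in partitioned energy between the two target types is what points to porosity-enhanced vaporization.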

  17. AX-GADGET: a new code for cosmological simulations of Fuzzy Dark Matter and Axion models

    NASA Astrophysics Data System (ADS)

    Nori, Matteo; Baldi, Marco

    2018-05-01

    We present a new module of the parallel N-Body code P-GADGET3 for cosmological simulations of light bosonic non-thermal dark matter, often referred to as Fuzzy Dark Matter (FDM). The dynamics of FDM features a highly non-linear Quantum Potential (QP) that suppresses the growth of structure at small scales. Most previous attempts at FDM simulation either evolved suppressed initial conditions, completely neglecting the dynamical effects of the QP throughout cosmic evolution, or resorted to numerically challenging full-wave solvers. AX-GADGET provides an interesting alternative, following the FDM evolution without impairing the overall performance. This is done by computing the QP acceleration through the Smoothed Particle Hydrodynamics (SPH) routines, with improved schemes to ensure precise and stable derivatives. As an extension of the P-GADGET3 code, it inherits all the additional physics modules implemented to date, opening a wide range of possibilities to constrain FDM models and explore their degeneracies with other physical phenomena. Simulations are compared with analytical predictions and results of other codes, validating the QP as a crucial player in structure formation at small scales.
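
    The quantum potential itself, Q = -(ħ²/2m²) ∇²√ρ / √ρ, can be illustrated in a few lines. The sketch below is a 1-D grid-based finite-difference evaluation in units with ħ = m = 1 (an assumption for illustration), not the SPH kernel-derivative estimator that AX-GADGET actually uses:

```python
import numpy as np

# Quantum potential Q = -(hbar^2 / 2 m^2) * laplacian(sqrt(rho)) / sqrt(rho),
# evaluated on a 1-D grid with hbar = m = 1 (illustrative units).
# For a Gaussian rho = exp(-x^2), sqrt(rho) = exp(-x^2/2) and the
# analytic result is Q(x) = -(x^2 - 1)/2, i.e. Q(0) = 0.5, Q(1) = 0.

x = np.linspace(-5.0, 5.0, 2001)
dx = x[1] - x[0]
rho = np.exp(-x**2)          # Gaussian density profile
s = np.sqrt(rho)

# Second-order central difference for the Laplacian of sqrt(rho)
lap = np.empty_like(s)
lap[1:-1] = (s[2:] - 2.0 * s[1:-1] + s[:-2]) / dx**2
lap[0], lap[-1] = lap[1], lap[-2]

Q = -0.5 * lap / s

i0 = len(x) // 2             # grid point at x = 0
print(f"Q(0) = {Q[i0]:.4f} (analytic value 0.5)")
```

    Because Q depends on the curvature of √ρ rather than on ρ itself, it is large wherever the density varies on small scales, which is exactly why it suppresses small-scale structure growth.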

  18. Development and preliminary verification of the 3D core neutronic code: COCO

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, H.; Mo, K.; Li, W.

    With its recent booming economic growth and the environmental concerns that follow, China is proactively pushing forward nuclear power development and encouraging the tapping of clean energy. Under this situation, CGNPC, as one of the largest energy enterprises in China, is planning to develop its own nuclear technology in order to support the growing number of nuclear plants either under construction or in operation. This paper introduces the recent progress in software development at CGNPC. The focus is placed on the physical models and preliminary verification results from the recent development of the 3D core neutronic code COCO. In the COCO code, the non-linear Green's function method is employed to calculate the neutron flux. In order to use the discontinuity factor, the Neumann (second kind) boundary condition is utilized in the Green's function nodal method. Additionally, the COCO code includes the necessary physical models, e.g. a single-channel thermal-hydraulic module, a burnup module, a pin power reconstruction module, and a cross-section interpolation module. The preliminary verification results show that the COCO code is sufficient for reactor core design and analysis for pressurized water reactors (PWRs). (authors)
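
    The cross-section interpolation module mentioned above does a generic job that can be sketched briefly: few-group macroscopic cross sections are tabulated against core state parameters and looked up at local conditions. The grid variables and all numerical values below are hypothetical, not COCO data:

```python
import numpy as np

# Sketch of a cross-section interpolation module: Sigma_a is tabulated
# against burnup and boron concentration (hypothetical grid and values)
# and interpolated bilinearly at the local core conditions.

burnup = np.array([0.0, 10.0, 20.0])        # GWd/tU
boron  = np.array([0.0, 600.0, 1200.0])     # ppm
sigma_a = np.array([[0.100, 0.104, 0.108],  # 1/cm, made-up values
                    [0.095, 0.099, 0.103],
                    [0.090, 0.094, 0.098]])

def interp_xs(table, xgrid, ygrid, xq, yq):
    """Bilinear interpolation of a tabulated cross section."""
    i = min(max(np.searchsorted(xgrid, xq) - 1, 0), len(xgrid) - 2)
    j = min(max(np.searchsorted(ygrid, yq) - 1, 0), len(ygrid) - 2)
    tx = (xq - xgrid[i]) / (xgrid[i + 1] - xgrid[i])
    ty = (yq - ygrid[j]) / (ygrid[j + 1] - ygrid[j])
    return ((1 - tx) * (1 - ty) * table[i, j]
            + tx * (1 - ty) * table[i + 1, j]
            + (1 - tx) * ty * table[i, j + 1]
            + tx * ty * table[i + 1, j + 1])

xs = interp_xs(sigma_a, burnup, boron, 5.0, 300.0)
print(f"Sigma_a(5 GWd/tU, 300 ppm) = {xs:.5f} 1/cm")
```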

  19. 3D Multispecies Nonlinear Perturbative Particle Simulation of Intense Nonneutral Particle Beams (Research supported by the Department of Energy and the Short Pulse Spallation Source Project and LANSCE Division of LANL.)

    NASA Astrophysics Data System (ADS)

    Qin, Hong; Davidson, Ronald C.; Lee, W. Wei-Li

    1999-11-01

    The Beam Equilibrium Stability and Transport (BEST) code, a 3D multispecies nonlinear perturbative particle simulation code, has been developed to study collective effects in intense charged particle beams described self-consistently by the Vlasov-Maxwell equations. A Darwin model is adopted for transverse electromagnetic effects. As a 3D multispecies perturbative particle simulation code, it provides several unique capabilities. Since the simulation particles are used to simulate only the perturbed distribution function and self-fields, the simulation noise is reduced significantly. The perturbative approach also enables the code to investigate different physics effects separately, as well as simultaneously. The code can be easily switched between linear and nonlinear operation, and used to study both linear stability properties and nonlinear beam dynamics. These features, combined with 3D and multispecies capabilities, provide an effective tool to investigate the electron-ion two-stream instability, periodically focused solutions in alternating focusing fields, and many other important problems in nonlinear beam dynamics and accelerator physics. Applications to the two-stream instability are presented.
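
    The noise-reduction claim of the perturbative (δf) approach can be demonstrated with a toy 1-D example. The sketch below is a generic δf estimator, not BEST's actual algorithm; the distribution and perturbation amplitude are made up. Both schemes estimate the same small perturbed moment with the same number of particles:

```python
import numpy as np

# Toy delta-f demonstration: f(x) = f0 + df on [0, 2*pi) with
# f0 = 1/(2*pi) and df = (eps/(2*pi)) * cos(x), eps << 1.
# Both schemes estimate A = integral f(x) cos(x) dx = eps/2.

rng = np.random.default_rng(0)
eps, n = 0.01, 100_000

# delta-f: particles sample f0, each carries weight w = df/f0 = eps*cos(x);
# the statistical error scales like eps/sqrt(n).
x = rng.uniform(0.0, 2.0 * np.pi, n)
w = eps * np.cos(x)
a_deltaf = np.mean(w * np.cos(x))

# full-f: particles sample f itself (rejection sampling), unit weight;
# the statistical error scales like 1/sqrt(n), independent of eps.
xp = rng.uniform(0.0, 2.0 * np.pi, 4 * n)
keep = rng.uniform(0.0, 1.0, 4 * n) < (1.0 + eps * np.cos(xp)) / (1.0 + eps)
xf = xp[keep][:n]
a_fullf = np.mean(np.cos(xf))

print(f"exact {eps/2:.5f}  delta-f {a_deltaf:.5f}  full-f {a_fullf:.5f}")
```

    For eps = 0.01 the δf error is roughly a hundred times smaller than the full-f error at equal particle count, which is the essential advantage of simulating only the perturbed distribution.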

  20. Meanline Analysis of Turbines with Choked Flow in the Object-Oriented Turbomachinery Analysis Code

    NASA Technical Reports Server (NTRS)

    Hendricks, Eric S.

    2016-01-01

    The Object-Oriented Turbomachinery Analysis Code (OTAC) is a new meanline/streamline turbomachinery modeling tool being developed at NASA GRC. During the development process, a limitation of the code was discovered in relation to the analysis of choked flow in axial turbines. This paper describes the relevant physics for choked flow as well as the changes made to OTAC to enable analysis in this flow regime.

  1. A domain specific language for performance portable molecular dynamics algorithms

    NASA Astrophysics Data System (ADS)

    Saunders, William Robert; Grant, James; Müller, Eike Hermann

    2018-03-01

    Developers of Molecular Dynamics (MD) codes face significant challenges when adapting existing simulation packages to new hardware. In a continuously diversifying hardware landscape it becomes increasingly difficult for scientists to be experts both in their own domain (physics/chemistry/biology) and specialists in the low level parallelisation and optimisation of their codes. To address this challenge, we describe a "Separation of Concerns" approach for the development of parallel and optimised MD codes: the science specialist writes code at a high abstraction level in a domain specific language (DSL), which is then translated into efficient computer code by a scientific programmer. In a related context, an abstraction for the solution of partial differential equations with grid based methods has recently been implemented in the (Py)OP2 library. Inspired by this approach, we develop a Python code generation system for molecular dynamics simulations on different parallel architectures, including massively parallel distributed memory systems and GPUs. We demonstrate the efficiency of the auto-generated code by studying its performance and scalability on different hardware and compare it to other state-of-the-art simulation packages. With growing data volumes the extraction of physically meaningful information from the simulation becomes increasingly challenging and requires equally efficient implementations. A particular advantage of our approach is the easy expression of such analysis algorithms. We consider two popular methods for deducing the crystalline structure of a material from the local environment of each atom, show how they can be expressed in our abstraction and implement them in the code generation framework.
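
    The "separation of concerns" idea can be sketched in a few lines: the domain scientist writes only a per-pair kernel, and a generic executor owns the loop structure, which a real code generator would emit as optimized C/OpenMP/CUDA rather than run as a Python loop. The mini-framework below is hypothetical and is not the actual API of the paper's code generation system:

```python
import itertools

# Hypothetical mini-framework: the scientist supplies a per-pair kernel,
# the framework owns the pair loop.  In a real DSL the loop would be
# generated and compiled for the target architecture.

def pairwise_execute(positions, kernel, accumulator=0.0):
    """Apply kernel(r2, acc) to every unordered particle pair."""
    acc = accumulator
    for (xi, yi, zi), (xj, yj, zj) in itertools.combinations(positions, 2):
        r2 = (xi - xj)**2 + (yi - yj)**2 + (zi - zj)**2
        acc = kernel(r2, acc)
    return acc

# Domain-level code: a Lennard-Jones energy kernel (eps = sigma = 1)
def lj_energy_kernel(r2, acc):
    inv6 = 1.0 / r2**3
    return acc + 4.0 * (inv6 * inv6 - inv6)

# Two particles at the LJ minimum separation r = 2**(1/6): energy is -1
pos = [(0.0, 0.0, 0.0), (2.0**(1.0 / 6.0), 0.0, 0.0)]
e = pairwise_execute(pos, lj_energy_kernel)
print(f"LJ energy = {e:.6f}")
```

    The point of the abstraction is that `lj_energy_kernel` never mentions parallelism: swapping the executor for a distributed-memory or GPU backend leaves the science code untouched.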

  2. HEPLIB `91: International users meeting on the support and environments of high energy physics computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnstad, H.

    The purpose of this meeting is to discuss the current and future HEP computing support and environments from the perspective of new horizons in accelerator, physics, and computing technologies. Topics of interest to the meeting include (but are not limited to): the forming of the HEPLIB world user group for High Energy Physics computing; mandate, desirables, coordination, organization, funding; user experience, international collaboration; the roles of national labs, universities, and industry; range of software, Monte Carlo, mathematics, physics, interactive analysis, text processors, editors, graphics, data base systems, code management tools; program libraries, frequency of updates, distribution; distributed and interactive computing, data base systems, user interface, UNIX operating systems, networking, compilers, Xlib, X-Graphics; documentation, updates, availability, distribution; code management in large collaborations, keeping track of program versions; and quality assurance, testing, conventions, standards.

  4. Nonuniform code concatenation for universal fault-tolerant quantum computing

    NASA Astrophysics Data System (ADS)

    Nikahd, Eesa; Sedighi, Mehdi; Saheb Zamani, Morteza

    2017-09-01

    Using transversal gates is a straightforward and efficient technique for fault-tolerant quantum computing. Since transversal gates alone cannot be computationally universal, they must be combined with other approaches such as magic state distillation, code switching, or code concatenation to achieve universality. In this paper we propose an alternative approach for universal fault-tolerant quantum computing, mainly based on the code concatenation approach proposed in [T. Jochym-O'Connor and R. Laflamme, Phys. Rev. Lett. 112, 010505 (2014), 10.1103/PhysRevLett.112.010505], but in a nonuniform fashion. The proposed approach is described based on nonuniform concatenation of the 7-qubit Steane code with the 15-qubit Reed-Muller code, as well as the 5-qubit code with the 15-qubit Reed-Muller code, which lead to a 49-qubit and a 47-qubit code, respectively. These codes can correct any arbitrary single physical error while retaining the ability to perform a universal set of fault-tolerant gates, without using magic state distillation.

  5. 25 CFR 11.440 - Tampering with or fabricating physical evidence.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 25 Indians 1 2014-04-01 2014-04-01 false Tampering with or fabricating physical evidence. 11.440 Section 11.440 Indians BUREAU OF INDIAN AFFAIRS, DEPARTMENT OF THE INTERIOR LAW AND ORDER COURTS OF INDIAN OFFENSES AND LAW AND ORDER CODE Criminal Offenses § 11.440 Tampering with or fabricating physical evidence...

  6. 25 CFR 11.440 - Tampering with or fabricating physical evidence.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 25 Indians 1 2011-04-01 2011-04-01 false Tampering with or fabricating physical evidence. 11.440 Section 11.440 Indians BUREAU OF INDIAN AFFAIRS, DEPARTMENT OF THE INTERIOR LAW AND ORDER COURTS OF INDIAN OFFENSES AND LAW AND ORDER CODE Criminal Offenses § 11.440 Tampering with or fabricating physical evidence...

  7. Keep It Simple. Teaching Tips for Special Olympic Athletes.

    ERIC Educational Resources Information Center

    Johnston, Judith E.; And Others

    1996-01-01

    Physical educators can help Special Olympics athletes learn cross-lateral delivery techniques for bowling or throwing softballs by color coding the throwing arm and opposing foot. The article explains color coding, presenting teaching tips for both sports. A series of workshops on modifying exercise principles for individuals with physical…

  8. Computer codes developed and under development at Lewis

    NASA Technical Reports Server (NTRS)

    Chamis, Christos C.

    1992-01-01

    The objective of this summary is to provide a brief description of: (1) codes developed or under development at LeRC; and (2) the development status of IPACS with some typical early results. The computer codes that have been developed and/or are under development at LeRC are listed in the accompanying charts. This list includes: (1) the code acronym; (2) select physics descriptors; (3) current enhancements; and (4) present (9/91) code status with respect to its availability and documentation. The computer codes list is grouped by related functions such as: (1) composite mechanics; (2) composite structures; (3) integrated and 3-D analysis; (4) structural tailoring; and (5) probabilistic structural analysis. These codes provide a broad computational simulation infrastructure (technology base-readiness) for assessing the structural integrity/durability/reliability of propulsion systems. These codes serve two other very important functions: they provide an effective means of technology transfer; and they constitute a depository of corporate memory.

  9. Vectorized Monte Carlo methods for reactor lattice analysis

    NASA Technical Reports Server (NTRS)

    Brown, F. B.

    1984-01-01

    Some of the new computational methods and equivalent mathematical representations of physics models used in the MCV code, a vectorized continuous-energy Monte Carlo code for use on the CYBER-205 computer, are discussed. While the principal application of MCV is the neutronics analysis of repeating reactor lattices, the new methods used in MCV should be generally useful for vectorizing Monte Carlo for other applications. For background, a brief overview of the vector processing features of the CYBER-205 is included, followed by a discussion of the fundamentals of Monte Carlo vectorization. The physics models used in the MCV vectorized Monte Carlo code are then summarized. The new methods used in scattering analysis are presented along with details of several key, highly specialized computational routines. Finally, speedups relative to CDC-7600 scalar Monte Carlo are discussed.
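
    The core idea of Monte Carlo vectorization is to advance a whole bank of histories with array operations instead of one particle at a time. A NumPy sketch (NumPy arrays standing in for the CYBER-205 vector pipeline; the slab thickness and cross section are made-up numbers, not MCV inputs):

```python
import numpy as np

# Vectorized free-flight sampling: every neutron's flight distance is
# drawn at once with array operations rather than a per-history loop.
# The slab is purely absorbing, so the transmitted fraction must
# approach exp(-Sigma_t * L).

rng = np.random.default_rng(42)
n = 200_000
sigma_t = 1.0     # 1/cm, total macroscopic cross section (illustrative)
L = 2.0           # cm, slab thickness (illustrative)

# Distance to first collision for the whole particle bank at once;
# 1 - random() lies in (0, 1], so the log is always finite.
d = -np.log(1.0 - rng.random(n)) / sigma_t

transmitted = np.count_nonzero(d > L) / n
print(f"MC transmission {transmitted:.4f}  vs exp(-2) = {np.exp(-2.0):.4f}")
```

    Real vectorized transport codes extend this to collision analysis by regrouping surviving particles into dense arrays each "event", which is the main restructuring effort relative to scalar history-based Monte Carlo.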

  10. Majorana fermion surface code for universal quantum computation

    DOE PAGES

    Vijay, Sagar; Hsieh, Timothy H.; Fu, Liang

    2015-12-10

    In this study, we introduce an exactly solvable model of interacting Majorana fermions realizing Z 2 topological order with a Z 2 fermion parity grading and lattice symmetries permuting the three fundamental anyon types. We propose a concrete physical realization by utilizing quantum phase slips in an array of Josephson-coupled mesoscopic topological superconductors, which can be implemented in a wide range of solid-state systems, including topological insulators, nanowires, or two-dimensional electron gases, proximitized by s-wave superconductors. Our model finds a natural application as a Majorana fermion surface code for universal quantum computation, with a single-step stabilizer measurement requiring no physical ancilla qubits, increased error tolerance, and simpler logical gates than a surface code with bosonic physical qubits. We thoroughly discuss protocols for stabilizer measurements, encoding and manipulating logical qubits, and gate implementations.

  11. Parametric bicubic spline and CAD tools for complex targets shape modelling in physical optics radar cross section prediction

    NASA Astrophysics Data System (ADS)

    Delogu, A.; Furini, F.

    1991-09-01

    Increasing interest in radar cross section (RCS) reduction is placing new demands on theoretical, computational, and graphic techniques for calculating scattering properties of complex targets. In particular, computer codes capable of predicting the RCS of an entire aircraft at high frequency and of achieving RCS control with modest structural changes are becoming of paramount importance in stealth design. A computer code, evaluating the RCS of arbitrarily shaped metallic objects that are computer aided design (CAD) generated, and its validation with measurements carried out using ALENIA RCS test facilities are presented. The code, based on the physical optics method, is characterized by an efficient integration algorithm with error control, in order to contain the computer time within acceptable limits, and by an accurate parametric representation of the target surface in terms of bicubic splines.
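
    For the simplest possible target, the physical-optics machinery reduces to a surface integral one can check by hand. The sketch below treats a flat square plate at normal incidence with uniform midpoint quadrature, not the paper's error-controlled integration over CAD-generated spline patches; plate size and wavelength are illustrative:

```python
import numpy as np

# Physical-optics monostatic RCS of a flat, perfectly conducting square
# plate at normal incidence: sigma = (4*pi / lambda^2) * |I|^2 with
# I = surface integral of exp(i * 2 k . r).  For a flat plate at normal
# incidence the phase is constant, so the quadrature must reproduce the
# textbook result sigma = 4*pi*A^2/lambda^2.

lam = 0.03                 # m (10 GHz), illustrative
k = 2.0 * np.pi / lam
a = 0.10                   # m, plate side; area A = a^2

n = 200
x = (np.arange(n) + 0.5) * (a / n)       # midpoint-rule nodes
X, Y = np.meshgrid(x, x)
Z = np.zeros_like(X)                     # flat plate, z = 0

phase = np.exp(1j * 2.0 * k * Z)         # 2 k.r for incidence along -z
I = np.sum(phase) * (a / n)**2           # surface quadrature
sigma = 4.0 * np.pi * np.abs(I)**2 / lam**2

sigma_exact = 4.0 * np.pi * a**4 / lam**2
print(f"PO quadrature sigma = {sigma:.3f} m^2, exact = {sigma_exact:.3f} m^2")
```

    On a curved spline-parametrized surface Z varies across the patch, the integrand oscillates, and adaptive quadrature with error control becomes necessary, which is exactly the regime the paper's integration algorithm targets.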

  12. SU-A-210-01: Why Should We Learn Radiation Oncology Billing?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, H.

    The purpose of this student annual meeting is to address topics that are becoming more relevant to medical physicists, but are not frequently addressed, especially for students and trainees just entering the field. The talk is divided into two parts: medical billing and regulations. Hsinshun Wu – Why should we learn radiation oncology billing? Many medical physicists do not like to be involved with medical billing or coding during their career. They believe billing is not their responsibility and sometimes they even refuse to participate in the billing process if given the chance. This presentation will talk about a physicist's long career and share his own experience that knowing medical billing is not only important and necessary for every young medical physicist, but that good billing knowledge could provide a valuable contribution to his/her medical physics development. Learning Objectives: The audience will learn the basic definition of Current Procedural Terminology (CPT) codes performed in a Radiation Oncology Department. Understand the differences between hospital coding and physician-based or freestanding coding. Apply proper CPT coding for each Radiation Oncology procedure. Each procedure with its specific CPT code will be discussed in detail. The talk will focus on the process of care and use of actual workflow to understand each CPT code. Example coding of a typical Radiation Oncology procedure. Special procedure coding such as brachytherapy, proton therapy, radiosurgery, and SBRT. Maryann Abogunde – Medical physics opportunities at the Nuclear Regulatory Commission (NRC) The NRC's responsibilities include the regulation of medical uses of byproduct (radioactive) materials and oversight of medical use end-users (licensees) through a combination of regulatory requirements, licensing, safety oversight including inspection and enforcement, operational experience evaluation, and regulatory support activities.
    This presentation will explore the career options for medical physicists in the NRC, how the NRC interacts with clinical medical physicists, and a physicist's experience as a regulator. Learning Objectives: Explore non-clinical career pathways for medical physics students and trainees at the Nuclear Regulatory Commission. Overview of NRC medical applications and medical use regulations. Understand the skills needed for physicists as regulators. Abogunde is funded to attend the meeting by her employer, the NRC.

  13. ACON: a multipurpose production controller for plasma physics codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Snell, C.

    1983-01-01

    ACON is a BCON controller designed to run large production codes on the CTSS Cray-1 or the LTSS 7600 computers. ACON can also be operated interactively, with input from the user's terminal. The controller can run one code or a sequence of up to ten codes during the same job. Options are available to get and save Mass storage files, to perform Historian file updating operations, to compile and load source files, and to send out print and film files. Special features include the ability to retry after Mass failures, backup options for saving files, startup messages for the various codes, and the ability to reserve specified amounts of computer time after successive code runs. ACON's flexibility and power make it useful for running a number of different production codes.

  14. Comparison of DAC and MONACO DSMC Codes with Flat Plate Simulation

    NASA Technical Reports Server (NTRS)

    Padilla, Jose F.

    2010-01-01

    Various implementations of the direct simulation Monte Carlo (DSMC) method exist in academia, government and industry. By comparing implementations, deficiencies and merits of each can be discovered. This document reports comparisons between DSMC Analysis Code (DAC) and MONACO. DAC is NASA's standard DSMC production code and MONACO is a research DSMC code developed in academia. These codes have various differences; in particular, they employ distinct computational grid definitions. In this study, DAC and MONACO are compared by having each simulate a blunted flat plate wind tunnel test, using an identical volume mesh. Simulation expense and DSMC metrics are compared. In addition, flow results are compared with available laboratory data. Overall, this study revealed that both codes, excluding grid adaptation, performed similarly. For parallel processing, DAC was generally more efficient. As expected, code accuracy was mainly dependent on physical models employed.

  15. Current and anticipated uses of thermalhydraulic and neutronic codes at PSI

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aksan, S.N.; Zimmermann, M.A.; Yadigaroglu, G.

    1997-07-01

    The thermalhydraulic and/or neutronic codes in use at PSI mainly provide the capability to perform deterministic safety analysis for Swiss NPPs and also serve as analysis tools for experimental facilities for LWR and ALWR simulations. In relation to these applications, physical model development and improvements, and assessment of the codes, are also essential components of the activities. In this paper, a brief overview is provided of the thermalhydraulic and/or neutronic codes used for safety analysis of LWRs at PSI, and also of some experiences and applications with these codes. Based on these experiences, additional assessment needs are indicated, together with some model improvement needs. The future needs, which could be used to specify both the development of a new code and the improvement of available codes, are summarized.

  16. Decoy state method for quantum cryptography based on phase coding into faint laser pulses

    NASA Astrophysics Data System (ADS)

    Kulik, S. P.; Molotkov, S. N.

    2017-12-01

    We discuss the photon number splitting (PNS) attack in systems of quantum cryptography with phase coding. It is shown that this attack, as well as the structural equations for the PNS attack for phase encoding, differs physically from the analogous attack applied to polarization coding. As far as we know, in practice all processing of experimental data to date has been done for phase coding but using formulas for polarization coding. This can lead to inadequate results for the length of the secret key. These calculations are important for the correct interpretation of the results, especially where the secrecy criterion in quantum cryptography is concerned.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Prasad, M.K.; Kershaw, D.S.; Shaw, M.J.

    The authors present detailed features of the ICF3D hydrodynamics code used for inertial fusion simulations. This code is intended to be a state-of-the-art upgrade of the well-known fluid code LASNEX. ICF3D employs discontinuous finite elements on a discrete unstructured mesh consisting of a variety of 3D polyhedra including tetrahedra, prisms, and hexahedra. The authors discuss how the Roe-averaged second-order convection is applied on the discrete elements, and how the C++ coding interface has helped to simplify implementing the many physics and numerics modules within the code package. They emphasize the virtues of object-oriented design in large-scale projects such as ICF3D.

  18. Data Parallel Line Relaxation (DPLR) Code User Manual: Acadia - Version 4.01.1

    NASA Technical Reports Server (NTRS)

    Wright, Michael J.; White, Todd; Mangini, Nancy

    2009-01-01

    Data-Parallel Line Relaxation (DPLR) code is a computational fluid dynamic (CFD) solver that was developed at NASA Ames Research Center to help mission support teams generate high-value predictive solutions for hypersonic flow field problems. The DPLR Code Package is an MPI-based, parallel, full three-dimensional Navier-Stokes CFD solver with generalized models for finite-rate reaction kinetics, thermal and chemical non-equilibrium, accurate high-temperature transport coefficients, and ionized flow physics incorporated into the code. DPLR also includes a large selection of generalized realistic surface boundary conditions and links to enable loose coupling with external thermal protection system (TPS) material response and shock layer radiation codes.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Platania, P., E-mail: platania@ifp.cnr.it; Figini, L.; Farina, D.

    The purpose of this work is the optical modeling and physical performance evaluation of the JT-60SA ECRF launcher system. The beams have been simulated with the electromagnetic code GRASP® and used as input for ECCD calculations performed with the beam tracing code GRAY, which is capable of modeling propagation, absorption, and current drive of an EC Gaussian beam with general astigmatism. Full details of the optical analysis have been taken into account to model the launched beams. Inductive and advanced reference scenarios have been analysed for physical evaluations over the full poloidal and toroidal steering ranges for two slightly different layouts of the launcher system.

  20. An Introduction to Quantum Theory

    NASA Astrophysics Data System (ADS)

    Greensite, Jeff

    2017-02-01

    Written in a lucid and engaging style, the author takes readers from an overview of classical mechanics and the historical development of quantum theory through to advanced topics. The mathematical aspects of quantum theory necessary for a firm grasp of the subject are developed in the early chapters, but an effort is made to motivate that formalism on physical grounds. Including animated figures and their respective Mathematica® codes, this book provides a complete and comprehensive text for students in physics, maths, chemistry and engineering needing an accessible introduction to quantum mechanics. Supplementary Mathematica codes available within Book Information
