Science.gov

Sample records for probabilistic shock initiation

  1. Shock initiation of nitromethane

    SciTech Connect

    Yoo, C.S.; Holmes, N.C.

    1993-12-31

    The shock initiation processes of nitromethane have been examined using fast time-resolved emission spectroscopy at a two-stage gas gun. A broad but strong emission has been observed in a spectral range between 350 and 700 nm from shocked nitromethane above 9 GPa. The temporal profile suggests that shocked nitromethane detonates through three characteristic periods, namely an induction period, a shock initiation period, and a thermal explosion period. This paper discusses the temporal and chemical characteristics of these periods and presents the temperature of the shock-detonating nitromethane at pressures between 9 and 15 GPa.

  2. Augmenting Probabilistic Risk Assessment with Malevolent Initiators

    SciTech Connect

    Curtis Smith; David Schwieder

    2011-11-01

    As commonly practiced, the use of probabilistic risk assessment (PRA) in nuclear power plants only considers accident initiators such as natural hazards, equipment failures, and human error. Malevolent initiators are ignored in PRA, but are considered the domain of physical security, which uses vulnerability assessment based on an officially specified threat (design basis threat). This paper explores the implications of augmenting and extending existing PRA models by considering new and modified scenarios resulting from malevolent initiators. Teaming the augmented PRA models with conventional vulnerability assessments can cost-effectively enhance security of a nuclear power plant. This methodology is useful for operating plants, as well as in the design of new plants. For the methodology, we have proposed an approach that builds on and extends the practice of PRA for nuclear power plants for security-related issues. Rather than only considering 'random' failures, we demonstrated a framework that is able to represent and model malevolent initiating events and associated plant impacts.
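The augmented PRA models described above still rest on standard event-tree arithmetic; a minimal sketch, with a hypothetical malevolent-initiator frequency and barrier failure probabilities that are not values from the paper:

```python
# Minimal event-tree sketch: an initiating-event frequency is propagated
# through conditional failure probabilities of mitigating systems to give
# a sequence frequency, as in basic PRA. All numbers are hypothetical.

def sequence_frequency(initiator_freq, branch_failure_probs):
    """Frequency of the sequence in which every listed mitigating system fails."""
    freq = initiator_freq
    for p in branch_failure_probs:
        freq *= p
    return freq

# Hypothetical malevolent initiator at 1e-4/yr, two barriers with conditional
# failure probabilities 0.1 and 0.05 given the attack scenario.
f = sequence_frequency(1e-4, [0.1, 0.05])
```

The same product rule applies whether the initiator is a random equipment failure or a modeled attack scenario; only the source of the branch probabilities changes.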

  3. Shock Initiation of Damaged Explosives

    SciTech Connect

    Chidester, S K; Vandersall, K S; Tarver, C M

    2009-10-22

    Explosive and propellant charges are subjected to various mechanical and thermal insults that can increase their sensitivity over the course of their lifetimes. To quantify this effect, shock initiation experiments were performed on mechanically and thermally damaged LX-04 (85% HMX, 15% Viton by weight) and PBX 9502 (95% TATB, 5% Kel-F by weight) to obtain in-situ manganin pressure gauge data and run distances to detonation at various shock pressures. We report the behavior of the HMX-based explosive LX-04 that was damaged mechanically by applying a compressive load of 600 psi for 20,000 cycles, thus creating many small narrow cracks, or by cutting wedge-shaped parts that were then loosely reassembled, thus creating a few large cracks. The thermally damaged LX-04 charges were heated to 190 °C for long enough for the beta to delta solid-solid phase transition to occur, and then cooled to ambient temperature. Mechanically damaged LX-04 exhibited only slightly increased shock sensitivity, while thermally damaged LX-04 was much more shock sensitive. Similarly, the insensitive explosive PBX 9502 was mechanically damaged using the same two techniques. Since PBX 9502 does not undergo a solid-solid phase transition but does undergo irreversible or 'ratchet' growth when thermally cycled, thermal damage to PBX 9502 was induced by this procedure. As for LX-04, the thermally damaged PBX 9502 demonstrated a greater shock sensitivity than mechanically damaged PBX 9502. The Ignition and Growth reactive flow model calculated the increased sensitivities by igniting more damaged LX-04 and PBX 9502 near the shock front based on the measured densities (porosities) of the damaged charges.
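Run distance to detonation versus shock pressure, as measured here, is conventionally summarized on a Pop plot, which is close to linear in log-log coordinates. A minimal least-squares sketch, using invented (pressure, run-distance) pairs rather than the measured LX-04 or PBX 9502 data:

```python
import math

# Least-squares fit of log10(run distance) = a + b*log10(pressure), the
# conventional Pop-plot form. Data pairs below are invented for illustration.

def fit_pop_plot(pressures_gpa, run_distances_mm):
    xs = [math.log10(p) for p in pressures_gpa]
    ys = [math.log10(x) for x in run_distances_mm]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return a, b  # log10(x_run) = a + b*log10(P)

# Damage shifts this line toward shorter run distances at a given pressure;
# the fitting procedure itself is unchanged.
a, b = fit_pop_plot([3.0, 5.0, 8.0], [20.0, 8.0, 3.0])
```

The negative slope b expresses the basic trend the gauge data quantify: higher input pressure, shorter run to detonation.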

  4. Shock Initiation of Heterogeneous Explosives

    SciTech Connect

    Reaugh, J E

    2004-05-10

    The fundamental picture that shock initiation in heterogeneous explosives is caused by the linking of hot spots formed at inhomogeneities was put forward by several researchers in the 1950s and 1960s, and has been refined more recently. Our work uses the computer hardware and software developed in the Advanced Simulation and Computing (ASC) program of the U.S. Department of Energy to explicitly include heterogeneities at the scale of the explosive grains and to calculate the consequences of realistic although approximate models of explosive behavior. Our simulations are performed with ALE-3D, a three-dimensional, elastic-plastic-hydrodynamic Arbitrary Lagrange-Euler finite-difference program, which includes chemical kinetics and heat transfer, and which is under development at this laboratory. We developed the parameter values for a reactive-flow model to describe the non-ideal detonation behavior of an HMX-based explosive from the results of grain-scale simulations. In doing so, we reduced the number of free parameters that are inferred from comparison with experiment to a single one - the characteristic defect dimension. We also performed simulations of the run to detonation in small volumes of explosive. These simulations illustrate the development of the reaction zone and the acceleration of the shock front as the flame fronts start from hot spots, grow, and interact behind the shock front. In this way, our grain-scale simulations can also connect to continuum experiments directly.

  5. Revisiting Shock Initiation Modeling of Homogeneous Explosives

    NASA Astrophysics Data System (ADS)

    Partom, Yehuda

    2013-04-01

    Shock initiation of homogeneous explosives has been a subject of research since the 1960s, with neat and sensitized nitromethane as the main materials for experiments. A shock initiation model of homogeneous explosives was established in the early 1960s. It involves a thermal explosion event at the shock entrance boundary, which develops into a superdetonation that overtakes the initial shock. In recent years, Sheffield and his group, using accurate experimental tools, were able to observe details of the buildup of the superdetonation. There are many papers on modeling shock initiation of heterogeneous explosives, but there are only a few papers on modeling shock initiation of homogeneous explosives. In this article, bulk reaction reactive flow equations are used to model homogeneous shock initiation in an attempt to reproduce experimental data of Sheffield and his group. It was possible to reproduce the main features of the shock initiation process, including thermal explosion, superdetonation, input shock overtake, overdriven detonation after overtake, and the beginning of decay toward Chapman-Jouguet (CJ) detonation. The time to overtake (TTO) as a function of input pressure was also calculated and compared to the experimental TTO.

  6. Shock Initiation of Energetic Materials at Different Initial Temperatures

    SciTech Connect

    Urtiew, P A; Tarver, C M

    2005-01-14

    Shock initiation is one of the most important properties of energetic materials, which must transition to detonation exactly as intended when intentionally shocked and must not detonate when accidentally shocked. The development of manganin pressure gauges that are placed inside the explosive charge and record the buildup of pressure upon shock impact has greatly increased the knowledge of these reactive flows. These experimental data, together with similar data from electromagnetic particle velocity gauges, have allowed us to formulate the Ignition and Growth model of shock initiation and detonation in hydrodynamic computer codes for predictions of shock initiation scenarios that cannot be tested experimentally. An important problem in shock initiation of solid explosives is the change in sensitivity that occurs upon heating (or cooling). Experimental manganin pressure gauge records and the corresponding Ignition and Growth model calculations are presented for two solid explosives, LX-17 (92.5 % triaminotrinitrobenzene (TATB) with 7.5 % Kel-F binder) and LX-04 (85 % octahydro-1,3,5,7-tetranitro-1,3,5,7-tetrazine (HMX) with 15 % Viton binder) at several initial temperatures.

  7. Barrier experiment: Shock initiation under complex loading

    SciTech Connect

    Menikoff, Ralph

    2016-01-12

    The barrier experiments are a variant of the gap test; a detonation wave in a donor HE impacts a barrier and drives a shock wave into an acceptor HE. The question we ask is: what is the trade-off between the barrier material and the threshold barrier thickness required to prevent the acceptor from detonating? This can be viewed from the perspective of shock initiation of the acceptor subject to a complex pressure drive condition. Here we consider key factors which affect whether or not the acceptor undergoes a shock-to-detonation transition. These include the following: shock impedance matches for the donor detonation wave into the barrier and then the barrier shock into the acceptor, the pressure gradient behind the donor detonation wave, and the curvature of the detonation front in the donor. Numerical simulations are used to illustrate how these factors affect the reaction in the acceptor.
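In the small-amplitude limit, the impedance matches listed above reduce to the linear-acoustic transmission formula; a sketch with illustrative impedance values (real shock matching requires the full Hugoniots of donor, barrier, and acceptor, which the simulations in the paper use):

```python
# Linear-acoustic transmission: a pressure wave crossing from material 1
# (impedance Z1 = rho1*c1) into material 2 transmits P_t = P*2*Z2/(Z1 + Z2).
# Impedance values below are arbitrary illustrations in consistent units.

def transmitted_pressure(P, Z1, Z2):
    return P * 2.0 * Z2 / (Z1 + Z2)

# Detonation products (lower impedance) driving a shock into a steel-like
# barrier (higher impedance): the transmitted pressure exceeds the incident one.
P_barrier = transmitted_pressure(30.0, Z1=15.0, Z2=46.0)
```

Even this crude limit captures the direction of the effect: a higher-impedance barrier raises the pressure it receives from the donor, and a lower-impedance acceptor then receives a reduced shock from the barrier.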

  8. Simulation of shock-initiated ignition

    NASA Astrophysics Data System (ADS)

    Melguizo-Gavilanes, J.; Rezaeyan, N.; Lopez-Aoyagi, M.; Bauwens, L.

    2010-12-01

    The scenario of detonative ignition in shocked mixture is significant because it is a contributor to deflagration-to-detonation transition, for example following shock reflections. However, even in one dimension, simulation of ignition between a contact surface or a flame and a shock moving into a combustible mixture is difficult because of the singular nature of the initial conditions. Initially, as the shock starts moving into reactive mixture, the region filled with reactive mixture has zero thickness. On a fixed grid, the number of grid points between the shock and the contact surface increases as the shock moves away from the latter. Due to the initial lack of resolution in the region of interest, staircasing may occur, whereby the resulting plots consist of jumps between a few values over a few grid points, and these numerical artifacts are amplified by the chemistry, which is very sensitive to temperature, leading to unreliable results. The formulation is transformed, replacing time and space by time and space over time as the independent variables. This frame of reference corresponds to the self-similar formulation in which the non-reactive problem remains stationary, and the initial conditions are well-resolved. Additionally, a solution obtained from a short-time perturbation is used as the initial condition, at a time still short enough for the perturbation to be very accurate, but long enough so that there is sufficient resolution. The numerical solution to the transformed problem is obtained using an essentially non-oscillatory algorithm, which is adequate not only for the early part of the process, but also for the latter part, when chemistry leads to the appearance of a shock and eventually a detonation wave is formed. A validation study was performed and the results were compared with the literature for single-step Arrhenius chemistry. The method and its implementation were found to be effective.
Results are presented for values of activation energy ranging from mild to
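The single-step Arrhenius chemistry used in the validation study lends itself to a simple adiabatic induction-time estimate for the pre-runaway phase; a sketch with illustrative kinetic parameters, not those used in the paper:

```python
import math

# Explicit-Euler integration of the adiabatic induction phase,
# dT/dt = (q/cv)*A*exp(-Ea/(R*T)), for single-step Arrhenius kinetics.
# All parameter values below are illustrative placeholders.

def induction_time(T0, Ea_over_R=15000.0, A=1e9, q_over_cv=2000.0, dt=1e-8):
    """Time for the post-shock temperature to run away (here, to double)."""
    T, t = T0, 0.0
    while T < 2.0 * T0:
        T += dt * q_over_cv * A * math.exp(-Ea_over_R / T)
        t += dt
        if t > 1.0:  # safety cutoff for non-igniting cases
            break
    return t

# Hotter post-shock states ignite sooner; the strong temperature sensitivity
# of this rate is exactly what amplifies staircasing artifacts on fixed grids.
t_hot = induction_time(1200.0)
t_cold = induction_time(1000.0)
```

The exponential dependence on 1/T in the loop is the reason small grid-induced temperature errors translate into large errors in the predicted ignition time.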

  9. Shock Initiation Thresholds of Various Energetic Materials

    NASA Astrophysics Data System (ADS)

    Damm, David; Welle, Eric; Yarrington, Cole

    2013-06-01

    Shock initiation threshold data for several energetic materials have been analyzed for both short pulses and long, sustained shocks. In the limit of long-duration shocks, the critical pressure for initiation is governed by the balance between chemical energy release in the vicinity of hotspots and thermal dissipation, which cools the hotspot and can quench reactions. The observed trends in critical pressure from one material to the next are related to the thermophysical properties and chemical reaction kinetics of each material. Scaling analysis, combined with hydrocode simulations of collapsing pores, has confirmed these trends; however, large uncertainty in the reaction kinetics under shock loading prevents an accurate quantitative description of hotspot ignition. For a given pore diameter, scaling analysis allows a quick estimate of the temperature at which the reaction rate will exceed the rate of thermal dissipation. Using published thermophysical property data and reaction kinetics, we found that the trend in critical hotspot temperatures for several common materials (e.g. PETN, HMX, HNS, and TATB) matches the observed trend in initiation sensitivity. The hydrocode simulations of pore collapse provide a link between the critical temperature and the initial shock pressure. For these simulations we have used recently published QMD-based equations of state for the fully dense, crystalline phase and have included the effects of variable specific heat, viscous dissipation, and plastic work. These results will be presented and the need for physically meaningful reaction rates will be emphasized.
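The scaling estimate described above, in which Arrhenius heat release in a hotspot is balanced against conductive dissipation, can be sketched numerically; all property values below are illustrative placeholders, not the published data the authors used:

```python
import math

# Bisection for the temperature at which volumetric Arrhenius heat release
# rho*Q*A*exp(-Ea/(R*T)) first exceeds a conductive-loss scaling k*(T-T0)/r^2
# for a hotspot of radius r. Every constant here is a placeholder.

def critical_temperature(r, T0=300.0, k=0.4, rho=1800.0, Q=5e6,
                         A=1e12, Ea_over_R=18000.0):
    def net(T):
        generation = rho * Q * A * math.exp(-Ea_over_R / T)  # W/m^3
        dissipation = k * (T - T0) / r**2                    # W/m^3 (scaling)
        return generation - dissipation
    lo, hi = T0 + 1.0, 4000.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if net(mid) > 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Smaller hotspots dissipate heat faster, so they need a higher temperature
# before reaction outruns conduction.
T_small = critical_temperature(1e-7)   # 0.1 micron pore
T_large = critical_temperature(1e-5)   # 10 micron pore
```

Ranking materials by this critical temperature, with each material's own kinetics and conductivity inserted, is the kind of quick comparison the scaling analysis in the abstract refers to.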

  10. Multiple shock initiation of LX-17

    SciTech Connect

    Tarver, C.M.; Cook, T.M.; Urtiew, P.A.; Tao, W.C.

    1993-07-01

    The response of the insensitive TATB-based high explosive LX-17 to multiple shock impacts is studied experimentally in a four-inch gas gun using embedded manganin gauges and numerically using the Ignition and Growth reactive flow model of shock initiation and detonation. Pressure histories are reported for LX-17 cylinders which are subjected to sustained shock pulses followed by secondary compressions from shocks reflected from metal discs attached to the backs of the explosive targets. These measured and calculated pressure histories show that the threshold for hot spot growth in LX-17 is 7 GPa, that LX-17 can be dead pressed at slightly lower pressures, and that the reaction rates behind reflected shocks increase greatly as the impedance of the metal increases. A study of the response of LX-17 to the collision of two reacting, diverging shocks forming a Mach stem wave inside the LX-17 charge demonstrated that this interaction can result in a high pressure region of sufficient size and strength to cause detonation under certain conditions.

  11. Modeling shock initiation in Composition B

    SciTech Connect

    Murphy, M.J.; Lee, E.L.; Weston, A.M.; Williams, A.E.

    1993-05-01

    A hydrodynamic modeling study of the shock initiation behavior of Composition B explosive was performed using the "Ignition and Growth of Reaction in High Explosive" model developed at the Lawrence Livermore National Laboratory. The HE (heterogeneous explosive) responses were computed using the CALE and DYNA2D hydrocodes and then compared to experimental results. The data from several standard shock initiation and HE performance experiments were used to determine the parameters required for the model. Simulations of the wedge tests (Pop plots) and failure diameter tests were found to be sufficient for defining the ignition and growth parameters used in the two-term version of the computational model. These coefficients were then applied in the response analysis of several Composition B impact initiation experiments. The methodology used to determine the coefficients and the resulting range of useful application of the ignition and growth of reaction model are described.

  12. Initial Probabilistic Evaluation of Reactor Pressure Vessel Fracture with Grizzly and Raven

    SciTech Connect

    Spencer, Benjamin; Hoffman, William; Sen, Sonat; Rabiti, Cristian; Dickson, Terry; Bass, Richard

    2015-10-01

    The Grizzly code is being developed with the goal of creating a general tool that can be applied to study a variety of degradation mechanisms in nuclear power plant components. The first application of Grizzly has been to study fracture in embrittled reactor pressure vessels (RPVs). Grizzly can be used to model the thermal/mechanical response of an RPV under transient conditions that would be observed in a pressurized thermal shock (PTS) scenario. The global response of the vessel provides boundary conditions for local models of the material in the vicinity of a flaw. Fracture domain integrals are computed to obtain stress intensity factors, which can in turn be used to assess whether a fracture would initiate at a pre-existing flaw. These capabilities have been demonstrated previously. A typical RPV is likely to contain a large population of pre-existing flaws introduced during the manufacturing process. This flaw population is characterized statistically through probability density functions of the flaw distributions. The use of probabilistic techniques is necessary to assess the likelihood of crack initiation during a transient event. This report documents initial work to perform probabilistic analysis of RPV fracture during a PTS event using a combination of the RAVEN risk analysis code and Grizzly. This work is limited in scope, considering only a single flaw with deterministic geometry, but with uncertainty introduced in the parameters that influence fracture toughness. These results are benchmarked against equivalent models run in the FAVOR code. When fully developed, the RAVEN/Grizzly methodology for modeling probabilistic fracture in RPVs will provide a general capability that can be used to consider a wider variety of vessel and flaw conditions that are difficult to consider with current tools. 
In addition, this will provide access to advanced probabilistic techniques provided by RAVEN, including adaptive sampling and parallelism, which can dramatically
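The probabilistic step described above can be sketched as a Monte Carlo loop: fracture toughness is sampled from its uncertainty distribution and compared with the applied stress intensity factor from the deterministic flaw model. The distribution and load values below are illustrative, not FAVOR or Grizzly inputs:

```python
import random

# Monte Carlo estimate of the conditional probability of crack initiation:
# sample K_Ic from its uncertainty distribution and count cases where the
# applied stress intensity factor exceeds it. All values are illustrative.

def initiation_probability(K_applied, K_Ic_mean, K_Ic_std, n=100_000, seed=1):
    rng = random.Random(seed)
    failures = sum(1 for _ in range(n)
                   if rng.gauss(K_Ic_mean, K_Ic_std) < K_applied)
    return failures / n

# Deterministic flaw load of 60 (e.g. MPa*sqrt(m)) against toughness ~N(80, 10).
p = initiation_probability(K_applied=60.0, K_Ic_mean=80.0, K_Ic_std=10.0)
```

A full analysis would also sample flaw geometry and embrittlement parameters and use time-dependent K from the PTS transient, which is where adaptive sampling and parallelism become valuable.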

  13. Shock-initiation chemistry of nitroarenes

    SciTech Connect

    Davis, L.L.; Brower, K.R.

    1997-11-01

    The authors present evidence that the shock-initiation chemistry of nitroarenes is dominated by the intermolecular hydrogen transfer mechanism discussed previously. The acceleration by pressure, kinetic isotope effect, and product distribution are consistent with the bimolecular transition state rather than rate-determining C-N homolysis. GC-MS analysis of samples which were subjected to a shock wave generated by detonation of nitromethane shows that nitrobenzene produces aniline and biphenyl, and o-nitrotoluene forms aniline, toluene, o-toluidine and o-cresol, but not anthranil, benzoxazinone, or cyanocyclopentadiene. In isotopic labeling experiments o-nitrotoluene and TNT show extensive H-D exchange on their methyl groups, and C-N bond rupture is not consistent with the formation of aniline from nitrobenzene or nitrotoluene, nor the formation of o-toluidine from o-nitrotoluene. Recent work incorporating fast TOF mass spectroscopy of samples shocked and quenched by adiabatic expansion shows that the initial chemical reactions in shocked solid nitroaromatic explosives proceed along this path.

  14. Trends in shock initiation of heterogeneous explosives

    SciTech Connect

    Howe, P.M.

    1998-07-01

    Part of the difficulty in developing physically based models of shock initiation which have genuine predictive capability is that insufficient constraints are often imposed: models are most often applied to very limited data sets which encompass very narrow parameter ranges. Therefore, it seems to be of considerable value to examine the rather large existing shock initiation database to identify trends, similarities, and differences which predictive models must describe if they are to be of genuine utility. In this paper, existing open-literature data for shock initiation of detonation of heterogeneous explosives in one-dimensional geometries have been examined. The intent was to identify -- and where possible, isolate -- physically measurable and controllable parameter effects. Plastic bonded explosives with a variety of different binders and binder concentrations were examined. Data for different pressed explosive particulate materials and particle size distributions were reviewed. Effects of porosity were examined in both binderless and particle-matrix compositions. Effects of inert and reactive binders, and inert and reactive particle fills were examined. In several instances, the calculated data used by the original authors in their analysis were recalculated to correct for discrepancies and errors in the original analysis.

  15. Trends in shock initiation of heterogeneous explosives

    SciTech Connect

    Howe, P.M.

    1998-12-31

    Various data from the literature on shock initiation were examined to ascertain the relative importance of effects of porosity, particle size, and binder composition upon explosives initiation behavior. Both pure and composite explosives were examined. It was found that the main influence of porosity is manifested through changes in Hugoniot relations. The threshold for initiation was found to be insensitive to porosity, except at very low porosities. The buildup process was found to be weakly dependent upon porosity. Particle size effects were found to depend sensitively upon the nature of the particulates. For inert particles embedded in a reactive continuum, initiation is strongly specific surface area dependent. For HMX particles embedded in inert or reactive continua, particle effects are subtle. Sparse data indicate that binder composition has a small but significant effect upon threshold velocities.

  16. INITIAL WASTE PACKAGE PROBABILISTIC CRITICALITY ANALYSIS: UNCANISTERED FUEL (TBV)

    SciTech Connect

    J.R. Massari

    1995-10-06

    This analysis is prepared by the Mined Geologic Disposal System (MGDS) Waste Package Development Department (WPDD) to provide an assessment of the present waste package design from a criticality risk standpoint. The specific objectives of this initial analysis are to: (1) Establish a process for determining the probability of waste package criticality as a function of time (in terms of a cumulative distribution function, probability distribution function, or expected number of criticalities in a specified time interval) for various waste package concepts; (2) Demonstrate the established process by estimating the probability of criticality as a function of time since emplacement for an intact uncanistered fuel waste package (UCF-WP) configuration; and (3) Identify the dominant sequences leading to waste package criticality for subsequent detailed analysis. The purpose of this analysis is to document and demonstrate the developed process as it has been applied to the UCF-WP. This revision is performed to correct deficiencies in the previous revision and provide further detail on the calculations performed. Due to the current lack of knowledge in a number of areas, every attempt has been made to ensure that all calculations and assumptions were conservative. This analysis is preliminary in nature, and is intended to be superseded by at least two more versions prior to license application. The information and assumptions used to generate this analysis are unverified and have been globally assigned TBV identifier TBV-059-WPD. Future versions of this analysis will update these results, possibly replacing the global TBV with a small number of TBVs on individual items, with the goal of removing all TBV designations by license application submittal. The final output of this document, the probability of UCF-WP criticality as a function of time, is therefore, also TBV. This document is intended to deal only with the risk of internal criticality with unaltered fuel

  17. Overview of Future of Probabilistic Methods and RMSL Technology and the Probabilistic Methods Education Initiative for the US Army at the SAE G-11 Meeting

    NASA Technical Reports Server (NTRS)

    Singhal, Surendra N.

    2003-01-01

    The SAE G-11 RMSL Division and Probabilistic Methods Committee meeting, sponsored by the Picatinny Arsenal during March 1-3, 2004 at the Westin Morristown, will report progress on projects for probabilistic assessment of Army systems and launch an initiative for probabilistic education. The meeting features several Army and industry senior executives and an Ivy League professor to provide an industry/government/academia forum to review RMSL technology; reliability and probabilistic technology; reliability-based design methods; software reliability; and maintainability standards. With over 100 members, including members of national/international standing, the mission of the G-11's Probabilistic Methods Committee is to enable/facilitate rapid deployment of probabilistic technology to enhance the competitiveness of our industries through better, faster, greener, smarter, affordable and reliable product development.

  18. Initiation of detonation by steady planar incident shock waves

    NASA Astrophysics Data System (ADS)

    Edwards, D. H.; Thomas, G. O.; Williams, T. L.

    1981-11-01

    The initiation of detonation by planar shocks is studied in a vertical shock tube in which a removable diaphragm allows the generated shock to be transmitted into the gas mixture, without any reflection at the interface. Streak schlieren photography confirms that a quasi-steady shock reaction complex is formed prior to the shock acceleration phase. The steady phase enabled the induction delay time to be measured in a direct manner, and microwave interferometry, along with pressure transducers, gave an accurate value for the delay time. The shock acceleration was determined from the locus of the exothermic reaction zone, and it is shown that the time coherence of energy release between particles entering the shock front at different times leads to the formation of reactive centers which are characteristic of mild ignition. Ignition delay data obtained by the incident shock method for oxyacetylene, diluted with nitrogen, are compared with those obtained by the reflected shock technique and shown to have advantages in high heat capacity systems.

  19. Simulations for detonation initiation behind reflected shock waves

    NASA Astrophysics Data System (ADS)

    Takano, Yasunari

    Numerical simulations are carried out for detonation initiation behind reflected shock waves in a shock tube. The two-dimensional thin-layer Navier-Stokes equations with chemical effects are numerically solved by use of a combined method consisting of the Flux-Corrected Transport scheme, the Crank-Nicolson scheme, and a chemical calculation step. Effects of chemical reactions occurring in a shock-heated hydrogen, oxygen, and argon mixture are estimated by using a simplified reaction model: two progress parameters are introduced to take account of induction reactions as well as exothermic reactions. Simulations are carried out referring to several experiments: generation of multidimensional and unstable reaction shock waves; strong and mild ignitions; and reacting shock waves in hydrogen and oxygen diluted in argon mixture.

  20. Initiating solar system formation through stellar shock waves

    NASA Technical Reports Server (NTRS)

    Boss, A. P.; Myhill, E. A.

    1993-01-01

    Isotopic anomalies in presolar grains and other meteoritical components require nucleosynthesis in stellar interiors, condensation into dust grains in stellar envelopes, transport of the grains through the interstellar medium by stellar outflows, and finally injection of the grains into the presolar nebula. The proximity of the presolar cloud to these energetic stellar events suggests that a shock wave from a stellar outflow might have initiated the collapse of an otherwise stable presolar cloud. We have begun to study the interactions of stellar shock waves with thermally supported, dense molecular cloud cores, using a three spatial dimension (3D) radiative hydrodynamics code. Supernova shock waves have been shown by others to destroy quiescent clouds, so we are trying to determine if the much smaller shock speeds found in, e.g., asymptotic giant branch (AGB) star winds, are strong enough to initiate collapse in an otherwise stable, rotating, solar-mass cloud core, without leading to destruction of the cloud.

  1. Pressurized thermal shock probabilistic fracture mechanics sensitivity analysis for Yankee Rowe reactor pressure vessel

    SciTech Connect

    Dickson, T.L.; Cheverton, R.D.; Bryson, J.W.; Bass, B.R.; Shum, D.K.M.; Keeney, J.A.

    1993-08-01

    The Nuclear Regulatory Commission (NRC) requested Oak Ridge National Laboratory (ORNL) to perform a pressurized-thermal-shock (PTS) probabilistic fracture mechanics (PFM) sensitivity analysis for the Yankee Rowe reactor pressure vessel, for the fluences corresponding to the end of operating cycle 22, using a specific small-break loss-of-coolant transient as the loading condition. Regions of the vessel with distinguishing features were to be treated individually -- upper axial weld, lower axial weld, circumferential weld, upper plate spot welds, upper plate regions between the spot welds, lower plate spot welds, and the lower plate regions between the spot welds. The fracture analysis methods used in the analysis of through-clad surface flaws were those contained in the established OCA-P computer code, which was developed during the Integrated Pressurized Thermal Shock (IPTS) Program. The NRC request specified that the OCA-P code be enhanced for this study to also calculate the conditional probabilities of failure for subclad flaws and embedded flaws. The results of this sensitivity analysis provide the NRC with (1) data that could be used to assess the relative influence of a number of key input parameters in the Yankee Rowe PTS analysis and (2) data that can be used for readily determining the probability of vessel failure once a more accurate indication of vessel embrittlement becomes available. This report is designated as HSST report No. 117.

  2. Shock initiation of bare and covered explosives by projectile impact

    SciTech Connect

    Bahl, K L; Vantine, H C; Weingart, R C

    1981-04-22

    Shock initiation thresholds of bare and covered PBX-9404 and an HMX/TATB explosive called RX-26-AF were measured. The shocks were produced by the impact of flat-nosed and round-nosed steel projectiles in the velocity range of 0.5 to 2.2 km/s. Three types of coverings were used, 2 or 6 mm of tantalum, and a composite of aluminum and plastic. An Eulerian code containing material-strength and explosive-initiation models was used to evaluate our ability to calculate the shock initiation thresholds. These code calculations agreed well with the flat-nosed experimental data, but not so well with the round-nosed data.

  3. Hot spot-derived shock initiation phenomena in heterogeneous nitromethane

    SciTech Connect

    Dattelbaum, Dana M; Sheffield, Stephen A; Stahl, David B; Dattelbaum, Andrew M

    2009-01-01

    The addition of solid silica particles to gelled nitromethane offers a tractable model system for interrogating the role of impedance mismatches as one type of hot spot 'seed' on the initiation behaviors of explosive formulations. Gas gun-driven plate impact experiments are used to produce well-defined shock inputs into nitromethane-silica mixtures containing size-selected silica beads at 6 wt%. The Pop plots, or relationships between shock input pressure and run distance (or time) to detonation, for mixtures containing small (1-4 µm) and large (40 µm) beads are presented. Overall, the addition of beads was found to influence the shock sensitivity of the mixtures, with the smaller beads being more sensitizing than the larger beads, lowering the shock initiation threshold for the same run distance to detonation compared with neat nitromethane. In addition, the use of embedded electromagnetic gauges provides detailed information pertaining to the mechanism of the build-up to detonation and associated reactive flow. Of note, an initiation mechanism characteristic of homogeneous liquid explosives, such as nitromethane, was observed in the nitromethane-40 µm diameter silica samples at high shock input pressures, indicating that the influence of hot spots on the initiation process was minimal under these conditions.

  4. Detonation Initiation by Annular Jets and Shock Waves

    DTIC Science & Technology

    2007-11-02

    ...to better understand the shock implosion process. The current interest in air-breathing pulse detonation engines (PDEs) has led... This technology has yet to be realized and, as a result, current PDEs use initiator tubes sensitized with oxygen or detonate more sensitive mixtures... Detonation Initiation by Annular Jets and Shock Waves. Final Report for Award ONR N00014-03-0931. Joseph E. Shepherd, Aeronautics, California Institute

  5. Initial Climate Response to a Termination Shock

    NASA Astrophysics Data System (ADS)

    Irvine, Peter

    2015-04-01

    The risk of the termination of a deployment of solar radiation management (SRM) geoengineering has been raised as one of the key concerns about these ideas. Early studies demonstrated that a rapid warming of the climate would follow such a termination, with global mean temperatures rapidly rising towards the levels that would have been expected in the absence of SRM geoengineering. Further work has noted the contrasting timescale of the adjustment of global mean temperature and sea-level rise, with sea levels responding much more slowly and not reaching the same levels as would have been the case in the absence of SRM geoengineering. Whilst these previous studies have shown the basics of the response to a termination of SRM, a detailed analysis of the climate response in the first months or years of a termination has not yet been conducted. To conduct such an analysis, tens of simulations with a termination of SRM are conducted, starting from the end of a G1 simulation with the HadCM3 model. The termination is initiated in spring, summer, autumn and winter to investigate whether the response depends on the season. Analyzing these results, I find some novel dynamic responses in the initial months and years following a termination of SRM which have not been seen in previous studies that employed decadal-scale averages. These include: a reduction in the global-scale hydrological cycle's intensity in the first weeks following termination, counter to the longer-term increase; an almost instantaneous adjustment of land-mean precipitation to the equilibrium value; and substantial shifts in the pattern of precipitation in the initial years that are distinct from those seen in the equilibrium response and which are characterized by large increases in terrestrial precipitation and runoff in many regions.

  6. Probabilistic fatigue life prediction using ultrasonic inspection data considering equivalent initial flaw size uncertainty

    NASA Astrophysics Data System (ADS)

    Guan, X.; Zhang, J.; Kadau, K.; Zhou, S. K.

    2013-01-01

    This study presents a systematic method for probabilistic fatigue life prediction using ultrasonic inspection data. A probabilistic model to correlate the ultrasonic inspection reported size and the actual size is proposed based on historical data of rotor flaw sizing. Both the reported size and the actual size are quantified in terms of the equivalent reflector diameter. The equivalent initial flaw size (EIFS) is then calculated based on the actual size for fatigue propagation analysis. All major uncertainties, such as EIFS uncertainty, fatigue crack growth model parameter uncertainty, and experimental data measurement uncertainty, are explicitly included in the fatigue life prediction. Bayesian parameter estimation is used to estimate fatigue crack growth model parameters and measurement uncertainties using a limited number of fatigue testing data points. The overall procedure is demonstrated using a Cr-Mo-V rotor segment with ultrasonic inspection data. Interpretations of the probabilistic prediction results are given.
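
    The workflow the abstract describes (an EIFS distribution feeding a crack-growth model with uncertain parameters) can be sketched as a simple Monte Carlo propagation through the Paris law. All numbers below are illustrative placeholders, not the paper's rotor data or its Bayesian posterior.

```python
import math, random

def paris_cycles(a0, C, m, stress, a_crit, steps=400):
    """Integrate the Paris law da/dN = C * (dK)^m from EIFS a0 to a_crit.
    dK = stress * sqrt(pi * a) assumes a unit geometry factor."""
    cycles, a = 0.0, a0
    ratio = (a_crit / a0) ** (1.0 / steps)  # log-spaced crack increments
    for _ in range(steps):
        a_next = a * ratio
        dK = stress * math.sqrt(math.pi * a)
        cycles += (a_next - a) / (C * dK ** m)
        a = a_next
    return cycles

random.seed(0)
lives = []
for _ in range(2000):
    a0 = math.exp(random.gauss(math.log(0.5e-3), 0.4))  # EIFS in m (lognormal)
    C = math.exp(random.gauss(math.log(1e-11), 0.3))    # Paris coefficient
    lives.append(paris_cycles(a0, C, m=3.0, stress=200.0, a_crit=10e-3))

lives.sort()
p05, p50 = lives[int(0.05 * len(lives))], lives[len(lives) // 2]
```

    Sorting the sampled lives and reading off percentiles is the usual way such an analysis reports a probabilistic life (e.g., a 5th-percentile design life).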

  7. Initial Conditions and Modeling for Shock Driven Turbulence

    NASA Astrophysics Data System (ADS)

    Grinstein, Fernando

    2016-11-01

    We focus on the simulation of shock-driven material mixing driven by flow instabilities and initial conditions. Beyond complex multi-scale resolution of shocks and variable density turbulence, we must address the equally difficult problem of predicting flow transition promoted by energy deposited at the material interfacial layer during the shock-interface interactions. Transition involves unsteady large-scale coherent-structure dynamics which can be captured by LES, but not by URANS based on equilibrium turbulence assumptions and single-point-closure modeling. Such URANS is frequently preferred on the engineering end of computation capabilities for full-scale configurations, with reduced 1D/2D dimensionality also being a common aspect. With suitable initialization around each transition (e.g., reshock), URANS can be used to simulate the subsequent near-equilibrium weakly turbulent flow. We demonstrate 3D state-of-the-art URANS performance in one such flow regime. We simulate the CEA planar shock-tube experiments by Poggi et al. (1998) with an ILES strategy. Laboratory turbulence and mixing data are used to benchmark ILES. In turn, the ILES-generated data are used to initialize, and as a reference to assess, state-of-the-art 3D URANS. We find that by prescribing physics-based 3D initial conditions and allowing for 3D flow convection with just enough resolution, the additionally computed dissipation in 3D URANS effectively blends with the modeled dissipation to yield significantly improved statistical predictions.

  8. Initial NIF Shock Timing Experiments: Comparison with Simulation

    NASA Astrophysics Data System (ADS)

    Robey, H. F.; Celliers, P. M.; Boehly, T. R.; Datte, P. S.; Bowers, M. W.; Olson, R. E.; Munro, D. H.; Milovich, J. L.; Jones, O. S.; Nikroo, A.; Kroll, J. J.; Horner, J. B.; Hamza, A. V.; Bhandarkar, S. D.; Giraldez, E.; Castro, C.; Gibson, C. R.; Eggert, J. H.; Smith, R. F.; Park, H.-S.; Young, B. K.; Hsing, W. W.; Landen, O. L.; Meyerhofer, D. D.

    2010-11-01

    Initial experiments are underway to demonstrate the techniques required to tune the shock timing of capsule implosions on the National Ignition Facility (NIF). These experiments use a modified cryogenic hohlraum geometry designed to precisely match the performance of ignition hohlraums. The targets employ a re-entrant Au cone to provide optical access to the shocks as they propagate in the liquid deuterium-filled capsule interior. The strength and timing of the shocks are diagnosed with VISAR (Velocity Interferometer System for Any Reflector) and DANTE. The results of these measurements will be used to set the precision pulse shape for ignition capsule implosions to follow. Experimental results and comparisons with numerical simulation are presented.

  9. Probabilistic Threshold Criterion

    SciTech Connect

    Gresshoff, M; Hrousis, C A

    2010-03-09

    The Probabilistic Shock Threshold Criterion (PSTC) Project at LLNL develops phenomenological criteria for estimating safety or performance margin on high explosive (HE) initiation in the shock initiation regime, creating tools for safety assessment and design of initiation systems and HE trains in general. Until recently, there has been little foundation for probabilistic assessment of HE initiation scenarios. This work attempts to use probabilistic information that is available from both historic and ongoing tests to develop a basis for such assessment. Current PSTC approaches start with the functional form of the James Initiation Criterion as a backbone, and generalize to include varying areas of initiation and provide a probabilistic response based on test data for 1.8 g/cc (Ultrafine) 1,3,5-triamino-2,4,6-trinitrobenzene (TATB) and LX-17 (92.5% TATB, 7.5% Kel-F 800 binder). Application of the PSTC methodology is presented investigating the safety and performance of a flying plate detonator and the margin of an Ultrafine TATB booster initiating LX-17.
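
    The abstract builds on the James initiation criterion; one commonly quoted form combines the specific kinetic energy Σ = u_p²/2 and the energy fluence E ≈ P·u_p·τ through the threshold surface Σ_c/Σ + E_c/E = 1. The sketch below wraps that deterministic margin in a lognormal probit to produce an initiation probability, which is the general flavor of a probabilistic threshold criterion; the threshold constants and spread are invented placeholders, not the LLNL PSTC calibration for TATB or LX-17.

```python
import math

SIGMA_C = 0.2e6  # J/kg, hypothetical critical specific kinetic energy
E_C = 0.5e6      # J/m^2, hypothetical critical energy fluence
BETA = 0.25      # hypothetical lognormal spread of the threshold

def james_margin(u_p, pressure, duration):
    """Margin V against the James surface; V > 1 exceeds the threshold."""
    sigma = 0.5 * u_p ** 2               # specific kinetic energy, J/kg
    fluence = pressure * u_p * duration  # energy fluence P*u_p*tau, J/m^2
    return 1.0 / (SIGMA_C / sigma + E_C / fluence)

def p_initiation(u_p, pressure, duration):
    """Probit on the log-margin: Phi(ln V / beta)."""
    v = james_margin(u_p, pressure, duration)
    return 0.5 * (1.0 + math.erf(math.log(v) / (BETA * math.sqrt(2.0))))
```

    With this structure, longer pulses at the same pressure and particle velocity raise the fluence term and therefore the estimated go-probability, mirroring the qualitative behavior of pulse-duration-dependent shock initiation.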

  10. Shock initiation experiments on ratchet grown PBX 9502

    SciTech Connect

    Gustavsen, Richard L; Thompson, Darla G; Olinger, Barton W; Deluca, Racci; Bartram, Brian D; Pierce, Timothy H; Sanchez, Nathaniel J

    2010-01-01

    This study compares the shock initiation behavior of PBX 9502 pressed to less than nominal density (nominal density is 1.890 ± 0.005 g/cm³) with PBX 9502 pressed to nominal density and then "ratchet grown" to low density. PBX 9502 is an insensitive plastic bonded explosive consisting of 95 weight % dry-aminated tri-amino-tri-nitro-benzene (TATB) and 5 weight % Kel-F 800 plastic binder. "Ratchet growth" - an irreversible increase in specific volume - occurs when an explosive based on TATB is temperature cycled. The design of our study is as follows: PBX 9502, all from the same lot, received the following four treatments. Samples in the first group were pressed to less than nominal density. These were not ratchet grown and used as a baseline. Samples in the second group were pressed to nominal density and then ratchet grown by temperature cycling 30 times between −54 °C and +80 °C. Samples in the final group were pressed to nominal density and cut into 100 mm by 25.4 mm diameter cylinders. During thermal cycling the cylinders were axially constrained by a 100 psi load. Samples for shock initiation experiments were cut perpendicular (disks) and parallel (slabs) to the axial load. The four sample groups can be summarized with the terms pressed low, ratchet grown/no load, axial load/disks, and axial load/slabs. All samples were shock initiated with nearly identical inputs in plate impact experiments carried out on a gas gun. Wave profiles were measured after propagation through 3, 4, 5, and 6 mm of explosive. Side-by-side comparison of wave profiles from different samples is used as a measure of relative sensitivity. All reduced density samples were more shock sensitive than nominal density PBX 9502. Differences in shock sensitivity between ratchet grown and pressed-to-low-density PBX 9502 were small, but the low density pressings are slightly more sensitive than the ratchet grown samples.

  11. Initial resuscitation and management of pediatric septic shock

    PubMed Central

    Martin, Kelly; Weiss, Scott L.

    2015-01-01

    The pediatric sepsis syndrome remains a common cause of morbidity, mortality, and health care utilization costs worldwide. The initial resuscitation and management of pediatric sepsis is focused on 1) rapid recognition of abnormal tissue perfusion and restoration of adequate cardiovascular function, 2) eradication of the inciting invasive infection, including prompt administration of empiric broad-spectrum antimicrobial medications, and 3) supportive care of organ system dysfunction. Efforts to improve early and aggressive initial resuscitation and ongoing management strategies have improved outcomes in pediatric severe sepsis and septic shock, though many questions still remain as to the optimal therapeutic strategies for many patients. In this article, we will briefly review the definitions, epidemiology, clinical manifestations, and pathophysiology of sepsis and provide an extensive overview of both current and novel therapeutic strategies used to resuscitate and manage pediatric patients with severe sepsis and septic shock. PMID:25604591

  12. Non-shock initiation model for explosive families: experimental results.

    SciTech Connect

    Anderson, Mark U.; Jensen, Charles B.; Todd, Steven N.; Hugh, Chance G.; Caipen, Terry L.

    2010-03-01

    The 'DaMaGe-Initiated-Reaction' (DMGIR) computational model has been developed to predict the response of high explosives to non-shock mechanical insults. The distinguishing feature of this model is the introduction of a damage variable, which relates the evolution of damage to the initiation of a reaction in the explosive, and its growth to detonation. Specifically designed experiments were used to study the initiation process of each explosive family with embedded shock sensors and optical diagnostics. The experimental portion of this model development began with a study of PBXN-5 to develop DMGIR model coefficients for the rigid plastic bonded family, followed by studies of the cast, and bulk-moldable explosive families. The experimental results show an initiation mechanism that is related to input energy and material damage, with well defined initiation thresholds for each explosive family. These initiation details will extend the predictive capability of the DMGIR model from the rigid family into the cast and bulk-moldable families.

  13. Shock initiation studies on high concentration hydrogen peroxide

    SciTech Connect

    Sheffield, Stephen A; Dattelbaum, Dana M; Stahl, David B; Gibson, L. Lee; Bartram, Brian D.

    2009-01-01

    Concentrated hydrogen peroxide (H₂O₂) has been known to detonate for many years. However, because of its reactivity and the difficulty in handling and confining it, along with the large critical diameter, few studies providing basic information about the initiation and detonation properties have been published. We are conducting a study to understand and quantify the initiation and detonation properties of highly concentrated H₂O₂ using a gas-driven two-stage gun to produce well defined shock inputs. Multiple magnetic gauges are used to make in-situ measurements of the growth of reaction and subsequent detonation in the liquid. These experiments are designed to be one-dimensional to eliminate any difficulties that might be encountered with large critical diameters. Because of the concern about the reactivity of the H₂O₂ with the confining materials, a remote loading system has been developed. The gun is pressurized, then the cell is filled and the experiment shot in less than three minutes. TV cameras are attached to the target so the cell filling can be monitored. Several experiments have been completed on ~98 wt% H₂O₂/H₂O mixtures; initiation has been observed in some experiments that shows homogeneous shock initiation behavior. The initial shock pressurizes and heats the mixture. After an induction time, a thermal explosion type reaction produces an evolving reactive wave that strengthens and eventually overdrives the first wave, producing a detonation. From these measurements, we have determined unreacted Hugoniot information, times (distances) to detonation (Pop-plot points) that indicate low sensitivity, and detonation velocities of high concentration H₂O₂/H₂O solutions that agree with earlier estimates.
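
    Pop-plot points like those mentioned here are conventionally fit as a straight line in log-log space, log10(x*) = a + b·log10(P), with a negative slope b (higher input pressure gives a shorter run to detonation). A minimal fitting sketch; the pressure/run-distance pairs below are invented for illustration, not measured H₂O₂ data.

```python
import math

# hypothetical (shock pressure in GPa, run distance in mm) pairs
points = [(7.0, 12.0), (9.0, 6.5), (11.0, 4.0), (14.0, 2.2)]

# ordinary least squares on (log10 P, log10 x*)
xs = [math.log10(p) for p, _ in points]
ys = [math.log10(x) for _, x in points]
n = len(points)
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
intercept = my - slope * mx

def run_distance(pressure_gpa):
    """Predicted run-to-detonation distance (mm) at a given input pressure."""
    return 10.0 ** (intercept + slope * math.log10(pressure_gpa))
```

    A shallower slope or points displaced to longer run distances at the same pressure would indicate the lower sensitivity the abstract reports for H₂O₂/H₂O relative to conventional liquid explosives.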

  14. Testing and modeling of PBX-9501 shock initiation

    SciTech Connect

    Lam, Kim; Foley, Timothy; Novak, Alan; Dickson, Peter; Parker, Gary

    2010-01-01

    This paper describes an ongoing effort to develop a detonation sensitivity test for PBX-9501 that is suitable for studying pristine and damaged HE. The approach involves testing and comparing the sensitivities of HE pressed to various densities and those of pre-damaged samples with similar porosities. The ultimate objectives are to understand the response of pre-damaged HE to shock impacts and to develop practical computational models for use in system analysis codes for HE safety studies. Computer simulation with the CTH shock physics code is used to aid the experimental design and analyze the test results. In the calculations, initiation and growth or failure of detonation are modeled with the empirical HVRB model. The historical LANL SSGT and LSGT were reviewed and it was determined that a new, modified gap test be developed to satisfy the current requirements. In the new test, the donor/spacer/acceptor assembly is placed in a holder that is designed to work with fixtures for pre-damaging the acceptor sample. CTH simulations were made of the gap test with PBX-9501 samples pressed to three different densities. The calculated sensitivities were validated by test observations. The agreement between the computed and experimental critical gap thicknesses, ranging from 9 to 21 mm under various test conditions, is well within 1 mm. These results show that the numerical modeling is a valuable complement to the experimental efforts in studying and understanding shock initiation of PBX-9501.

  15. Shock wave initiated by an ion passing through liquid water

    NASA Astrophysics Data System (ADS)

    Surdutovich, Eugene; Solov'Yov, Andrey V.

    2010-11-01

    We investigate the shock wave produced by an energetic ion in liquid water. This wave is initiated by a rapid energy loss when the ion moves through the Bragg peak. The energy is transferred from the ion to secondary electrons, which then transfer it to the water molecules. The pressure in the overheated water increases by several orders of magnitude and drives a cylindrical shock wave on a nanometer scale. This wave eventually weakens as the front expands further; but before that, it may contribute to DNA damage due to large pressure gradients developed within a few nanometers from the ion’s trajectory. This mechanism of DNA damage may be a very important contribution to the direct chemical effects of low-energy electrons and holes.
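
    The nanometre-scale front described here follows from the self-similar strong-explosion solution in cylindrical geometry, R(t) ≈ β (E_l t² / ρ)^{1/4}, where E_l is the energy deposited per unit path length (of order the LET at the Bragg peak). A sketch with β ≈ 1 and an illustrative LET value (both assumptions, not the paper's fitted numbers):

```python
def shock_radius(t, let_kev_per_nm=0.9, rho=1000.0, beta=1.0):
    """Cylindrical strong-explosion front radius R(t) = beta*(E_l*t^2/rho)**0.25.
    let_kev_per_nm: energy loss per unit path length (illustrative value)."""
    e_per_length = let_kev_per_nm * 1e3 * 1.602e-19 / 1e-9  # keV/nm -> J/m
    return beta * (e_per_length * t * t / rho) ** 0.25

r_ps = shock_radius(1e-12)  # front radius roughly 1 ps after the ion passes
```

    The t^{1/2} growth in R means the front velocity, and hence the pressure jump, falls off with time, consistent with the abstract's statement that the wave weakens as it expands beyond a few nanometres.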

  16. Multiple-shock initiation via statistical crack mechanics

    SciTech Connect

    Dienes, J.K.; Kershner, J.D.

    1998-12-31

    Statistical Crack Mechanics (SCRAM) is a theoretical approach to the behavior of brittle materials that accounts for the behavior of an ensemble of microcracks, including their opening, shear, growth, and coalescence. Mechanical parameters are based on measured strain-softening behavior. In applications to explosive and propellant sensitivity it is assumed that closed cracks act as hot spots, and that the heating due to interfacial friction initiates reactions which are modeled as one-dimensional heat flow with an Arrhenius source term, and computed in a subscale grid. Post-ignition behavior of hot spots is treated with the burn model of Ward, Son and Brewster. Numerical calculations using SCRAM-HYDROX are compared with the multiple-shock experiments of Mulford et al. in which the particle velocity in PBX 9501 is measured with embedded wires, and reactions are initiated and quenched.
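
    The hot-spot ignition sub-model described here (one-dimensional heat flow with an Arrhenius source on a subscale grid) can be caricatured with an explicit finite-difference scheme. The diffusivity, pre-exponential factor, and activation temperature below are round illustrative numbers, not SCRAM's calibrated values.

```python
import math

def hotspot_ignition(t_hot, t_bulk=300.0, n=50, length=1e-5,
                     alpha=1e-7, a_pre=1e13, t_act=15000.0,
                     dt=1e-7, max_steps=5000, t_ign=3000.0):
    """Explicit FD for dT/dt = alpha*T_xx + A*exp(-Ta/T) on [0, length].
    A band of 10 cells around the centre starts at t_hot (the frictional
    hot spot). Returns the step index at ignition (peak T > t_ign), or
    None if conduction quenches the spot first."""
    dx = length / n
    r = alpha * dt / (dx * dx)  # r = 0.25 <= 0.5, so the scheme is stable
    temp = [t_bulk] * n
    for i in range(n // 2 - 5, n // 2 + 5):
        temp[i] = t_hot
    for step in range(max_steps):
        new = temp[:]
        for i in range(1, n - 1):
            cond = r * (temp[i - 1] - 2.0 * temp[i] + temp[i + 1])
            source = a_pre * math.exp(-t_act / temp[i]) * dt
            new[i] = temp[i] + cond + source
        temp = new
        if max(temp) > t_ign:
            return step
    return None
```

    With these placeholder constants a 1600 K spot runs away in a few steps while an 800 K spot is quenched by conduction into the surrounding material, reproducing the threshold character that hot-spot models of this type are built to capture.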

  17. Stochastic shock response spectrum decomposition method based on probabilistic definitions of temporal peak acceleration, spectral energy, and phase lag distributions of mechanical impact pyrotechnic shock test data

    NASA Astrophysics Data System (ADS)

    Hwang, James Ho-Jin; Duran, Adam

    2016-08-01

    Most of the time, pyrotechnic shock design and test requirements for space systems are provided as a Shock Response Spectrum (SRS) without the input time history. Since the SRS does not describe the input or the environment, a decomposition method is used to obtain the source time history. The main objective of this paper is to develop a decomposition method producing input time histories that can satisfy the SRS requirement based on the pyrotechnic shock test data measured from a mechanical impact test apparatus. At the heart of this decomposition method is the statistical representation of the pyrotechnic shock test data measured from the MIT Lincoln Laboratory (LL)-designed Universal Pyrotechnic Shock Simulator (UPSS). Each pyrotechnic shock test data set measured at the interface of a test unit has been analyzed to produce the temporal peak acceleration, Root Mean Square (RMS) acceleration, and the phase lag at each band center frequency. The maximum SRS of each filtered time history has been calculated to produce a relationship between the input and the response. Two new definitions are proposed as a result. The Peak Ratio (PR) is defined as the ratio between the maximum SRS and the temporal peak acceleration at each band center frequency. The ratio between the maximum SRS and the RMS acceleration is defined as the Energy Ratio (ER) at each band center frequency. Phase lag is estimated based on the time delay between the temporal peak acceleration at each band center frequency and the peak acceleration at the lowest band center frequency. This stochastic process has been applied to more than one hundred pyrotechnic shock test data sets to produce probabilistic definitions of the PR, ER, and the phase lag. The SRS is decomposed at each band center frequency using damped sinusoids with the PR and the decays obtained by matching the ER of the damped sinusoids to the ER of the test data. The final step in this stochastic SRS decomposition process is the Monte Carlo (MC
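
    A quantity analogous to the abstract's Peak Ratio (maximum SRS over temporal peak acceleration) can be illustrated by running a single damped sinusoid through a bank of single-degree-of-freedom oscillators, the usual absolute-acceleration maximax SRS definition. The frequencies, decay rate, and Q below are arbitrary demonstration values.

```python
import math

def srs_point(accel, dt, fn, q=10.0):
    """Maximax absolute-acceleration SRS of base input `accel` at natural
    frequency fn, via semi-implicit Euler on the relative coordinate:
    z'' + 2*zeta*wn*z' + wn^2*z = -a_base(t)."""
    wn = 2.0 * math.pi * fn
    zeta = 1.0 / (2.0 * q)
    z = v = 0.0
    peak = 0.0
    for a in accel:
        v += (-a - 2.0 * zeta * wn * v - wn * wn * z) * dt
        z += v * dt
        # absolute acceleration of the mass is -(2*zeta*wn*v + wn^2*z)
        peak = max(peak, abs(2.0 * zeta * wn * v + wn * wn * z))
    return peak

dt, f0 = 1e-5, 1000.0
ts = [i * dt for i in range(2000)]  # 20 ms record
base = [math.exp(-300.0 * t) * math.sin(2.0 * math.pi * f0 * t) for t in ts]
peak_accel = max(abs(a) for a in base)
srs = {fn: srs_point(base, dt, fn) for fn in (250.0, 500.0, 1000.0, 2000.0)}
peak_ratio = srs[f0] / peak_accel  # analogous to the abstract's PR at f0
```

    Near the sinusoid's frequency the SRS exceeds the input peak (ratio above one, from resonant build-up), while well below it the oscillator is isolated from the input; the abstract's statistics over many test records characterize exactly this kind of band-by-band ratio.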

  18. Alternate methodologies to experimentally investigate shock initiation properties of explosives

    NASA Astrophysics Data System (ADS)

    Svingala, Forrest R.; Lee, Richard J.; Sutherland, Gerrit T.; Benjamin, Richard; Boyle, Vincent; Sickels, William; Thompson, Ronnie; Samuels, Phillip J.; Wrobel, Erik; Cornell, Rodger

    2017-01-01

    Reactive flow models are desired for new explosive formulations early in the development stage. Traditionally, these models are parameterized by carefully-controlled 1-D shock experiments, including gas-gun testing with embedded gauges and wedge testing with explosive plane wave lenses (PWL). These experiments are easy to interpret due to their 1-D nature, but are expensive to perform and cannot be performed at all explosive test facilities. This work investigates alternative methods to probe the shock-initiation behavior of new explosives using widely-available pentolite gap test donors and simple time-of-arrival type diagnostics. These experiments can be performed at low cost at most explosives testing facilities. This allows the experimental data needed to parameterize reactive flow models to be collected much earlier in the development of an explosive formulation. However, the fundamentally 2-D nature of these tests may increase the modeling burden in parameterizing these models and reduce their general applicability. Several variations of the so-called modified gap test were investigated and evaluated for suitability as an alternative to established 1-D gas gun and PWL techniques. At least partial agreement with 1-D test methods was observed for the explosives tested, and future work is planned to scope the applicability and limitations of these experimental techniques.

  19. Alternate Methods to Experimentally Investigate Shock Initiation Properties of Explosives

    NASA Astrophysics Data System (ADS)

    Svingala, Forrest; Lee, Richard; Sutherland, Gerrit; Samuels, Philip

    2015-06-01

    Reactive flow models are desired for many new explosives early in the formulation development stage. Traditionally, these models are parameterized by carefully-controlled 1-D shock experiments, including gas-gun testing with embedded gauges and wedge testing with explosive plane wave lenses (PWL). These experiments are easy to interpret, due to their 1-D nature, but are generally expensive to perform, and cannot be performed at all explosive test facilities. We investigate alternative methods to probe the shock-initiation behavior of new explosives using widely-available pentolite gap test donors and simple time-of-arrival type diagnostics. These methods can be performed at a low cost at virtually any explosives testing facility, which allows the experimental data needed to parameterize reactive flow models to be collected much earlier in the development of an explosive formulation. However, the fundamentally 2-D nature of these tests may increase the modeling burden in parameterizing these models, and reduce their general applicability. Several variations of the so-called modified gap test were investigated and evaluated for suitability as an alternative to established 1-D gas gun and PWL techniques. At least partial agreement with 1-D test methods was observed for the explosives tested, and future work is planned to scope the applicability and limitations of these experimental techniques.

  20. Biodamage via shock waves initiated by irradiation with ions.

    PubMed

    Surdutovich, Eugene; Yakubovich, Alexander V; Solov'yov, Andrey V

    2013-01-01

    Radiation damage following the ionising radiation of tissue has different scenarios and mechanisms depending on the projectiles or radiation modality. We investigate the radiation damage effects due to shock waves produced by ions. We analyse the strength of the shock wave capable of directly producing DNA strand breaks and, depending on the ion's linear energy transfer, estimate the radius from the ion's path, within which DNA damage by the shock wave mechanism is dominant. At much smaller values of linear energy transfer, the shock waves turn out to be instrumental in propagating reactive species formed close to the ion's path to large distances, successfully competing with diffusion.

  1. Utilizing Computational Probabilistic Methods to Derive Shock Specifications in a Nondeterministic Environment

    SciTech Connect

    FIELD JR.,RICHARD V.; RED-HORSE,JOHN R.; PAEZ,THOMAS L.

    2000-10-25

    One of the key elements of the Stochastic Finite Element Method, namely the polynomial chaos expansion, has been utilized in a nonlinear shock and vibration application. As a result, the computed response was expressed as a random process, which is an approximation to the true solution process, and can be thought of as a generalization of solutions given as statistics only. This approximation to the response process was then used to derive an analytically-based design specification for component shock response that guarantees a balanced level of marginal reliability. Hence, this analytically-based reference SRS might lead to an improvement over the somewhat ad hoc test-based reference in the sense that it will not exhibit regions of conservativeness, nor lead to overtesting of the design.
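
    The key property of the polynomial chaos expansion used here is that a response Y(ξ) expanded in Hermite polynomials of a standard normal ξ yields its statistics directly from the coefficients: mean = c₀ and Var = Σ_{k≥1} c_k² k!. A self-contained check against a lognormal test response (not the paper's shock problem), with coefficients obtained by numerical projection:

```python
import math

def hermite_he(k, x):
    """Probabilists' Hermite polynomial He_k via the three-term recurrence."""
    h_prev, h = 1.0, x
    if k == 0:
        return h_prev
    for j in range(1, k):
        h_prev, h = h, x * h - j * h_prev
    return h

SIGMA = 0.3
g = lambda x: math.exp(SIGMA * x)  # test response Y = exp(sigma*xi), lognormal

# Galerkin projection c_k = E[Y He_k] / k! by trapezoidal quadrature
xs = [-8.0 + 0.002 * i for i in range(8001)]
phi = [math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi) for x in xs]
coeff = []
for k in range(6):
    f = [g(x) * hermite_he(k, x) * p for x, p in zip(xs, phi)]
    integral = 0.002 * (sum(f) - 0.5 * (f[0] + f[-1]))
    coeff.append(integral / math.factorial(k))

mean_pc = coeff[0]
var_pc = sum(c * c * math.factorial(k) for k, c in enumerate(coeff) if k > 0)
mean_exact = math.exp(SIGMA ** 2 / 2.0)
var_exact = math.exp(SIGMA ** 2) * (math.exp(SIGMA ** 2) - 1.0)
```

    Once a response process is represented this way, percentiles and marginal reliabilities (the basis of the derived reference SRS) can be read off the surrogate instead of re-running the nonlinear model.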

  2. Growth rate of a shocked mixing layer with known initial perturbations [Mixing at shocked interfaces with known perturbations

    SciTech Connect

    Weber, Christopher R.; Cook, Andrew W.; Bonazza, Riccardo

    2013-05-14

    Here we derive a growth-rate model for the Richtmyer–Meshkov mixing layer, given arbitrary but known initial conditions. The initial growth rate is determined by the net mass flux through the centre plane of the perturbed interface immediately after shock passage. The net mass flux is determined by the correlation between the post-shock density and streamwise velocity. The post-shock density field is computed from the known initial perturbations and the shock jump conditions. The streamwise velocity is computed via Biot–Savart integration of the vorticity field. The vorticity deposited by the shock is obtained from the baroclinic torque with an impulsive acceleration. Using the initial growth rate and characteristic perturbation wavelength as scaling factors, the model collapses the growth-rate curves and, in most cases, predicts the peak growth rate over a range of Mach numbers (1.1 ≤ M_i ≤ 1.9), Atwood numbers (−0.73 ≤ A ≤ −0.35 and 0.22 ≤ A ≤ 0.73), adiabatic indices (1.40/1.67 ≤ γ₁/γ₂ ≤ 1.67/1.09) and narrow-band perturbation spectra. Lastly, the mixing layer at late times exhibits a power-law growth with an average exponent of θ = 0.24.
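
    For context, the classical single-mode model that this growth-rate model generalizes is Richtmyer's impulsive approximation, ḣ₀ = k A⁺ Δu a₀⁺, with post-shock Atwood number A⁺, velocity jump Δu, wavenumber k, and shock-compressed amplitude a₀⁺. The sketch below uses a crude constant compression factor and invented flow values, purely to show the scalings involved:

```python
import math

def impulsive_growth_rate(wavelength, a0, atwood_post, delta_u,
                          compression=0.8):
    """Richtmyer impulsive approximation hdot0 = k * A+ * du * a0+,
    with k = 2*pi/wavelength and a0+ = compression * a0 (a crude estimate
    of the shock-compressed amplitude)."""
    k = 2.0 * math.pi / wavelength
    return k * atwood_post * delta_u * compression * a0

# illustrative numbers: 1 cm wavelength, 0.5 mm amplitude, A+ = 0.5
hdot = impulsive_growth_rate(wavelength=0.01, a0=5e-4,
                             atwood_post=0.5, delta_u=100.0)
```

    The paper replaces this impulsive single-mode deposition with the net mass flux computed from the post-shock density-velocity correlation, which is what lets it handle arbitrary (non-sinusoidal) initial perturbation spectra.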

  3. Development of transient initiating event frequencies for use in probabilistic risk assessments

    SciTech Connect

    Mackowiak, D.P.; Gentillon, C.D.; Smith, K.L.

    1985-05-01

    Transient initiating event frequencies are an essential input to the analysis process of a nuclear power plant probabilistic risk assessment. These frequencies describe events causing or requiring scrams. This report documents an effort to validate and update from other sources a computer-based data file developed by the Electric Power Research Institute (EPRI) describing such events at 52 United States commercial nuclear power plants. Operating information from the United States Nuclear Regulatory Commission on 24 additional plants from their date of commercial operation has been combined with the EPRI data, and the entire data base has been updated to add 1980 through 1983 events for all 76 plants. The validity of the EPRI data and data analysis methodology and the adequacy of the EPRI transient categories are examined. New transient initiating event frequencies are derived from the expanded data base using the EPRI transient categories and data display methods. Upper bounds for these frequencies are also provided. Additional analyses explore changes in the dominant transients, changes in transient outage times and their impact on plant operation, and the effects of power level and scheduled scrams on transient event frequencies. A more rigorous data analysis methodology is developed to encourage further refinement of the transient initiating event frequencies derived herein. Updating the transient event data base resulted in approximately 2400 events being added to EPRI's approximately 3000-event data file. The resulting frequency estimates were in most cases lower than those reported by EPRI, but no significant order-of-magnitude changes were noted. The average number of transients per year for the combined data base is 8.5 for pressurized water reactors and 7.4 for boiling water reactors.
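
    Initiating event frequencies of this kind are conventionally treated as Poisson rates: point estimate n/T events per reactor-year, with an upper confidence bound from the chi-square distribution, λ_U = χ²_q(2n+2)/(2T). The sketch below uses the closed-form Wilson-Hilferty approximation to the chi-square quantile to stay dependency-free; the event counts are illustrative, not the report's data.

```python
def chi2_quantile(q_z, dof):
    """Wilson-Hilferty approximation to the chi-square quantile.
    q_z is the standard-normal quantile for the desired level
    (1.645 for a 95% bound)."""
    c = 2.0 / (9.0 * dof)
    return dof * (1.0 - c + q_z * c ** 0.5) ** 3

def poisson_rate(events, years, q_z=1.645):
    """Point estimate and upper confidence bound for a transient rate."""
    rate = events / years
    upper = chi2_quantile(q_z, 2 * events + 2) / (2.0 * years)
    return rate, upper

# e.g. 17 transients observed in 2.0 reactor-years (illustrative counts)
rate, upper95 = poisson_rate(17, 2.0)
```

    The "upper bounds" the report provides for each frequency are bounds of exactly this kind, sitting above the point estimate by an amount that shrinks as the exposure time grows.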

  4. Probabilistic evaluation of initiation time in RC bridge beams with load-induced cracks exposed to de-icing salts

    SciTech Connect

    Lu Zhaohui; Zhao Yangang; Yu Zhiwu; Ding Faxing

    2011-03-15

    In this study, a reliability-based method for predicting the initiation time of reinforced concrete bridge beams with load-induced cracks exposed to de-icing salts is presented. A practical model for predicting the diffusion coefficient of chloride ingress into load-induced cracked concrete is proposed. Probabilistic information about uncertainties related to the surface chloride content and the threshold chloride concentration has been estimated from a wide review of previous experimental or statistical studies. Probabilistic analysis to estimate the time to corrosion initiation with/without considering the effect of the load-induced cracks on the chloride ingress into concrete has been carried out. Results of the analysis demonstrate the importance of considering the effect of the load-induced cracks for correct prediction of corrosion initiation in RC bridge beams exposed to chlorides.
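
    The initiation-time calculation behind an analysis like this usually starts from the error-function solution of Fick's second law, C(x,t) = C_s[1 − erf(x / 2√(Dt))], solved for the time at which the concentration at the rebar cover depth reaches the threshold C_th, with crack effects folded in as an amplified diffusion coefficient. All distributions and the crack amplification factor below are invented placeholders, not the paper's statistics.

```python
import math, random

def initiation_time(cover, diff, c_s, c_th, t_max=500.0):
    """Bisection on t (years) for C(cover, t) = c_th; diff in m^2/yr."""
    conc = lambda t: c_s * (1.0 - math.erf(cover / (2.0 * math.sqrt(diff * t))))
    if conc(t_max) < c_th:
        return t_max  # censored: no initiation within the horizon
    lo, hi = 1e-6, t_max
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if conc(mid) < c_th else (lo, mid)
    return 0.5 * (lo + hi)

random.seed(1)

def median_time(crack_factor):
    """Monte Carlo median initiation time under hypothetical distributions."""
    times = []
    for _ in range(1000):
        c_s = random.uniform(2.5, 4.5)   # surface chloride, kg/m^3
        c_th = random.uniform(0.6, 1.2)  # threshold, kg/m^3
        diff = 3.15e-5 * math.exp(random.gauss(0.0, 0.3)) * crack_factor
        times.append(initiation_time(0.05, diff, c_s, c_th))
    times.sort()
    return times[len(times) // 2]

median_uncracked = median_time(1.0)
median_cracked = median_time(3.0)  # hypothetical 3x diffusivity from cracking
```

    Comparing the two medians reproduces, in miniature, the paper's point: ignoring the load-induced cracks (the amplified diffusivity) overestimates the time to corrosion initiation.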

  5. Shock initiation of the TATB-based explosive PBX 9502 heated to ~76 °C

    NASA Astrophysics Data System (ADS)

    Gustavsen, Richard; Gehr, Russell; Bucholtz, Scott; Pacheco, Adam; Bartram, Brian

    2015-06-01

    Recently we reported on shock initiation of PBX 9502 (95 wt.% tri-amino-trinitro-benzene, 5 wt.% Kel-F800 binder) cooled to −55 °C and to 77 K. Shock waves were generated by gas-gun driven plate impacts, and reactive flow in the cooled PBX 9502 was measured with embedded electromagnetic gauges. Here we use similar methods to warm the explosive to ~76 °C. The explosive sample is heated by warm air flowing through channels in an aluminum sample mounting plate and a copper tubing coil surrounding the sample. Temperature in the sample is monitored using six type-E thermocouples. Results show increased shock sensitivity; time and distance to detonation onset vs. initial shock pressure are shorter than when the sample is initially at ambient temperature. Our results are consistent with those reported by Dallman & Wackerle. Particle velocity wave profiles were also obtained during the shock-to-detonation transition and will be presented.

  6. Quantification of initial-data uncertainty on a shock-accelerated gas cylinder

    SciTech Connect

    Tritschler, V. K. Avdonin, A.; Hickel, S.; Hu, X. Y.; Adams, N. A.

    2014-02-15

    We quantify initial-data uncertainties on a shock accelerated heavy-gas cylinder by two-dimensional well-resolved direct numerical simulations. A high-resolution compressible multicomponent flow simulation model is coupled with a polynomial chaos expansion to propagate the initial-data uncertainties to the output quantities of interest. The initial flow configuration follows previous experimental and numerical works of the shock accelerated heavy-gas cylinder. We investigate three main initial-data uncertainties: (i) shock Mach number, (ii) contamination of SF₆ with acetone, and (iii) initial deviations of the heavy-gas region from a perfect cylindrical shape. The impact of initial-data uncertainties on the mixing process is examined. The results suggest that the mixing process is highly sensitive to input variations of shock Mach number and acetone contamination. Additionally, our results indicate that the measured shock Mach number in the experiment of Tomkins et al. [“An experimental investigation of mixing mechanisms in shock-accelerated flow,” J. Fluid. Mech. 611, 131 (2008)] and the estimated contamination of the SF₆ region with acetone [S. K. Shankar, S. Kawai, and S. K. Lele, “Two-dimensional viscous flow simulation of a shock accelerated heavy gas cylinder,” Phys. Fluids 23, 024102 (2011)] exhibit deviations from those that lead to best agreement between our simulations and the experiment in terms of overall flow evolution.

  7. Effect of initial perturbation amplitude on Richtmyer-Meshkov flows induced by strong shocks

    NASA Astrophysics Data System (ADS)

    Dell, Zachary; Stellingwerf, Robert; Abarzhi, Snezhana

    2015-11-01

    We study the effect of the initial perturbation on Richtmyer-Meshkov (RM) flows induced by strong shocks in fluids with contrasting densities. Smooth Particle Hydrodynamics simulations are employed. A broad range of shock strengths and density ratios is considered (Mach = 3, 5, 10 and Atwood = 0.6, 0.8, 0.95). The amplitude of the initial single-mode sinusoidal perturbation of the interface varies from 0% to 100% of its wavelength. We analyze the initial growth-rate of the RMI immediately after the shock passage, when the perturbation amplitude increases linearly with time. We find that the initial growth-rate of RMI is a non-monotone function of the amplitude of the initial perturbation. This restrains the amount of energy that can be deposited by the shock at the interface. The maximum value of the initial growth-rate depends strongly, and the corresponding value of the initial perturbation amplitude only slightly, on the shock strength and density ratio. The maximum value of the initial growth-rate increases with the Atwood number for a fixed Mach number, and decreases with the Mach number for a fixed Atwood number. We argue that the non-monotonicity of the RMI growth-rate results from a combination of geometric effects and the effect of secondary shocks.
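    For context, the linear growth rate that the amplitude scan above perturbs away from is commonly estimated with Richtmyer's impulsive model; the non-monotone behavior the abstract reports appears where this small-amplitude scaling breaks down. A sketch (our illustration, not the authors' diagnostic):

```python
import math

def rm_impulsive_growth_rate(wavelength, a0, atwood, delta_u):
    """Richtmyer's impulsive-model estimate v0 = k * a0 * A * delta_u for
    the post-shock linear growth rate of a single-mode RM perturbation.
    Valid only for small k*a0 (perturbation amplitude << wavelength)."""
    k = 2.0 * math.pi / wavelength   # perturbation wavenumber
    return k * a0 * atwood * delta_u
```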

  9. Numerical analysis of initial stage of thermal shock

    NASA Astrophysics Data System (ADS)

    Demidov, V. N.

    2016-07-01

    The paper studies the problem of a thermal shock at the surface of a half-space whose properties are described by an elastic-plastic model taking into account dynamic effects, heat inertia, and coupling between thermal and mechanical fields. The problem is solved numerically using the finite-difference method of S. K. Godunov.

  10. Effect of initial perturbation amplitude on Richtmyer-Meshkov flows induced by strong shocks

    SciTech Connect

    Dell, Z.; Abarzhi, S. I. E-mail: sabarji@andrew.cmu.edu; Stellingwerf, R. F.

    2015-09-15

    We systematically study the effect of the initial perturbation on Richtmyer-Meshkov (RM) flows induced by strong shocks in fluids with contrasting densities. Smooth Particle Hydrodynamics simulations are employed. A broad range of shock strengths and density ratios is considered. The amplitude of the initial single mode sinusoidal perturbation of the interface varies from 0% to 100% of its wavelength. The simulation results are compared, wherever possible, with four rigorous theories, and with other experiments and simulations, achieving good quantitative and qualitative agreement. Our study is focused on early time dynamics of the Richtmyer-Meshkov instability (RMI). We analyze the initial growth-rate of RMI immediately after the shock passage, when the perturbation amplitude increases linearly with time. For the first time, to the authors' knowledge, we find that the initial growth-rate of RMI is a non-monotone function of the initial perturbation amplitude, thus restraining the amount of energy that can be deposited by the shock at the interface. The maximum value of the initial growth-rate depends on the shock strength and the density ratio, whereas the corresponding value of the initial perturbation amplitude depends only slightly on the shock strength and density ratio.

  11. An initial probabilistic hazard assessment of oil dispersants approved by the United States National Contingency Plan.

    PubMed

    Berninger, Jason P; Williams, E Spencer; Brooks, Bryan W

    2011-07-01

    Dispersants are commonly applied during oil spill mitigation efforts; however, these industrial chemicals may present risks to aquatic organisms individually and when mixed with oil. Fourteen dispersants are listed on the U.S. Environmental Protection Agency (U.S. EPA) National Oil and Hazardous Substances Pollution Contingency Plan (NCP). Availability of environmental effects information for such agents is limited, and individual components of dispersants are largely proprietary. Probabilistic hazard assessment approaches including Chemical Toxicity Distributions (CTDs) may be useful as an initial step toward prioritizing environmental hazards from the use of dispersants. In the present study, we applied the CTD approach to two acute toxicity datasets: NCP (the contingency plan dataset) and DHOS (a subset of NCP listed dispersants reevaluated subsequent to the Deepwater Horizon oil spill). These datasets contained median lethal concentration (LC50) values for dispersants alone and dispersant:oil mixtures, in two standard marine test species, Menidia beryllina and Mysidopsis bahia. These CTDs suggest that dispersants alone are generally less toxic than oil. In contrast, most dispersant:oil mixtures are more toxic than oil alone. For the two datasets (treated separately because of differing methodologies), CTDs would predict 95% of dispersant:oil mixtures to have acute toxicity values above 0.32 and 0.76 mg/L for Mysidopsis and 0.33 mg/L and 1.06 mg/L for Menidia (for DHOS and NCP, respectively). These findings demonstrate the utility of CTDs as a means to evaluate the comparative ecotoxicity of dispersants alone and in mixture with different oil types. The approaches presented here also provide valuable tools for prioritizing prospective and retrospective environmental assessments of oil dispersants.
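    A chemical toxicity distribution of the kind applied above is typically a log-normal fit to the pooled LC50 values, from which low percentiles are read off as protective thresholds. A minimal sketch, assuming a log-normal CTD (the study's actual fitting procedure may differ):

```python
import math
import statistics

def ctd_percentile(lc50_values, p=0.05):
    """Fit a log-normal chemical toxicity distribution (CTD) to LC50 data
    and return its p-th percentile: the concentration below which only a
    fraction p of the tested mixtures are predicted to be acutely toxic."""
    logs = [math.log10(v) for v in lc50_values]
    mu = statistics.mean(logs)
    sigma = statistics.stdev(logs)              # sample std on the log scale
    z = statistics.NormalDist().inv_cdf(p)      # standard-normal quantile
    return 10.0 ** (mu + z * sigma)
```

    With p = 0.5 this returns the geometric mean of the LC50s; p = 0.05 gives an HC5-style threshold analogous to the 95% values quoted in the abstract.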

  12. Shock initiation of explosives: Temperature spikes and growth spurts

    NASA Astrophysics Data System (ADS)

    Bassett, Will P.; Dlott, Dana D.

    2016-08-01

    When energetic materials are subjected to high-velocity impacts, the first steps in the shock-to-detonation transition are the creation, ignition, and growth of hot spots. We used 1-3.2 km s-1 laser-launched flyer plates to impact powdered octahydro-1,3,5,7-tetranitro-1,3,5,7-tetrazocine, a powerful explosive, and monitored hundreds of emission bursts with an apparatus that determined temperature and emissivity at all times. The time-dependent volume fraction of hot spots was determined by measuring the time-dependent emissivity. After the shock, most hot spots extinguished, but the survivors smoldered for hundreds of nanoseconds until their temperatures spiked, causing a hot spot growth spurt. Depending on the impact duration, the growth spurts could be as fast as 300 ns and as slow as 13 μs.

  13. Time-resolved study of laser initiated shock wave propagation in superfluid 4He

    NASA Astrophysics Data System (ADS)

    Garcia, Allan; Buelna, Xavier; Popov, Evgeny; Eloranta, Jussi

    2016-09-01

    Intense shock waves in superfluid 4He between 1.7 and 2.1 K are generated by rapidly expanding confined plasma from laser ablation of a metal target immersed in the liquid. The resulting shock fronts in the liquid with initial velocities up to ca. Mach 10 are visualized by time-resolved shadowgraph photography. These high intensity shocks decay within 500 ns into less energetic shock waves traveling at Mach 2, which have their lifetime in the microsecond time scale. Based on the analysis using the classical Rankine-Hugoniot theory, the shock fronts created remain in the solid phase up to 1 μs and the associated thermodynamic state appears outside the previously studied region. The extrapolated initial shock pressure of 0.5 GPa is comparable to typical plasma pressures produced during liquid phase laser ablation. A secondary shock originating from fast heat propagation on the metal surface is also observed and a lower limit estimate for the heat propagation velocity is measured as 7 × 104 m/s. In the long-time limit, the high intensity shocks turn into liquid state waves that propagate near the speed of sound.
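    The Rankine-Hugoniot analysis mentioned above connects the measured shock-front velocity to the pressure and density behind the front. A generic sketch for a material with a linear Us-up Hugoniot (illustrative form; the actual helium equation of state used in the paper is more involved):

```python
def hugoniot_state(rho0, c0, s, up):
    """Rankine-Hugoniot jump conditions for a steady shock in a material
    with a linear Hugoniot Us = c0 + s*up (SI units; ambient pressure
    neglected relative to the shock pressure)."""
    us = c0 + s * up                 # shock velocity
    p = rho0 * us * up               # momentum conservation
    rho = rho0 * us / (us - up)      # mass conservation
    return us, p, rho
```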

  14. Electromagnetic gauge measurements of shock initiating PBX9501 and PBX9502 explosives

    SciTech Connect

    Sheffield, S.A.; Gustavsen, R.L.; Hill, L.G.; Alcon, R.R.

    1998-12-31

    The authors have used an embedded electromagnetic particle velocity gauge technique to measure the shock initiation behavior in PBX9501 and PBX9502 explosives. Experiments have been conducted in which up to twelve separate measurements have been made in a single experiment, detailing the growth from an input shock to a detonation. In addition, another gauge element called a shock tracker has been used to monitor the progress of the shock front as a function of time, thus providing a position-time trajectory of the wave front as it moves through the explosive sample. This provides data similar to that obtained in a traditional wedge test and is used to determine the position and time at which the wave attains detonation. Data on both explosives show evidence of heterogeneous initiation (growth in the front) and homogeneous initiation (growth behind the front), with PBX9502 showing more heterogeneous behavior and PBX9501 showing more homogeneous behavior.

  15. Shock Initiation of New and Aged PBX 9501 Measured with Embedded Electromagnetic Particle Velocity Gauges

    SciTech Connect

    L. G. Hill; R. L. Gustavsen; R. R. Alcon; S. A. Sheffield

    1999-09-01

    We have used an embedded electromagnetic particle velocity gauge technique to measure the shock initiation behavior in PBX 9501 explosive. Up to twelve separate particle velocity wave profile measurements have been made at different depths in a single experiment. These detail the growth from an input shock to a detonation. In addition, another gauge element called a ''shock tracker'' has been used to monitor the progress of the shock front as a function of time and position as it moves through the explosive sample. This provides data similar to that obtained in a traditional explosively driven wedge test and is used to determine the position and time that the wave attains detonation. Run distance-to-detonation vs. input pressure (Pop-plot) data and particle velocity wave profile data have been obtained on new PBX 9501 pressed to densities of 1.826, 1.830, and 1.837 g/cm{sup 3}. In addition, the same measurements were performed on aged material recovered from dismantled W76 and W78 weapons. The input pressure range covered was 3.0 to 5.2 GPa. All results to date show shock sensitivity to be a function only of the initial density and not of age. PBX 9501 shock initiates the same after 17 years in stockpile as it does on the day it is pressed. Particle velocity wave profiles show mixed heterogeneous initiation (growth in the front) and homogeneous initiation (growth behind the front).
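    Pop-plot data such as the run-distance-to-detonation results above are conventionally fit as a straight line in log-log space. A sketch of that fit using hypothetical data points, not values from the paper:

```python
import math

def pop_plot_fit(points):
    """Least-squares line log10(x_run) = a + b*log10(P) through
    (input pressure, run distance) Pop-plot points; returns (a, b)."""
    lx = [math.log10(p) for p, _ in points]
    ly = [math.log10(x) for _, x in points]
    n = len(points)
    mx, my = sum(lx) / n, sum(ly) / n
    b = sum((u - mx) * (v - my) for u, v in zip(lx, ly)) \
        / sum((u - mx) ** 2 for u in lx)
    return my - b * mx, b
```

    The slope b is negative: a higher input pressure gives a shorter run to detonation.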

  16. A cumulative shear mechanism for tissue damage initiation in shock-wave lithotripsy

    PubMed Central

    Freund, Jonathan B.; Colonius, Tim; Evan, Andrew P.

    2007-01-01

    Evidence suggests that inertial cavitation plays an important role in the renal injury incurred during shock-wave lithotripsy. However, it is unclear how tissue damage is initiated, and significant injury typically occurs only after a sufficient dose of shock waves. While it has been suggested that shock-induced shearing might initiate injury, estimates indicate that individual shocks do not produce sufficient shear to do so. In this paper, we hypothesize that the cumulative shear of the many shocks is damaging. This mechanism depends upon whether there is sufficient time between shocks for tissue to relax to its unstrained state. We investigate the mechanism with a physics-based simulation model wherein the basement membranes that define the tubules and vessels in the inner medulla are represented as elastic shells surrounded by viscous fluid. Material properties are estimated from in vitro tests of renal basement membranes and documented mechanical properties of cells and extracellular gels. Estimates for the net shear deformation from a typical lithotripter shock (~0.1%) are found from a separate dynamic shock simulation. The results suggest that the larger interstitial volume (~40%) near the papilla tip gives the tissue there a relaxation time comparable to clinical shock delivery rates (~1 Hz), thus allowing shear to accumulate. Away from the papilla tip, where the interstitial volume is smaller (≲ 20%), the model tissue relaxes completely before the next shock would be delivered. Implications of the model are that slower delivery rates and broader focal zones should both decrease injury, consistent with some recent observations. PMID:17507147
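    The rate-dependence argument in this abstract can be captured in a toy recurrence: each shock adds a fixed increment of shear strain, which relaxes exponentially before the next shock arrives. A sketch with illustrative numbers, not the paper's simulation model:

```python
import math

def accumulated_shear(gamma_per_shock, relax_time_s, rate_hz, n_shocks):
    """Strain after n_shocks when each shock adds gamma_per_shock and the
    tissue relaxes with time constant relax_time_s over the 1/rate_hz
    interval between successive shocks."""
    decay = math.exp(-1.0 / (rate_hz * relax_time_s))
    strain = 0.0
    for _ in range(n_shocks):
        strain = strain * decay + gamma_per_shock
    return strain
```

    When relaxation is fast compared to the delivery interval, the strain never exceeds the single-shock value (~0.1% in the paper's estimate); when relaxation is slow, as near the papilla tip, the strain accumulates toward gamma_per_shock / (1 - decay), illustrating why slower delivery rates should decrease injury.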

  17. An evaluation of the reliability and usefulness of external-initiator PRA (probabilistic risk analysis) methodologies

    SciTech Connect

    Budnitz, R.J.; Lambert, H.E.

    1990-01-01

    The discipline of probabilistic risk analysis (PRA) has become so mature in recent years that it is now being used routinely to assist decision-making throughout the nuclear industry. This includes decision-making that affects design, construction, operation, maintenance, and regulation. Unfortunately, not all sub-areas within the larger discipline of PRA are equally mature, and therefore the many different types of engineering insights from PRA are not all equally reliable. 93 refs., 4 figs., 1 tab.

  18. Shock initiated reactions of reactive multi-phase blast explosives

    NASA Astrophysics Data System (ADS)

    Wilson, Dennis; Granier, John; Johnson, Richard; Littrell, Donald

    2017-01-01

    This paper describes a new class of non-ideal explosive compositions made of perfluoropolyether (PFPE), nanoaluminum, and a micron-size, high mass density, reactive metal. Unlike high explosives, these compositions release energy via a fast self-oxidized combustion wave rather than a true self-sustaining detonation. Their reaction rates are shock dependent and they can be overdriven to change their energy release rate. These compositions are fuel rich and have an extended aerobic energy release phase. The term "reactive multiphase blast" refers to the post-dispersion blast behavior: multiphase in that there is a gas phase that imparts pressure and a solid (particulate) phase that imparts energy and momentum [1]; and reactive in that the hot metal particles react with atmospheric oxygen and the explosive gas products to give an extended pressure pulse. Tantalum-based RMBX formulations were tested in two spherical core-shell configurations - an RMBX shell exploded by a high explosive core, and an RMBX core imploded by a high explosive shell. The fireball and blast characteristics were compared to a C-4 baseline charge.

  19. Shock initiation experiments with ignition and growth modeling on low density composition B

    NASA Astrophysics Data System (ADS)

    Vandersall, Kevin S.; Garcia, Frank; Tarver, Craig M.

    2017-01-01

    Shock initiation experiments on low density (˜1.2 and ˜1.5 g/cm3) Composition B were performed to obtain in-situ pressure gauge data, characterize the run-distance-to-detonation behavior, and provide a basis for Ignition and Growth reactive flow modeling. A 101 mm diameter gas gun was utilized to initiate the explosive charges with manganin piezoresistive pressure gauge packages placed between packed layers (˜1.2 g/cm3) confined in Teflon rings or sample disks pressed to low density (˜1.5 g/cm3). The shock sensitivity was found to increase with decreasing density as expected. Ignition and Growth model parameters were derived that yielded reasonable agreement with the experimental data at both initial densities. The shock sensitivity at the tested densities are compared to prior published work with near full density material.

  20. Large Area and Short-Pulse Shock Initiation of a TATB/HMX Mixed Explosive

    NASA Astrophysics Data System (ADS)

    Guiji, Wang; Chengwei, Sun; Jun, Chen; Cangli, Liu; Jianheng, Zhao; Fuli, Tan; Ning, Zhang

    2007-12-01

    The large area and short-pulse shock initiation experiments on the plastic bonded mixed explosive of TATB (80%) and HMX (15%) have been performed with an electric gun, where a Mylar flyer of 10-19 mm in diameter and 0.05˜0.30 mm in thickness was launched by an electrically exploding metallic bridge foil. The cylindrical explosive specimens (Φ16 mm×8 mm in size) were initiated by Mylar flyers 0.07˜0.20 mm in thickness, which induced shock pressures in the specimen with durations ranging from 0.029 to 0.109 μs. The experimental data were treated with the DRM (Delayed Robbins-Monro) procedure to provide initiation thresholds at 50% probability: flyer velocities of 3.398˜1.713 km/s and shock pressures P of 13.73˜5.23 GPa, respectively, for the different pulse durations. Shock initiation criteria of the explosive specimen at 50% and 100% probabilities are obtained. In addition, a 30° wedged sample was tested and the shock-to-detonation transition (SDT) process emerging on its inclined surface was diagnosed with a device consisting of multiple optical fiber probes, an optoelectronic transducer, and a digital oscilloscope. The POP plot of the explosive has been obtained from the above SDT data.
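    The Robbins-Monro procedure used to extract the 50%-probability thresholds above is a stochastic root-finder: the stimulus level is stepped down after a "go" result and up after a "no-go", with a gain that shrinks as 1/n. A generic sketch (our illustration; the delayed variant used in the paper differs in when the shrinking gain sequence starts):

```python
def robbins_monro_threshold(go, x0, steps=200, gain=1.0, p=0.5):
    """Estimate the stimulus level at which P(go) = p. `go(x)` performs
    one trial at level x and returns True (initiation) or False."""
    x = x0
    for n in range(1, steps + 1):
        y = 1.0 if go(x) else 0.0
        x -= (gain / n) * (y - p)   # step down after a go, up after a no-go
    return x
```

    Applied to flyer velocity with go/no-go firing results, the sequence converges to the 50%-probability initiation velocity.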

  1. Mesoscale modelling of shock initiation in HMX-based explosives

    SciTech Connect

    Swift, D. C.; Mulford, R. N. R.; Winter, R. E.; Taylor, P.; Salisbury, D. A.; Harris, E. J.

    2002-01-01

    Motivation: predictive capability. We want to predict initiation, detonics, and performance given variations in composition, variations in morphology, and different loading conditions. Previous work on PBX and ANFO shows the need for a physically-based model rather than just mechanical calibrations.

  2. Shock initiation of 2,4-dinitroimidazole (2,4-DNI)

    SciTech Connect

    Urtiew, P.A.; Tarver, C.M.; Simpson, R.L.

    1995-07-19

    The shock sensitivity of the pressed solid explosive 2,4-dinitroimidazole (2,4-DNI) was determined using the embedded manganin pressure gauge technique. At an initial shock pressure of 2 GPa, several microseconds were required before any exothermic reaction was observed. At 4 GPa, 2,4-DNI reacted more rapidly but did not transition to detonation at the 12 mm deep gauge position. At 6 GPa, detonation occurred in less than 6 mm of shock propagation. Thus, 2,4-DNI is more shock sensitive than TATB-based explosives but is considerably less shock sensitive than HMX-based explosives. An Ignition and Growth reactive flow model for 2,4-DNI based on these gauge records showed that 2,4-DNI exhibits shock initiation characteristics similar to TATB but reacts faster. The chemical structure of 2,4-DNI suggests that it may exhibit thermal decomposition reactions similar to nitroguanidine and explosives with similar ring structures, such as ANTA and NTO.

  3. Effect of Pressure Gradients on the Initiation of PBX-9502 via Irregular (Mach) Reflection of Low Pressure Curved Shock Waves

    SciTech Connect

    Hull, Lawrence Mark; Miller, Phillip Isaac; Moro, Erik Allan

    2016-11-28

    In the instance of multiple fragment impact on cased explosive, isolated curved shocks are generated in the explosive. These curved shocks propagate and may interact and form irregular or Mach reflections along the interaction loci, thereby producing a single shock that may be sufficient to initiate PBX-9501. However, the incident shocks are divergent and their intensity generally decreases as they expand, and the regions behind the Mach stem interaction loci are generally unsupported and allow release waves to rapidly affect the flow. The effects of release waves and divergent shocks may be considered theoretically through a “Shock Change Equation”.

  4. Large Area and Short Pulsed Shock Initiation of a TATB/HMX Mixed Explosive

    NASA Astrophysics Data System (ADS)

    Wang, Guiji; Sun, Chengwei; Chen, Jun; Liu, Cangli; Tan, Fuli; Zhang, Ning

    2007-06-01

    The large area and short pulsed shock initiation experiment on a plastic bonded mixed explosive of TATB (80%) and HMX (15%) has been performed with an electric gun, where a Mylar flyer of 19 mm in diameter and 0.05˜0.30 mm in thickness is launched by an electrically exploding metallic bridge foil. The cylindrical explosive specimens (φ16 mm x 8 mm in size) were initiated by Mylar flyers 0.07˜0.20 mm in thickness, which induced shock pressures in the specimen with durations ranging from 0.029 to 0.109 μs. The experimental data were treated with the DRM (Delayed Robbins-Monro) procedure to provide threshold shock pressures P of 13.73˜5.23 GPa. The shock initiation criterion of the explosive specimen at 50% probability is (P/GPa)^1.451(τ/μs) = 1.2, while the criterion at 100% probability in the experiment is (P/GPa)^1.8(τ/μs) = 2.63. In addition, a 30° wedged specimen was tested and the shock-to-detonation transition (SDT) process emerging on its inclined surface was diagnosed with a device consisting of multiple optical fiber probes, an optoelectronic transducer, and a digital oscilloscope. The POP plot of the explosive has been obtained from the above SDT data.
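    The power-law criteria quoted above can be checked directly. A small sketch that evaluates the 50%-probability criterion (P/GPa)^1.451 (τ/μs) = 1.2 and inverts it for the threshold pressure at a given pulse duration:

```python
def initiates_50pct(p_gpa, tau_us, n=1.451, k=1.2):
    """True if a pulse satisfies the abstract's 50%-probability
    short-pulse initiation criterion (P/GPa)^n * (tau/us) >= k."""
    return p_gpa ** n * tau_us >= k

def threshold_pressure(tau_us, n=1.451, k=1.2):
    """Threshold pressure in GPa for a pulse of duration tau_us (in us)."""
    return (k / tau_us) ** (1.0 / n)
```

    For the longest pulse quoted (0.109 μs), this gives a threshold of about 5.2 GPa, consistent with the 5.23 GPa endpoint reported in the abstract.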

  5. Study of void collapse leading to shock initiation and ignition in heterogeneous energetic material

    NASA Astrophysics Data System (ADS)

    Rai, Nirmal Kumar; Koundinyan, Sushilkumar Prabu; Udaykumar, H. S.

    2015-06-01

    In heterogeneous energetic materials like PBX, porosity plays an important role in shock initiation and ignition, because the collapse of voids under shock loading leads to the formation of local high-temperature regions termed hot spots. Hot spots can form through several mechanisms, such as plastic deformation of voids and hydrodynamic impact on voids leading to the formation of high-speed material jets. Once these hot spots are formed, they can lead to reaction and ignition in the explosive material. However, diffusive phenomena like heat conduction can play an important role in shock initiation because, depending on the size and intensity of the void-collapse hot spots, local ignition conditions can be smeared out. In the current work, void collapse leading to shock initiation and ignition in HMX has been studied using a massively parallel Eulerian code, SCIMITAR3D. The chemical kinetics of HMX decomposition and reaction has been modeled using the Henson-Smilowitz multi-step mechanism. Based on the current framework, an ignition criterion has been established for single-void collapse analysis for various shock strengths. Furthermore, the effects of void-void interactions have been analyzed, demonstrating the important role of the combination of void fraction, reaction chemistry, and heat conduction in determining the ignition threshold. This work has been funded by the AFRL-RWPC, Computational Mechanics Branch, Eglin AFB, Program Manager: Dr. Martin Schmidt.
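    The competition described above between reaction at a hot spot and heat conduction smearing it out can be caricatured as a timescale comparison. A sketch with illustrative Arrhenius and diffusivity parameters (our assumptions, not HMX-calibrated values):

```python
import math

def hot_spot_ignites(radius_m, temp_k, alpha=1e-7, z=5e12, ea_over_r=2.65e4):
    """Crude ignition check: a hot spot runs away if its Arrhenius
    induction time t_react ~ 1/(Z*exp(-Ea/(R*T))) is shorter than its
    conductive cooling time t_cool ~ r^2/alpha. Parameters illustrative."""
    t_react = 1.0 / (z * math.exp(-ea_over_r / temp_k))
    t_cool = radius_m ** 2 / alpha
    return t_react < t_cool
```

    The sketch reproduces the qualitative trend in the abstract: small or mild hot spots are quenched by conduction, while larger or hotter ones ignite.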

  6. Plantar Purpura as the Initial Presentation of Viridans Streptococcal Shock Syndrome Secondary to Streptococcus gordonii Bacteremia

    PubMed Central

    Liao, Chen-Yi; Su, Kuan-Jen; Lin, Cheng-Hui; Huang, Shu-Fang; Chin, Hsien-Kuo; Chang, Chin-Wen; Kuo, Wu-Hsien; Ben, Ren-Jy; Yeh, Yen-Cheng

    2016-01-01

    Viridans streptococcal shock syndrome is a subtype of toxic shock syndrome. Frequently, the diagnosis is missed initially because the clinical features are nonspecific. However, it is a rapidly progressive disease, manifested by hypotension, rash, palmar desquamation, and acute respiratory distress syndrome within a short period. The disease course is generally fulminant and rarely presents initially as a purpura over the plantar region. We present a case of a 54-year-old female hospital worker diagnosed with viridans streptococcal shock syndrome caused by Streptococcus gordonii. Despite aggressive antibiotic treatment, fluid hydration, and use of inotropes and extracorporeal membrane oxygenation, the patient succumbed to the disease. Early diagnosis of the potentially fatal disease followed by a prompt antibiotic regimen and appropriate use of steroids are cornerstones in the management of this disease to reduce the risk of high morbidity and mortality. PMID:27366188

  7. Effects of Initial Condition Spectral Content on Shock Driven-Turbulent Mixing

    SciTech Connect

    Nelson, Nicholas James; Grinstein, Fernando F.

    2015-07-15

    The mixing of materials due to the Richtmyer-Meshkov instability and the ensuing turbulent behavior is of intense interest in a variety of physical systems including inertial confinement fusion, combustion, and the final stages of stellar evolution. Extensive numerical and laboratory studies of shock-driven mixing have demonstrated the rich behavior associated with the onset of turbulence due to the shocks. Here we report on progress in understanding shock-driven mixing at interfaces between fluids of differing densities through three-dimensional (3D) numerical simulations using the RAGE code in the implicit large eddy simulation context. We consider a shock-tube configuration with a band of high density gas (SF6) embedded in low density gas (air). Shocks with a Mach number of 1.26 are passed through SF6 bands, resulting in transition to turbulence driven by the Richtmyer-Meshkov instability. The system is followed as a rarefaction wave and a reflected secondary shock from the back wall pass through the SF6 band. We apply a variety of initial perturbations to the interfaces between the two fluids in which the physical standard deviation, wave number range, and the spectral slope of the perturbations are held constant, but the number of modes initially present is varied. By thus decreasing the density of initial spectral modes of the interface, we find that we can achieve as much as 25% less total mixing at late times. This has potential direct implications for the treatment of initial conditions applied to material interfaces in both 3D and reduced dimensionality simulation models.
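    The initial-condition protocol described above (fixed standard deviation, wavenumber band, and spectral slope; varying mode count) can be sketched as follows, with all numerical choices ours rather than the authors':

```python
import math
import random

def interface_perturbation(n_modes, kmin, kmax, slope, std, n_x=256, seed=0):
    """1D multimode interface perturbation: cosine modes with power-law
    amplitudes a_k ~ k**slope and random phases on wavenumbers spanning
    [kmin, kmax], rescaled so the standard deviation equals `std`
    regardless of how many modes are used."""
    rng = random.Random(seed)
    ks = [kmin + i * (kmax - kmin) / max(n_modes - 1, 1)
          for i in range(n_modes)]
    ph = [rng.uniform(0.0, 2.0 * math.pi) for _ in ks]
    xs = [i / n_x for i in range(n_x)]
    h = [sum(k ** slope * math.cos(2.0 * math.pi * k * x + p)
             for k, p in zip(ks, ph)) for x in xs]
    mean = sum(h) / n_x
    rms = math.sqrt(sum((v - mean) ** 2 for v in h) / n_x)
    return [(v - mean) * (std / rms) for v in h]   # zero-mean, std = `std`
```

    Holding `std`, the band, and `slope` fixed while sweeping `n_modes` is exactly the kind of scan the abstract describes for isolating the effect of initial spectral mode density.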

  11. Shock

    MedlinePlus

    ... Many organs can be damaged as a result. Shock requires immediate treatment and can get worse very rapidly. As many as 1 in 5 people who suffer shock will die from it. Considerations The main types ...

  12. SHOCK INITIATION EXPERIMENTS AND MODELING OF COMPOSITION B AND C-4

    SciTech Connect

    Urtiew, P A; Vandersall, K S; Tarver, C M; Garcia, F; Forbes, J W

    2006-06-13

    Shock initiation experiments on the explosives Composition B and C-4 were performed to obtain in-situ pressure gauge data for the purpose of determining proper modeling parameters for the Ignition and Growth reactive flow model. A 101 mm diameter propellant driven gas gun was utilized to initiate the explosive charges containing manganin piezoresistive pressure gauge packages embedded in the explosive sample. Experimental data provided new information on the shock velocity versus particle velocity relationship for each of the investigated materials in their respective pressure range. The run-distance-to-detonation points on the Pop-plot for these experiments showed agreement with previously published data, and Ignition and Growth modeling calculations resulted in a good fit to the experimental data. These experimental data were used to determine Ignition and Growth reactive flow model parameters for these explosives. Identical ignition and growth reaction rate parameters were used for C-4 and Composition B, and the Composition B model also included a third reaction rate to simulate the completion of reaction by the TNT component. The Composition B model was then tested on existing short pulse duration, gap test, and projectile impact shock initiation data with good results. This Composition B model can be applied, with a high level of confidence in its predictions, to shock initiation scenarios that have not been or cannot be tested experimentally.
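
    The Pop-plot referenced above is conventionally a straight line in log-log space, log10(run distance) = a + b·log10(pressure). A minimal least-squares fit of that form, using made-up illustrative gauge points rather than the paper's data:

```python
import math

def fit_pop_plot(pressures_gpa, run_mm):
    """Least-squares fit of the Pop-plot line
    log10(run distance) = a + b * log10(pressure)."""
    lx = [math.log10(p) for p in pressures_gpa]
    ly = [math.log10(x) for x in run_mm]
    n = len(lx)
    mx, my = sum(lx) / n, sum(ly) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(lx, ly))
         / sum((x - mx) ** 2 for x in lx))
    return my - b * mx, b

# Illustrative (made-up) points: run distance shrinks as pressure rises.
P = [2.0, 3.0, 4.5, 6.0]          # input shock pressure, GPa
x_run = [20.0, 9.5, 4.6, 2.9]     # run distance to detonation, mm
a, b = fit_pop_plot(P, x_run)
print(round(a, 2), round(b, 2))   # b < 0: higher pressure, shorter run
```

    Agreement of new run-distance points with a previously published Pop-plot line is the consistency check the abstract describes.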

  13. Short pulse duration shock initiation experiments plus ignition and growth modeling on Composition B

    NASA Astrophysics Data System (ADS)

    May, Chadd M.; Tarver, Craig M.

    2014-05-01

    Composition B (63% RDX, 36% TNT, 1% wax) is still a widely used energetic material whose shock initiation characteristics are necessary to understand. It is now possible to shock initiate Composition B and other secondary explosives at diameters well below their characteristic failure diameters for unconfined self-sustaining detonation. This is done using very high velocity, very thin, small diameter flyer plates accelerated by electric or laser power sources. Recently, experimental detonation versus failure-to-detonate threshold flyer velocity curves for Composition B were measured using several Kapton™ flyer thicknesses and diameters. Flyer plates with diameters of 2 mm successfully detonated Composition B, which has a nominal failure diameter of 4.3 mm. The shock pressures required for these initiations are greater than the Chapman-Jouguet (C-J) pressure in self-sustaining Composition B detonation waves. The initiation process is two-dimensional, because both rear and side rarefactions can affect the shocked Composition B reaction rates. The Ignition and Growth reactive flow model for Composition B is extended to yield accurate simulations of this new threshold velocity data for various flyer thicknesses.

  14. Modeling the shock initiation of PBX 9501 in ALE3D

    SciTech Connect

    Mace, Jonathan; Mas, Eric M; Leininger, Lara; Springer, H Keo

    2008-01-01

    The SMIS (Specific Munitions Impact Scenario) experimental series performed at Los Alamos National Laboratory has determined the 3-dimensional shock initiation behavior of the HMX-based heterogeneous high explosive PBX 9501, which has a PMMA case and a steel impact cover. The SMIS real-world shot scenario creates a unique test-bed because many of the fragments arrive at the impact plate off-center and at an angle of impact. The goal of these model validation experiments is to demonstrate the predictive capability of the Tarver-Lee Ignition and Growth (I&G) reactive flow model in this fully 3-dimensional regime of Shock to Detonation Transition (SDT).

  15. Initial conditions and modeling for simulations of shock driven turbulent material mixing

    SciTech Connect

    Grinstein, Fernando F.

    2016-11-17

    Here, we focus on the simulation of shock-driven material mixing driven by flow instabilities and initial conditions (IC). Beyond complex multi-scale resolution issues of shocks and variable density turbulence, we must address the equally difficult problem of predicting flow transition promoted by energy deposited at the material interfacial layer during the shock interface interactions. Transition involves unsteady large-scale coherent-structure dynamics capturable by a large eddy simulation (LES) strategy, but not by an unsteady Reynolds-Averaged Navier–Stokes (URANS) approach based on developed equilibrium turbulence assumptions and single-point-closure modeling. On the engineering end of computations, such URANS models, with reduced 1D/2D dimensionality and coarser grids, tend to be preferred for faster turnaround in full-scale configurations.

  16. Initial conditions and modeling for simulations of shock driven turbulent material mixing

    DOE PAGES

    Grinstein, Fernando F.

    2016-11-17

    Here, we focus on the simulation of shock-driven material mixing driven by flow instabilities and initial conditions (IC). Beyond complex multi-scale resolution issues of shocks and variable density turbulence, we must address the equally difficult problem of predicting flow transition promoted by energy deposited at the material interfacial layer during the shock interface interactions. Transition involves unsteady large-scale coherent-structure dynamics capturable by a large eddy simulation (LES) strategy, but not by an unsteady Reynolds-Averaged Navier–Stokes (URANS) approach based on developed equilibrium turbulence assumptions and single-point-closure modeling. On the engineering end of computations, such URANS models, with reduced 1D/2D dimensionality and coarser grids, tend to be preferred for faster turnaround in full-scale configurations.

  17. Manganin Gauge and Reactive Flow Modeling Study of the Shock Initiation of PBX 9501

    SciTech Connect

    Tarver, C M; Forbes, J W; Garcia, F; Urtiew, P A

    2001-06-05

    A series of 101mm diameter gas gun experiments was fired using manganin pressure gauges embedded in the HMX-based explosive PBX 9501 at initial temperatures of 20 C and 50 C. Flyer plate impact velocities were chosen to produce impact pressure levels in PBX 9501 at which the growth of explosive reaction preceding detonation was measured on most of the gauges and detonation pressure profiles were recorded on some of the gauges placed deepest into the explosive targets. All measured pressure histories for initial temperatures of 25 C and 50 C were essentially identical. Measured run distances to detonation at several input shock pressures agreed with previous results. An existing ignition and growth reactive flow computer model for shock initiation and detonation of PBX 9501, which was developed based on LANL embedded particle velocity gauge data, was tested on these pressure gauge results. The agreement was excellent, indicating that the embedded pressure and particle velocity gauge techniques yielded consistent results.

  18. Multiphysics Simulations of Hot-Spot Initiation in Shocked Insensitive High-Explosive

    NASA Astrophysics Data System (ADS)

    Najjar, Fady; Howard, W. M.; Fried, L. E.

    2010-11-01

    Solid plastic-bonded high-explosive materials consist of crystals with embedded micron-sized pores. Under mechanical or thermal insults, these voids increase the ease of shock initiation by generating high-temperature regions during their collapse that might lead to ignition. Understanding the mechanisms of hot-spot initiation is of significant research interest for safety, reliability, and the development of new insensitive munitions. Multi-dimensional high-resolution meso-scale simulations are performed using the multiphysics software ALE3D to understand the hot-spot initiation. The Cheetah code is coupled to ALE3D, creating multi-dimensional sparse tables for the HE properties. The reaction rates were obtained from quantum molecular dynamics computations. Our current predictions showcase several interesting features regarding hot spot dynamics, including the formation of a "secondary" jet. We will discuss the results obtained with hydro-thermo-chemical processes leading to ignition growth for various pore sizes and different shock pressures.

  19. Development of Simplified Probabilistic Risk Assessment Model for Seismic Initiating Event

    SciTech Connect

    S. Khericha; R. Buell; S. Sancaktar; M. Gonzalez; F. Ferrante

    2012-06-01

    This paper discusses a simplified method to evaluate seismic risk using a methodology built on dividing the seismic intensity spectrum into multiple discrete bins. The seismic probabilistic risk assessment model uses the Nuclear Regulatory Commission's (NRC's) full power Standardized Plant Analysis Risk (SPAR) model as the starting point for development. The seismic PRA models are integrated with their respective internal events at-power SPAR model. This is accomplished by combining the modified system fault trees from the full power SPAR model with seismic event tree logic. The peak ground acceleration is divided into five bins. The g-value for each bin is estimated using the geometric mean of the lower and upper values of that particular bin, and the associated frequency for each bin is estimated by taking the difference between the upper and lower values of that bin. The components' fragilities are calculated for each bin using the plant data, if available, or generic values of median peak ground acceleration and uncertainty values for the components. For human reliability analysis (HRA), the SPAR HRA (SPAR-H) method is used, which requires the analysts to complete relatively straightforward worksheets that include the performance shaping factors (PSFs). The results are then used to estimate human error probabilities (HEPs) of interest. This work is expected to improve the NRC's ability to include seismic hazards in risk assessments for operational events in support of the reactor oversight program (e.g., significance determination process).
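
    The binning recipe above (representative g-value as the geometric mean of the bin bounds, bin frequency as the difference of exceedance frequencies at those bounds) is simple enough to sketch directly. The hazard-curve numbers below are illustrative, not from any plant.

```python
import math

def binned_hazard(bin_edges_g, exceed_freq):
    """Discretize a seismic hazard curve into PGA bins, following the
    recipe above: representative g-value = geometric mean of the bin
    bounds; bin frequency = difference of the annual exceedance
    frequencies at the lower and upper bounds."""
    bins = []
    for i in range(len(bin_edges_g) - 1):
        g_rep = math.sqrt(bin_edges_g[i] * bin_edges_g[i + 1])
        freq = exceed_freq[i] - exceed_freq[i + 1]
        bins.append((g_rep, freq))
    return bins

# Hypothetical five-bin hazard curve (illustrative numbers only).
edges = [0.1, 0.2, 0.4, 0.8, 1.6, 3.2]          # PGA bounds, g
freqs = [1e-3, 4e-4, 1e-4, 2e-5, 3e-6, 4e-7]    # annual exceedance
for g, f in binned_hazard(edges, freqs):
    print(round(g, 3), f)
```

    Each (g, frequency) pair then drives one seismic event tree, with component fragilities evaluated at that bin's representative acceleration.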

  20. A numerical study of initial-stage interaction between shock and particle curtain

    NASA Astrophysics Data System (ADS)

    Deng, Xiaolong; Jiang, Lingjie

    2016-11-01

    High speed particulate flow appears in many scientific and engineering problems. Wagner et al. (2012) studied the planar shock - particle curtain interaction experimentally and found the movement and expansion of the particle curtain, together with the movement of shock waves. Theofanous et al. (2016) performed similar experiments and discovered a time scaling that reveals a universal regime for cloud expansion. In these experiments, both the particle-fluid interactions and the particle-particle collisions are not negligible, which makes the problem challenging to treat. This work aims to numerically study and understand this problem. Applying the stratified multiphase model presented by Chang & Liou (2007) and regarding one phase as solid, following Regele et al. (2014), we study the initial stage of a planar shock impacting on a particle curtain in 2D, in which the particles can be regarded as static so that collisions between particles are not considered. The locations of the reflected shock, transmitted shock, and contact discontinuity are examined. The turbulent energy generated in the interacting area is investigated. Keeping the total volume fraction of particles fixed while changing the particle number, good convergence results are obtained. The effective drag coefficient in a 1D model is also calibrated. The authors acknowledge the support from the National Natural Science Foundation of China (Grant No. 91230203).

  1. Probabilistic distributions of M/L values for ultrafaint dwarf spheroidal galaxies: stochastic samplings of the initial mass function

    NASA Astrophysics Data System (ADS)

    Hernandez, X.

    2012-02-01

    We explore the ranges and distributions which will result for the intrinsic stellar mass-to-light ratio (M/L) values of single stellar populations, at fixed initial mass function (IMF), age and metallicity, from the discrete stochastic sampling of a probabilistic IMF. As the total mass of a certain stellar population tends to infinity, the corresponding M/L values quickly converge to fixed numbers associated with the particulars of the IMF, age, metallicity and star formation histories in question. When going to small stellar populations, however, a natural inherent spread will appear for the M/L values, which will become probabilistic quantities. For the recently discovered ultrafaint local dwarf spheroidal galaxies, with total luminosities dropping below 10³ LV/L⊙, it is important to assess the amplitude of the probabilistic spread in inherent M/L values mentioned above. The total baryonic masses of these systems are usually estimated from their observed luminosities and the assumption of a fixed, deterministic M/L value suitable for the infinite population limit of the assumed ages and metallicities of the stellar populations in question. These total baryonic masses are crucial for the testing and calibration of structure formation scenarios, as the local ultrafaint dwarf spheroidals represent the most extreme galactic scales known. Reliable M/L values are also required when using these systems as possible discriminants between dark matter and modified gravity theories. By simulating large collections of stellar populations, each consisting of a particular collection of individual stars, we compute statistical distributions for the resulting M/L values. We find that for total numbers of stars in the range of what is observed for the local ultrafaint dwarf spheroidals, the inherent M/L values of stellar populations can be expected to vary by factors of upwards of 3, interestingly, systematically skewed towards higher values than what is obtained for the
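
    The stochastic-sampling experiment described above can be mimicked with a toy Monte Carlo: draw stars from a power-law IMF until a target total mass is reached, assign each a luminosity, and repeat to build an M/L distribution. The Salpeter slope and the L ∝ m^3.5 mass-luminosity law below are illustrative assumptions, not the paper's stellar models.

```python
import random

def sample_imf_mass(rng, alpha=2.35, m_min=0.1, m_max=100.0):
    """Draw one stellar mass (solar masses) from a power-law IMF
    dN/dm ~ m**-alpha via inverse-transform sampling."""
    a = 1.0 - alpha
    u = rng.random()
    return (m_min ** a + u * (m_max ** a - m_min ** a)) ** (1.0 / a)

def population_M_over_L(total_mass, rng):
    """Assemble a population star by star until total_mass is reached;
    the toy main-sequence law L ~ m**3.5 is an illustrative assumption."""
    m_tot = lum = 0.0
    while m_tot < total_mass:
        m = sample_imf_mass(rng)
        m_tot += m
        lum += m ** 3.5
    return m_tot / lum

rng = random.Random(42)
for m_target in (1e2, 1e3, 1e4):
    samples = [population_M_over_L(m_target, rng) for _ in range(100)]
    # the min-max spread of M/L narrows toward the infinite-population limit
    print(int(m_target), round(max(samples) / min(samples), 1))
```

    The spread for small populations is driven by the rare luminous massive stars, which is also why the distribution is skewed rather than symmetric.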

  2. Timing of vasopressor initiation and mortality in septic shock: a cohort study

    PubMed Central

    2014-01-01

    Introduction Despite recent advances in the management of septic shock, mortality remains unacceptably high. Earlier initiation of key therapies including appropriate antimicrobials and fluid resuscitation appears to reduce the mortality in this condition. This study examined whether early initiation of vasopressor therapy is associated with improved survival in fluid therapy-refractory septic shock. Methods Utilizing a well-established database, relevant information including duration of time to vasopressor administration following the initial documentation of recurrent/persistent hypotension associated with septic shock was assessed in 8,670 adult patients from 28 ICUs in Canada, the United States of America, and Saudi Arabia. The primary endpoint was survival to hospital discharge. Secondary endpoints were length of ICU and hospital stay as well as duration of ventilator support and vasopressor dependence. Analysis involved multivariate linear and logistic regression analysis. Results In total, 8,640 patients met the definition of septic shock with time of vasopressor/inotropic initiation documented. Of these, 6,514 were suitable for analysis. The overall unadjusted hospital mortality rate was 53%. Independent mortality correlates included liver failure (odds ratio (OR) 3.46, 95% confidence interval (CI), 2.67 to 4.48), metastatic cancer (OR 1.63, CI, 1.32 to 2.01), AIDS (OR 1.91, CI, 1.29 to 2.49), hematologic malignancy (OR 1.88, CI, 1.46 to 2.41), neutropenia (OR 1.78, CI, 1.27 to 2.49) and chronic hypertension (OR 0.62 CI, 0.52 to 0.73). Delay of initiation of appropriate antimicrobial therapy (OR 1.07/hr, CI, 1.06 to 1.08), age (OR 1.03/yr, CI, 1.02 to 1.03), and Acute Physiology and Chronic Health Evaluation (APACHE) II Score (OR 1.11/point, CI, 1.10 to 1.12) were also found to be significant independent correlates of mortality. 
After adjustment, only a weak correlation between vasopressor delay and hospital mortality was found (adjusted OR 1.02/hr, 95% CI
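
    The adjusted odds ratios quoted above come from multivariate logistic regression; converting a fitted coefficient and its standard error into an OR with a 95% confidence interval is a one-line transformation. The coefficient below is a hypothetical illustration, not a value from this study.

```python
import math

def odds_ratio(beta, se, z=1.96):
    """Convert a logistic-regression coefficient and standard error into
    an odds ratio with a 95% confidence interval:
    OR = exp(beta), CI = exp(beta +/- z * se)."""
    return math.exp(beta), math.exp(beta - z * se), math.exp(beta + z * se)

# Hypothetical coefficient per hour of delay (illustrative, not from the study).
or_, lo, hi = odds_ratio(beta=0.0198, se=0.005)
print(round(or_, 2), round(lo, 2), round(hi, 2))  # → 1.02 1.01 1.03
```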

  3. Structure of Shocks in Burgers Turbulence with Lévy Noise Initial Data

    NASA Astrophysics Data System (ADS)

    Abramson, Joshua

    2013-08-01

    We study the structure of the shocks for the inviscid Burgers equation in dimension 1 when the initial velocity is given by Lévy noise, or equivalently when the initial potential is a two-sided Lévy process ψ0. When ψ0 is abrupt in the sense of Vigon or has bounded variation with lim sup_{|h|↓0} h^{-2} ψ0(h) = ∞, we prove that the set of points with zero velocity is regenerative, and that in the latter case this set is equal to the set of Lagrangian regular points, which is non-empty. When ψ0 is abrupt we show that the shock structure is discrete. When ψ0 is eroded we show that there are no rarefaction intervals.
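
    The inviscid Burgers dynamics underlying these results admit the Hopf-Lax variational solution u(x,t) = (x − y*(x))/t with y*(x) = argmin_y [ψ0(y) + (x − y)²/(2t)]; shocks sit exactly where the minimizer y*(x) jumps. A brute-force numerical sketch, with a discrete heavy-tailed stand-in for the Lévy initial potential (the sign convention and the toy noise are assumptions):

```python
import random

def hopf_lax(ys, psi, xs, t):
    """Inviscid Burgers via the Hopf-Lax formula:
    u(x, t) = (x - y*) / t,  y* = argmin_y [ psi0(y) + (x - y)**2 / (2t) ].
    Returns (y*, u) at each x; shocks sit where y*(x) jumps."""
    sol = []
    for x in xs:
        j = min(range(len(ys)),
                key=lambda i: psi[i] + (x - ys[i]) ** 2 / (2.0 * t))
        sol.append((ys[j], (x - ys[j]) / t))
    return sol

# Toy stand-in for a two-sided Levy initial potential: i.i.d. increments
# with occasional larger jumps (rng.random()**2 concentrates mass near 0).
rng = random.Random(1)
n = 400
ys = [i / n for i in range(n)]
psi = [0.0]
for _ in range(n - 1):
    psi.append(psi[-1] + rng.choice((-1.0, 1.0)) * rng.random() ** 2 / n)

xs = [i / 200 for i in range(200)]
sol = hopf_lax(ys, psi, xs, t=0.5)
jumps = sum(1 for a, b in zip(sol, sol[1:]) if b[0] - a[0] > 2.0 / n)
print("jumps in the Lagrangian map:", jumps)
```

    The Lagrangian map y*(x) is nondecreasing, so its jump points are well defined; the paper's results concern the fine structure of exactly this jump set in the continuum limit.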

  4. Mesoscale simulations of shock initiation in energetic materials characterized by three-dimensional nanotomography.

    SciTech Connect

    Long, Gregory T.; Brundage, Aaron L.; Wixom, Ryan R.; Tappan, Alexander Smith

    2009-08-01

    Three-dimensional shock simulations of energetic materials have been conducted to improve our understanding of initiation at the mesoscale. Vapor-deposited films of PETN and pressed powders of HNS were characterized with a novel three-dimensional nanotomographic technique. Detailed microstructures were constructed experimentally from a stack of serial electron micrographs obtained by successive milling and imaging in a dual-beam FIB/SEM. These microstructures were digitized and imported into a multidimensional, multimaterial Eulerian shock physics code. The simulations provided insight into the mechanisms of pore collapse in PETN and HNS samples with distinctly different three-dimensional pore morphology and distribution. This modeling effort supports investigations of microscale explosive phenomenology and elucidates mechanisms governing initiation of secondary explosives.

  5. Mesoscale Simulations of Shock Initiation in Energetic Materials Characterized by Three-Dimensional Nanotomography

    NASA Astrophysics Data System (ADS)

    Brundage, A. L.; Wixom, R. R.; Tappan, A. S.; Long, G. T.

    2009-12-01

    Three-dimensional shock simulations of energetic materials have been conducted to improve our understanding of initiation at the mesoscale. Vapor-deposited films of PETN and pressed powders of HNS were characterized with a novel three-dimensional nanotomographic technique. Detailed microstructures were constructed experimentally from a stack of serial electron micrographs obtained by successive milling and imaging in a dual-beam FIB/SEM. These microstructures were digitized and imported into a multidimensional, multimaterial Eulerian shock physics code. The simulations provided insight into the mechanisms of pore collapse in PETN and HNS samples with distinctly different three-dimensional pore morphology and distribution. This modeling effort supports investigations of microscale explosive phenomenology and elucidates mechanisms governing initiation of secondary explosives.

  6. Mesoscale simulations of shock initiation in energetic materials characterized by three-dimensional nanotomography

    NASA Astrophysics Data System (ADS)

    Brundage, Aaron; Wixom, Ryan; Tappan, Alexander; Long, Gregory

    2009-06-01

    Three-dimensional reverse ballistic shock simulations of energetic materials have been conducted to improve our understanding of initiation at the mesoscale. Vapor-deposited films of PETN and pressed powders of HNS were characterized with a novel three-dimensional nanotomographic technique. Detailed microstructures were constructed experimentally from a stack of serial electron micrographs obtained by successive milling and imaging in a dual-beam FIB/SEM. These microstructures were digitized and imported into a multidimensional, multimaterial Eulerian shock physics code. The simulations provided insight into the mechanisms of pore collapse in PETN and HNS samples with distinctly different three-dimensional pore morphology and distribution. This modeling effort supports the novel design and development of microenergetic devices and elucidates mechanisms governing initiation of secondary explosives.

  7. Next generation experiments and models for shock initiation and detonation of solid explosives

    SciTech Connect

    Tarver, C M

    1999-06-01

    Current phenomenological hydrodynamic reactive flow models, such as Ignition and Growth and Johnson-Tang-Forest, when normalized to embedded gauge and laser velocimetry data, have been very successful in predicting shock initiation and detonation properties of solid explosives in most scenarios. However, since these models use reaction rates based on the compression and pressure of the reacting mixture, they cannot easily model situations in which the local temperature, which controls the local reaction rate, changes differently from the local pressure. With the advent of larger, faster, parallel computers, microscopic modeling of the hot spot formation processes and Arrhenius chemical kinetic reaction rates that dominate shock initiation and detonation can now be attempted. Such a modeling effort cannot be successful without nanosecond or better time resolved experimental data on these processes. The experimental and modeling approaches required to build the next generation of physically realistic reactive flow models are discussed.
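
    The temperature-controlled kinetics contrasted above with pressure-based rates can be illustrated by a single-step Arrhenius thermal-runaway integration; the values of Z, Ea/R, and q/cv below are generic round numbers, not parameters for any real explosive.

```python
import math

def arrhenius_induction_time(T0, Z=1.0e12, Ea_over_R=1.5e4,
                             q_over_cv=2000.0, dt=1e-9, t_max=1e-3):
    """Single-step Arrhenius thermal runaway:
      dlam/dt = Z * (1 - lam) * exp(-Ea / (R * T)),  dT/dt = (q/cv) * dlam/dt
    Forward-Euler integration; returns the time (s) to 50% reaction,
    or None if no runaway occurs within t_max."""
    T, lam, t = T0, 0.0, 0.0
    while t < t_max:
        rate = Z * (1.0 - lam) * math.exp(-Ea_over_R / T)
        lam += rate * dt
        T += q_over_cv * rate * dt
        t += dt
        if lam >= 0.5:
            return t
    return None

for T0 in (900.0, 1000.0, 1100.0):
    print(T0, arrhenius_induction_time(T0))  # hotter hot spots ignite sooner
```

    The strong exponential sensitivity of induction time to hot spot temperature is exactly what pressure-based reaction rates cannot capture when temperature and pressure decouple.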

  8. The pharmacokinetics of vancomycin during the initial loading dose in patients with septic shock

    PubMed Central

    Katip, Wasan; Jaruratanasirikul, Sutep; Pattharachayakul, Sutthiporn; Wongpoowarak, Wibul; Jitsurong, Arnurai; Lucksiri, Aroonrut

    2016-01-01

    Objective To characterize the pharmacokinetics (PK) of vancomycin in patients in the initial phase of septic shock. Methods Twelve patients with septic shock received an intravenous infusion of vancomycin 30 mg/kg over 2 h. The vancomycin PK study was conducted during the first 12 h of the regimen. Serum vancomycin concentration–time data were analyzed using the standard model-independent analysis and the compartment model. Results For the noncompartment analysis the mean values ± standard deviation (SD) of the estimated clearance and volume of distribution of vancomycin at steady state were 6.05±1.06 L/h and 78.73±21.78 L, respectively. For the compartmental analysis, the majority of vancomycin concentration–time profiles were best described by a two-compartment PK model. Thus, the two-compartmental first-order elimination model was used for the analysis. The mean ± SD of the total clearance (3.70±1.25 L/h) of vancomycin was higher than that obtained from patients without septic shock. In contrast, the volume of the central compartment (8.34±4.36 L) and volume of peripheral compartment (30.99±7.84 L) did not increase when compared with patients without septic shock. Conclusion The total clearance of vancomycin was increased in septic shock patients. However, the volume of the central compartment and peripheral compartment did not increase. Consequently, a loading dose of vancomycin should be considered in all patients with septic shock. PMID:27920562
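
    The two-compartment, first-order-elimination model selected above can be sketched with a simple forward-Euler integration of the infusion. The CL, Vc, and Vp values are the means reported in the abstract; the intercompartmental clearance Q and the 70 kg body weight are illustrative assumptions, since neither is quoted here.

```python
def two_compartment_infusion(dose_mg, t_inf_h, CL, Vc, Vp, Q,
                             t_end_h, dt=0.001):
    """Two-compartment, first-order-elimination PK model:
      dAc/dt = R_in - (CL/Vc)*Ac - (Q/Vc)*Ac + (Q/Vp)*Ap
      dAp/dt =                  (Q/Vc)*Ac - (Q/Vp)*Ap
    Forward-Euler; returns (time_h, serum concentration Ac/Vc in mg/L)."""
    Ac = Ap = 0.0
    curve = []
    for i in range(int(t_end_h / dt)):
        t = i * dt
        R = dose_mg / t_inf_h if t < t_inf_h else 0.0  # infusion rate, mg/h
        dAc = R - (CL / Vc) * Ac - (Q / Vc) * Ac + (Q / Vp) * Ap
        dAp = (Q / Vc) * Ac - (Q / Vp) * Ap
        Ac += dAc * dt
        Ap += dAp * dt
        curve.append((t + dt, Ac / Vc))
    return curve

# CL, Vc, Vp from the abstract's means; Q and 70 kg weight are assumptions.
curve = two_compartment_infusion(dose_mg=30 * 70, t_inf_h=2.0,
                                 CL=3.70, Vc=8.34, Vp=30.99, Q=8.0,
                                 t_end_h=12.0)
peak_t, peak_c = max(curve, key=lambda p: p[1])
print(round(peak_t, 2), round(peak_c, 1))  # peak at the end of the infusion
```

    The elevated clearance reported in septic shock lowers the tail of this curve, which is why the authors still recommend a loading dose to reach early target concentrations.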

  9. SHOCK INITIATION OF COMPOSITION B AND C-4 EXPLOSIVES; EXPERIMENTS AND MODELING

    SciTech Connect

    Urtiew, P A; Vandersall, K S; Tarver, C M; Garcia, F; Forbes, J W

    2006-08-18

    Shock initiation experiments on the explosives Composition B and C-4 were performed to obtain in-situ pressure gauge data for the purpose of providing the Ignition and Growth reactive flow model with proper modeling parameters. A 100 mm diameter propellant driven gas gun was utilized to initiate the explosive charges containing manganin piezoresistive pressure gauge packages embedded in the explosive sample. Experimental data provided new information on the shock velocity--particle velocity relationship for each of the investigated materials in their respective pressure range. The run-distance-to-detonation points on the Pop-plot for these experiments showed agreement with previously published data, and Ignition and Growth modeling calculations resulted in a good fit to the experimental data. Identical ignition and growth reaction rate parameters were used for C-4 and Composition B, and the Composition B model also included a third reaction rate to simulate the completion of reaction by the TNT component. This model can be applied, with a high level of confidence in its predictions, to shock initiation scenarios that have not been or cannot be tested experimentally.

  10. Shock Initiation Experiments with Ignition and Growth Modeling on Low Density HMX

    NASA Astrophysics Data System (ADS)

    Garcia, Frank; Vandersall, Kevin; Tarver, Craig

    2013-06-01

    Shock initiation experiments on low density (1.24 and 1.64 g/cm3) HMX were performed to obtain in-situ pressure gauge data, characterize the run-distance-to-detonation behavior, and provide a basis for Ignition and Growth reactive flow modeling. A 101 mm diameter gas gun was utilized to initiate the explosive charges with manganin piezoresistive pressure gauge packages placed between packed layers (1.24 g/cm3) or sample disks pressed to low density (1.64 g/cm3). The measured shock sensitivity of the 1.24 g/cm3 HMX was similar to that previously measured by Dick and Sheffield et al., and the 1.64 g/cm3 HMX was measured to be much less shock sensitive. Ignition and Growth model parameters were derived that yielded good agreement with the experimental data at both initial densities. This work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.

  11. Examining the effects of microstructure and loading on the shock initiation of HMX with mesoscale simulations

    NASA Astrophysics Data System (ADS)

    Springer, H. Keo; Tarver, Craig; Bastea, Sorin

    2015-06-01

    We perform reactive mesoscale simulations to study shock initiation in HMX over a range of pore morphologies and sizes, porosities, and loading conditions in order to improve our understanding of structure-performance relationships. These relationships are important because they guide the development of advanced macroscale models incorporating hot spot mechanisms and the optimization of novel energetic material microstructures. Mesoscale simulations are performed using the multiphysics hydrocode ALE3D. Spherical, elliptical, polygonal, and crack-like pore geometries 0.1, 1, 10, and 100 microns in size and porosities of 2, 5, 10, and 14% are explored. Loading conditions are realized with shock pressures of 6, 10, 20, 38, and 50 GPa. A Cheetah-based tabular model, including temperature-dependent heat capacity, is used for the unreacted and product equations of state. Also, in-line Cheetah is used to probe chemical species evolution. The influence of microstructure and shock loading on shock-to-detonation-transition run distance, reaction rate, and product gas species evolution is discussed. This work performed under the auspices of the U.S. DOE by LLNL under Contract DE-AC52-07NA27344. This work is funded by the Joint DoD-DOE Munitions Program.

  12. Large-Scale Reactive Atomistic Simulation of Shock-induced Initiation Processes in Energetic Materials

    NASA Astrophysics Data System (ADS)

    Thompson, Aidan

    2013-06-01

    Initiation in energetic materials is fundamentally dependent on the interaction between a host of complex chemical and mechanical processes, occurring on scales ranging from intramolecular vibrations through molecular crystal plasticity up to hydrodynamic phenomena at the mesoscale. A variety of methods (e.g. quantum electronic structure methods (QM), non-reactive classical molecular dynamics (MD), mesoscopic continuum mechanics) exist to study processes occurring on each of these scales in isolation, but cannot describe how these processes interact with each other. In contrast, the ReaxFF reactive force field, implemented in the LAMMPS parallel MD code, allows us to routinely perform multimillion-atom reactive MD simulations of shock-induced initiation in a variety of energetic materials. This is done either by explicitly driving a shock-wave through the structure (NEMD) or by imposing thermodynamic constraints on the collective dynamics of the simulation cell e.g. using the Multiscale Shock Technique (MSST). These MD simulations allow us to directly observe how energy is transferred from the shockwave into other processes, including intramolecular vibrational modes, plastic deformation of the crystal, and hydrodynamic jetting at interfaces. These processes in turn cause thermal excitation of chemical bonds leading to initial chemical reactions, and ultimately to exothermic formation of product species. Results will be presented on the application of this approach to several important energetic materials, including pentaerythritol tetranitrate (PETN) and ammonium nitrate/fuel oil (ANFO). In both cases, we validate the ReaxFF parameterizations against QM and experimental data. For PETN, we observe initiation occurring via different chemical pathways, depending on the shock direction. For PETN containing spherical voids, we observe enhanced sensitivity due to jetting, void collapse, and hotspot formation, with sensitivity increasing with void size. For ANFO, we

  13. Predictability and prediction of Indian summer monsoon by CFSv2: implication of the initial shock effect

    NASA Astrophysics Data System (ADS)

    Shukla, Ravi P.; Huang, Bohua; Marx, L.; Kinter, James L.; Shin, Chul-Su

    2017-03-01

    This study evaluates the seasonal predictability of the Indian summer monsoon (ISM) rainfall using the Climate Forecast System, version 2 (CFSv2), the current operational forecast model for subseasonal-to-seasonal predictions at the National Centers for Environmental Prediction (NCEP). From a 50-year CFSv2 simulation, 21 wet, dry and normal ISM cases are chosen for a set of seasonal "predictions" with initial states in each month from January to May to conduct predictability experiments. For each prediction, a five-member ensemble is generated with perturbed atmospheric initial states and all predictions are integrated to the end of September. Based on the measures of correlation and root mean square error, the prediction skill decreases with lead month, with the initial states with the shortest lead (May initial states) generally showing the highest skill for predicting the summer mean (June to September; JJAS) rainfall, zonal wind at 850 hPa and sea surface temperature over the ISM region in the perfect model scenario. These predictability experiments are used to understand the finding reported by some recent studies that the NCEP CFSv2 seasonal retrospective forecasts generally have higher skill in predicting the ISM rainfall anomalies from February initial states than from May ones. Comparing the May climatologies generated by the February and May initialized CFSv2 retrospective forecasts, it is found that the latter shows larger bias over the Arabian Sea, with stronger monsoon winds, precipitation and surface latent heat flux. Although the atmospheric bias diminishes quickly after May, an accompanying cold bias persists in the Arabian Sea for several months. It is argued that a similar phenomenon does not occur in the predictability experiments in the perfect model scenario, because the initial shock is negligible in these experiments by design. Therefore, it is possible that the stronger model bias and initial shock in the May CFSv2 retrospective forecasts

  14. Investigating short-pulse shock initiation in HMX-based explosives with reactive meso-scale simulations

    NASA Astrophysics Data System (ADS)

    Springer, H. K.; Tarver, C. M.; Reaugh, J. E.; May, C. M.

    2014-05-01

    We performed reactive meso-scale simulations of short-pulse experiments to study the influence of flyer velocity and pore structure on shock initiation of LX-10 (95 wt% HMX, 5 wt% Viton A). Our calculations show that the reaction evolution follows a power-law relationship in time and that the reaction rate increases with increasing porosity, decreasing pore size, and increasing flyer velocity. While heterogeneous shock initiation modes, dependent on hot spot mechanisms, are predicted at lower flyer velocities, mixed heterogeneous-homogeneous shock initiation modes, less dependent on hot spots, are predicted at higher velocities. These studies are important because they enable the development of predictive shock initiation models that incorporate complex microstructure and can be used to optimize performance-safety characteristics of explosives.
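    The power-law fit reported above can be recovered from reaction-evolution data by linear regression in log-log space. The sketch below is illustrative only: the data and exponent are synthetic placeholders, not values from the LX-10 simulations.

```python
import math

def fit_power_law(times, fractions):
    """Least-squares fit of F(t) = A * t**k in log-log space.

    Returns (A, k). All inputs must be positive.
    """
    xs = [math.log(t) for t in times]
    ys = [math.log(f) for f in fractions]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    # Slope of the log-log regression line is the power-law exponent k
    k = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    A = math.exp(my - k * mx)
    return A, k

# Synthetic reaction-evolution data: F = 0.2 * t**1.8 (hypothetical exponent)
ts = [0.1, 0.2, 0.4, 0.8, 1.6]
fs = [0.2 * t ** 1.8 for t in ts]
A, k = fit_power_law(ts, fs)
```

    On noise-free synthetic data the fit recovers the generating exponent exactly; with simulation output one would inspect residuals before trusting the power-law form.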

  15. Computational Study of 3-D Hot-Spot Initiation in Shocked Insensitive High-Explosive

    NASA Astrophysics Data System (ADS)

    Najjar, F. M.; Howard, W. M.; Fried, L. E.

    2011-06-01

    High explosive shock sensitivity is controlled by a combination of mechanical response, thermal properties, and chemical properties. An understanding of the interplay of these physical phenomena in realistic condensed energetic materials is currently lacking. A multiscale computational framework is developed to investigate hot spot (void) ignition in a single crystal of an insensitive HE, TATB. Atomistic MD simulations are performed to identify the key chemical reactions, and these reaction rates are used in 3-D multiphysics simulations. The multiphysics code, ALE3D, is linked to the chemistry software, Cheetah, and a three-way coupled approach is pursued including hydrodynamic, thermal and chemical analyses. A single spherical air bubble is embedded in the insensitive HE and its collapse due to shock initiation is evolved numerically in time, while the ignition processes due to chemical reactions are studied. Our current predictions showcase several interesting features of hot spot dynamics, including the formation of a ``secondary'' jet. Results obtained with hydro-thermo-chemical processes leading to ignition growth will be discussed for various pore sizes and different shock pressures. LLNL-ABS-471438. This work performed under the auspices of the U.S. Department of Energy by LLNL under Contract DE-AC52-07NA27344.

  16. Caspase-8 inhibition represses initial human monocyte activation in septic shock model

    PubMed Central

    Oliva-Martin, Maria Jose; Sanchez-Abarca, Luis Ignacio; Rodhe, Johanna; Carrillo-Jimenez, Alejandro; Vlachos, Pinelopi; Herrera, Antonio Jose; Garcia-Quintanilla, Albert; Caballero-Velazquez, Teresa; Perez-Simon, Jose Antonio; Joseph, Bertrand; Venero, Jose Luis

    2016-01-01

    In septic patients, the onset of septic shock occurs due to the over-activation of monocytes. We tested the therapeutic potential of directly targeting innate immune cell activation to limit the cytokine storm and downstream phases. We initially investigated whether caspase-8 could be an appropriate target, given that it has recently been shown to be involved in microglial activation. We found that LPS caused a mild increase in caspase-8 activity and that the caspase-8 inhibitor IETD-fmk partially decreased monocyte activation. Furthermore, caspase-8 inhibition induced necroptotic cell death of activated monocytes. Despite inducing necroptosis, caspase-8 inhibition reduced LPS-induced expression and release of IL-1β and IL-10. Thus, blocking monocyte activation has positive effects on both the pro- and anti-inflammatory phases of septic shock. We also found that in primary mouse monocytes, caspase-8 inhibition did not reduce LPS-induced activation or induce necroptosis. On the other hand, broad caspase inhibitors, which have already been shown to improve survival in mouse models of sepsis, achieved both. Thus, given that monocyte activation can be regulated in humans via the inhibition of a single caspase, we propose that the therapeutic use of caspase-8 inhibitors could represent a more selective alternative that blocks both phases of septic shock at the source. PMID:27250033

  17. Manganin Gauge and Reactive Flow Modeling Study of the Shock Initiation of PBX 9501

    NASA Astrophysics Data System (ADS)

    Tarver, C. M.; Forbes, J. W.; Garcia, F.; Urtiew, P. A.

    2002-07-01

    A series of 101-mm diameter gas gun experiments was fired using manganin pressure gauges embedded in the HMX-based explosive PBX 9501 at initial temperatures of 20°C and 50°C. Flyer plate impact velocities were chosen to produce impact pressure levels in PBX 9501 at which the growth of explosive reaction preceding detonation was measured on most of the gauges and detonation pressure profiles were recorded on some of the gauges placed deepest into the explosive targets. All measured pressure histories for initial temperatures of 25°C and 50°C were essentially identical. Measured run distances to detonation at three input shock pressures agreed with previous results. An existing Ignition and Growth reactive flow computer model for shock initiation and detonation of PBX 9501, which was developed based on LANL embedded particle velocity gauge data, was tested on these pressure gauge results. The agreement was excellent, indicating that the embedded pressure and particle velocity gauge techniques yielded consistent results.

  18. Grain-Scale Simulations of Hot-Spot Initiation for Shocked TATB

    SciTech Connect

    Najjar, F; Howard, W; Fried, L

    2009-07-31

    High-explosive (HE) material consists of large-sized grains with micron-sized embedded impurities and pores. Under various mechanical/thermal insults, these pores collapse, generating high-temperature regions leading to ignition. A computational study has been performed to investigate the mechanisms of pore collapse and hot spot initiation in TATB crystals, employing the thermo-hydrodynamic arbitrary Lagrangian-Eulerian code ALE3D. This initial study includes non-reactive dynamics to isolate the thermal and hydrodynamical effects. Two-dimensional high-resolution meso-scale simulations have been undertaken. We study an axisymmetric configuration for pore radii ranging from 0.5 to 2 µm, with initial shock pressures in the range from 3 to 11 GPa. A Mie-Grüneisen Equation of State (EOS) model is used for TATB, and includes a constant yield strength and shear modulus, while the air in the pore invokes a Livermore Equation of State (LEOS) model. The parameter space is systematically studied by considering various shock strengths, pore diameters and material properties. We find that thermal diffusion from the collapsed pores has an important effect in generating high-temperature hot spots in the TATB.

  19. Phenylephrine versus norepinephrine for initial hemodynamic support of patients with septic shock: a randomized, controlled trial

    PubMed Central

    Morelli, Andrea; Ertmer, Christian; Rehberg, Sebastian; Lange, Matthias; Orecchioni, Alessandra; Laderchi, Amalia; Bachetoni, Alessandra; D'Alessandro, Mariadomenica; Van Aken, Hugo; Pietropaoli, Paolo; Westphal, Martin

    2008-01-01

    Introduction Previous findings suggest that a delayed administration of phenylephrine replacing norepinephrine in septic shock patients causes a more pronounced hepatosplanchnic vasoconstriction as compared with norepinephrine. Nevertheless, a direct comparison between the two study drugs has not yet been performed. The aim of the present study was, therefore, to investigate the effects of a first-line therapy with either phenylephrine or norepinephrine on systemic and regional hemodynamics in patients with septic shock. Methods We performed a prospective, randomized, controlled trial in a multidisciplinary intensive care unit in a university hospital. We enrolled septic shock patients (n = 32) with a mean arterial pressure below 65 mmHg despite adequate volume resuscitation. Patients were randomly allocated to treatment with either norepinephrine or phenylephrine infusion (n = 16 each) titrated to achieve a mean arterial pressure between 65 and 75 mmHg. Data from right heart catheterization, a thermodye dilution catheter, gastric tonometry, acid-base homeostasis, as well as creatinine clearance and cardiac troponin were obtained at baseline and after 12 hours. Differences within and between groups were analyzed using a two-way analysis of variance for repeated measurements with group and time as factors. Time-independent variables were compared with one-way analysis of variance. Results No differences were found in any of the investigated parameters. Conclusions The present study suggests there are no differences in terms of cardiopulmonary performance, global oxygen transport, and regional hemodynamics when phenylephrine was administered instead of norepinephrine in the initial hemodynamic support of septic shock. Trial registration ClinicalTrial.gov NCT00639015 PMID:19017409

  20. Modeling Three-Dimensional Shock Initiation of PBX 9501 in ALE3D

    SciTech Connect

    Leininger, L; Springer, H K; Mace, J; Mas, E

    2008-07-08

    A recent SMIS (Specific Munitions Impact Scenario) experimental series performed at Los Alamos National Laboratory has provided 3-dimensional shock initiation behavior of the HMX-based heterogeneous high explosive, PBX 9501. A series of finite element impact calculations have been performed in the ALE3D [1] hydrodynamic code and compared to the SMIS results to validate and study code predictions. These SMIS tests used a powder gun to shoot scaled NATO standard fragments into a cylinder of PBX 9501, which has a PMMA case and a steel impact cover. This SMIS real-world shot scenario creates a unique test-bed because (1) SMIS tests facilitate the investigation of 3D Shock to Detonation Transition (SDT) within the context of a considerable suite of diagnostics, and (2) many of the fragments arrive at the impact plate off-center and at an angle of impact. A particular goal of these model validation experiments is to demonstrate the predictive capability of the ALE3D implementation of the Tarver-Lee Ignition and Growth reactive flow model [2] within a fully 3-dimensional regime of SDT. The 3-dimensional Arbitrary Lagrange Eulerian (ALE) hydrodynamic model in ALE3D applies the Ignition and Growth (I&G) reactive flow model with PBX 9501 parameters derived from historical 1-dimensional experimental data. The model includes the off-center and angle of impact variations seen in the experiments. Qualitatively, the ALE3D I&G calculations reproduce observed 'Go/No-Go' 3D Shock to Detonation Transition (SDT) reaction in the explosive, as well as the case expansion recorded by a high-speed optical camera. Quantitatively, the calculations show good agreement with the shock time of arrival at internal and external diagnostic pins. This exercise demonstrates the utility of the Ignition and Growth model applied for the response of heterogeneous high explosives in the SDT regime.
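    The Tarver-Lee Ignition and Growth model referenced above expresses the local reaction rate as a sum of ignition, growth, and completion terms gated by the reacted fraction. A minimal sketch of that three-term form follows; the coefficients are round hypothetical numbers for illustration, not the calibrated PBX 9501 parameter set.

```python
def ig_rate(F, P, mu,
            I=100.0, b=0.667, a=0.0, x=4.0,      # ignition term
            G1=10.0, c=0.667, d=0.333, y=2.0,    # growth term
            G2=40.0, e=0.333, g=1.0, z=3.0,      # completion term
            F_ig_max=0.3, F_G1_max=0.5, F_G2_min=0.5):
    """Three-term Ignition & Growth reaction rate dF/dt.

    F: reacted mass fraction, P: pressure, mu: compression (rho/rho0 - 1).
    All coefficients here are hypothetical placeholders, NOT the fitted
    PBX 9501 parameters used in the ALE3D calculations.
    """
    rate = 0.0
    if F < F_ig_max and mu > a:          # hot-spot ignition, early times
        rate += I * (1.0 - F) ** b * (mu - a) ** x
    if F < F_G1_max:                     # growth of reaction from hot spots
        rate += G1 * (1.0 - F) ** c * F ** d * P ** y
    if F > F_G2_min:                     # fast completion at high pressure
        rate += G2 * (1.0 - F) ** e * F ** g * P ** z
    return rate

r = ig_rate(F=0.1, P=2.0, mu=0.1)
```

    In a hydrocode this rate is integrated alongside the unreacted and product equations of state; the gating fractions control the handoff between the three terms.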

  1. Modeling The Shock Initiation of PBX-9501 in ALE3D

    SciTech Connect

    Leininger, L; Springer, H K; Mace, J; Mas, E

    2008-07-01

    The SMIS (Specific Munitions Impact Scenario) experimental series performed at Los Alamos National Laboratory has determined the 3-dimensional shock initiation behavior of the HMX-based heterogeneous high explosive, PBX 9501. A series of finite element impact calculations have been performed in the ALE3D [1] hydrodynamic code and compared to the SMIS results to validate the code predictions. The SMIS tests use a powder gun to shoot scaled NATO standard fragments at a cylinder of PBX 9501, which has a PMMA case and a steel impact cover. The SMIS real-world shot scenario creates a unique test-bed because many of the fragments arrive at the impact plate off-center and at an angle of impact. The goal of these model validation experiments is to demonstrate the predictive capability of the Tarver-Lee Ignition and Growth (I&G) reactive flow model [2] in this fully 3-dimensional regime of Shock to Detonation Transition (SDT). The 3-dimensional Arbitrary Lagrange Eulerian hydrodynamic model in ALE3D applies the I&G reactive flow model with PBX 9501 parameters derived from historical 1-dimensional experimental data. The model includes the off-center and angle of impact variations seen in the experiments. Qualitatively, the ALE3D I&G calculations accurately reproduce the 'Go/No-Go' threshold of the SDT reaction in the explosive, as well as the case expansion recorded by a high-speed optical camera. Quantitatively, the calculations show good agreement with the shock time of arrival at internal and external diagnostic pins. This exercise demonstrates the utility of the Ignition and Growth model applied in a predictive fashion for the response of heterogeneous high explosives in the SDT regime.

  2. Shock initiation of the TATB-based explosive PBX-9502 heated to ˜ 76°C

    NASA Astrophysics Data System (ADS)

    Gustavsen, R. L.; Gehr, R. J.; Bucholtz, S. M.; Pacheco, A. H.; Bartram, B. D.

    2017-01-01

    We present gas-gun driven plate impact shock initiation experiments on the explosive PBX 9502 (95 weight percent triaminotrinitrobenzene, 5 weight percent Kel-F 800 binder) heated to ˜ 76°C. PBX 9502 samples were heated by flowing hot air through a sample mounting plate and surrounding coil. Temperatures were monitored using embedded and surface mounted type-E thermocouples. The shock to detonation transition was recorded using embedded electromagnetic particle velocity gauges. Results show increased shock sensitivity; time and distance to detonation onset vs. initial shock pressure are shorter than when the sample is initially at ambient temperature. Our results are consistent with those reported by Dallman and Wackerle: the "Pop-plot," or distance to detonation, xD, vs. impact pressure, P, is log10(xD) = 3.41 - 2.47 log10(P).
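    The quoted Pop-plot fit can be evaluated directly. In the sketch below the units are assumed to be GPa for the impact pressure and mm for the run distance, consistent with common Pop-plot conventions; confirm the units against the original paper before use.

```python
import math

def run_distance_to_detonation(P_gpa):
    """Distance to detonation xD from the quoted Pop-plot fit
    log10(xD) = 3.41 - 2.47 * log10(P).

    Units assumed (not stated in the abstract): P in GPa, xD in mm.
    """
    return 10.0 ** (3.41 - 2.47 * math.log10(P_gpa))

# Higher input pressure gives a shorter run to detonation
xd_10 = run_distance_to_detonation(10.0)
xd_5 = run_distance_to_detonation(5.0)
```

    The negative slope (-2.47) encodes the usual Pop-plot behavior: doubling the input pressure shortens the run distance by roughly a factor of 2**2.47.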

  3. A mesoscopic reaction rate model for shock initiation of multi-component PBX explosives.

    PubMed

    Liu, Y R; Duan, Z P; Zhang, Z Y; Ou, Z C; Huang, F L

    2016-11-05

    The primary goal of this research is to develop a three-term mesoscopic reaction rate model that consists of hot-spot ignition, low-pressure slow-burning, and high-pressure fast-reaction terms for shock initiation of multi-component Plastic Bonded Explosives (PBX). In particular, based on the DZK hot-spot model for a single-component PBX explosive, the hot-spot ignition term and its reaction rate are obtained through a "mixing rule" over the explosive components; new expressions for both the low-pressure slow-burning term and the high-pressure fast-reaction term are also obtained by establishing relationships between the reaction rate of the multi-component PBX explosive and those of its explosive components, based on the corresponding terms of a mesoscopic reaction rate model. Furthermore, for verification, the new reaction rate model is incorporated into the DYNA2D code to simulate numerically the shock initiation process of the PBXC03 and PBXC10 multi-component PBX explosives, and the numerical results for the pressure histories at different Lagrange locations in the explosive are found to be in good agreement with previous experimental data.

  4. SHOCK INITIATION EXPERIMENTS ON PBX9501 EXPLOSIVE AT 150°C FOR IGNITION AND GROWTH MODELING

    SciTech Connect

    Vandersall, K S; Tarver, C M; Garcia, F; Urtiew, P A

    2005-07-19

    Shock initiation experiments on the explosive PBX9501 (95% HMX, 2.5% Estane, and 2.5% nitroplasticizer by weight) were performed at 150°C to obtain in-situ pressure gauge data and Ignition and Growth modeling parameters. A 101 mm diameter propellant driven gas gun was utilized to initiate the PBX9501 explosive with manganin piezoresistive pressure gauge packages placed between sample slices. The run-distance-to-detonation points on the Pop-plot for these experiments showed agreement with previously published data, and Ignition and Growth modeling parameters were obtained with a good fit to the experimental data. This parameter set will allow accurate code predictions to be calculated for safety scenarios involving PBX9501 explosives at temperatures close to 150°C.

  5. Study of factors which influence the shock-initiation sensitivity of hexanitrostilbene (HNS)

    SciTech Connect

    Schwarz, A. C.

    1981-03-01

    An experimental program was conducted to study factors which influence the shock initiation sensitivity of hexanitrostilbene (HNS). The six factors evaluated were: (1) powder morphology, (2) sample density, (3) test temperature, (4) sample length, (5) diameter of the impacting flyer, and (6) duration of the input stimulus. In addition, the effect of pressure duration, tau, was assessed on the initiation sensitivity of an extrudable explosive (LX-13) and of hexanitroazobenzene (HNAB) for comparison with that of superfine hexanitrostilbene (HNS-SF). The impact stimulus was provided by a polyimide flyer 1.57 mm in diameter propelled by an electrically excited bursting foil. Flyer velocity determined impact pressure, P (3 to 20 GPa), and flyer thickness the shock duration, tau (0.010 to 0.150 µs), the pulse shape being rectangular. Powder morphology was the most significant factor to influence the initiation sensitivity of HNS; with 0.035-µs pulses the smallest particle-sized HNS had a threshold pressure for initiation which was 50% of that required for the coarser HNS-II. Other factors which lowered the threshold pressure were: lower sample density, elevated test temperature, and larger diameter flyers. HNS-SF showed a shorter growth-to-detonation distance (GTDD) than HNS-I; the GTDD was 0.56 mm at an impact pressure of 7.3 GPa. Pulse duration affected the threshold pressure with each explosive behaving in its own characteristic manner; a P-tau characterization is essential, therefore, for all explosives of interest and should include values of tau which are equivalent to pulse durations expected in service.

  6. Shock.

    PubMed

    Wacker, David A; Winters, Michael E

    2014-11-01

    Critically ill patients with undifferentiated shock are complex and challenging cases in the ED. A systematic approach to assessment and management is essential to prevent unnecessary morbidity and mortality. The simplified, systematic approach described in this article focuses on determining the presence of problems with cardiac function (the pump), intravascular volume (the tank), or systemic vascular resistance (the pipes). With this approach, the emergency physician can detect life-threatening conditions and implement time-sensitive therapy.

  7. Study of void sizes and loading configurations effects on shock initiation due to void collapse in heterogeneous energetic materials

    NASA Astrophysics Data System (ADS)

    Roy, Sidhartha; Rai, Nirmal; Udaykumar, H. S.

    2015-06-01

    In heterogeneous energetic materials, the presence of porosity has been seen to increase sensitivity to shock initiation and ignition. Under shock loading, viscoplastic deformation and collapse of voids lead to the formation of localized high-temperature regions known as hot spots. Chemical reactions are triggered at a hot spot, depending on the local temperature, and eventually grow, leading to ignition and the formation of detonation waves in the material. The hot-spot temperature depends on various factors such as shock strength, void size, void arrangement, and loading configuration. Hence, to gain a deeper understanding of shock initiation and ignition due to void collapse, a parametric study of the factors that affect hot-spot temperature is desired. In the current work, the effects of void size, shock strength, and loading configuration on shock initiation in HMX have been studied using the massively parallel Eulerian code SCIMITAR3D. Chemical reaction and decomposition of HMX are modeled using the Henson-Smilowitz multi-step mechanism. The effect of heat conduction is also taken into consideration. Ignition threshold criteria have been established for the factors mentioned above, and the critical hot-spot temperature and size that lead to ignition have been obtained from numerical experiments.

  8. Shock initiation studies of low density HMX using electromagnetic particle velocity and PVDF stress gauges

    SciTech Connect

    Sheffield, S.A.; Gustavsen, R.L.; Alcon, R.R.; Graham, R.A.; Anderson, M.U.

    1993-09-01

    Magnetic particle velocity and PVDF stress rate gauges have been used to measure the shock response of low-density cyclotetramethylene tetranitramine (HMX) (1.24 g/cm³). In experiments done at LANL, magnetic particle velocity gauges were located on both sides of the explosive. In nearly identical experiments done at SNL, PVDF stress rate gauges were located at the same positions, so both particle velocity and stress histories were obtained for a particular experimental condition. Unreacted Hugoniot data were obtained and an EOS was developed by combining methods used by Hayes, Sheffield and Mitchell (for describing the Hugoniot of HNS at various densities) with Herrmann's P-α model. Using this technique, it is only necessary to know some thermodynamic constants or the Hugoniot of the initially solid material and the porous material sound speed to obtain accurate unreacted Hugoniots for the porous explosive. Loading and reaction paths were established in the stress-particle velocity plane for some experimental conditions. This information was used to determine a global reaction rate of ≈0.13 µs⁻¹ for porous HMX shocked to 0.8 GPa. At low input stresses the transmitted wave profiles had long rise times (up to 1 µs) due to the compaction processes.
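    The unreacted-Hugoniot construction above ultimately yields shock states through the Rankine-Hugoniot jump conditions. A minimal sketch for a linear Us-up Hugoniot follows; the constants c0 and s are hypothetical placeholders, not the fitted porous-HMX values.

```python
def hugoniot_pressure(rho0, c0, s, up):
    """Shock pressure from the momentum jump condition P = rho0 * Us * up,
    with a linear shock-velocity/particle-velocity Hugoniot Us = c0 + s*up.

    SI units throughout: rho0 in kg/m^3, velocities in m/s, P in Pa.
    """
    Us = c0 + s * up          # shock velocity
    return rho0 * Us * up     # momentum jump across the shock front

# Hypothetical constants for a 1240 kg/m^3 (1.24 g/cm^3) porous bed;
# these are NOT the values fitted in the study.
P = hugoniot_pressure(rho0=1240.0, c0=2000.0, s=2.0, up=500.0)
```

    For a porous material the effective c0 and s differ strongly from the solid-density values, which is exactly what the Hayes/Herrmann P-α combination in the abstract is designed to capture.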

  9. Shock initiation of nano-Al/Teflon: High dynamic range pyrometry measurements

    NASA Astrophysics Data System (ADS)

    Wang, Jue; Bassett, Will P.; Dlott, Dana D.

    2017-02-01

    Laser-launched flyer plates (25 μm thick Cu) were used to impact-initiate reactive materials consisting of 40 nm Al particles embedded in TeflonAF polymer (Al/Teflon) on sapphire substrates at a stoichiometric concentration (2.3:1 Teflon:Al), as well as one-half and one-fourth that concentration. A high dynamic range emission spectrometer was used to time and spectrally resolve the emitted light and to determine graybody temperature histories with nanosecond time resolution. At 0.5 km s⁻¹, first light emission was observed from Teflon, but at 0.6 km s⁻¹, the emission from Al/Teflon became much more intense, so we assigned the impact threshold for Al/Teflon reactions to be 0.6 (±0.1) km s⁻¹. The flyer plates produced a 7 ns duration steady shock drive. Emission from shocked Al/Teflon above threshold consisted of two bursts. At the higher impact velocities, the first burst started 15 ns after impact, peaked at 25 ns, and persisted for 75 ns. The second burst started at a few hundred nanoseconds and lasted until 2 μs. The 15 ns start time was exactly the time the flyer plate velocity dropped to zero after impact with sapphire. The first burst was associated with shock-triggered reactions and the second, occurring at ambient pressure, was associated with combustion of leftover material that did not react during shock. The emission spectrum was found to be a good fit to a graybody at all times, allowing temperature histories to be extracted. At 25 ns, the temperature at 0.7 km s⁻¹ and the one-fourth Al load was 3800 K. Those temperatures increased significantly with impact velocity, up to 4600 K, but did not increase as much with Al load. A steady combustion process at 2800 (±100) K was observed in the microsecond range. The minimal dependence on Al loading indicates that these peak temperatures arise primarily from Al nanoparticles reacting almost independently, since the presence of nearby heat sources had little influence on the peak temperatures.
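    Graybody temperature extraction of the kind described above can be illustrated with a two-color pyrometry estimate in the Wien limit (a deliberate simplification: the experiment fits a graybody to the full emission spectrum). The wavelengths and the 4000 K test temperature below are arbitrary round numbers, not values from the paper.

```python
import math

C2 = 1.4388e-2  # second radiation constant h*c/k_B, in m*K

def wien_intensity(lam, T):
    """Graybody spectral intensity in the Wien limit (arbitrary units);
    the constant emissivity cancels in a two-color ratio."""
    return lam ** -5 * math.exp(-C2 / (lam * T))

def two_color_temperature(I1, I2, lam1, lam2):
    """Recover T from the intensity ratio at two wavelengths (Wien limit)."""
    return -C2 * (1.0 / lam1 - 1.0 / lam2) / \
        math.log((I1 / I2) * (lam1 / lam2) ** 5)

# Round-trip check at a hypothetical 4000 K temperature
I1 = wien_intensity(500e-9, 4000.0)
I2 = wien_intensity(700e-9, 4000.0)
T = two_color_temperature(I1, I2, 500e-9, 700e-9)
```

    A full-spectrum graybody fit, as used in the experiment, is more robust than a two-wavelength ratio because it averages over many channels and checks that the spectrum is actually graybody-shaped.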

  10. Computational study of 3-D hot-spot initiation in shocked insensitive high-explosive

    NASA Astrophysics Data System (ADS)

    Najjar, F. M.; Howard, W. M.; Fried, L. E.; Manaa, M. R.; Nichols, A., III; Levesque, G.

    2012-03-01

    High-explosive (HE) material consists of large-sized grains with micron-sized embedded impurities and pores. Under various mechanical/thermal insults, these pores collapse, generating high-temperature regions leading to ignition. A hydrodynamic study has been performed to investigate the mechanisms of pore collapse and hot spot initiation in TATB crystals, employing a multiphysics code, ALE3D, coupled to the chemistry module, Cheetah. This computational study includes reactive dynamics. Two-dimensional high-resolution meso-scale simulations have been performed. The parameter space is systematically studied by considering various shock strengths, pore diameters and multiple pore configurations. Preliminary 3-D simulations are undertaken to quantify the 3-D dynamics.

  11. Laser-driven miniature flyer plates for shock initiation of secondary explosives

    SciTech Connect

    Paisley, D.L.

    1989-01-01

    Miniature flyer plates (<1-mm diameter × <5-µm thick) of aluminum and other materials are accelerated by a 10-ns pulsed Nd:YAG laser to velocities >5 km/s. Velocity profiles are recorded by velocity interferometry (VISAR) techniques and impact planarity by electronic streak photography. Techniques for improving energy coupling from laser to flyer plate will be discussed. Flyer plate performance parameters will be compared with material properties. The Pⁿt criterion for shock initiation of explosives will be compared for various flyer materials, pressures, and pulse durations. Performance of secondary explosives (PETN, HNS, HMX, various PBX, others) will be reported. These data will detail the experimental effect of t (in Pⁿt) approaching values of a few nanoseconds. 9 refs., 5 figs.
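    The short-pulse initiation criterion referenced above compares the product of impact pressure raised to a material-dependent power n and pulse duration t against a critical constant. A minimal sketch, with hypothetical n and threshold values:

```python
def pnt_initiates(P, t, n=2.0, k_crit=1.0):
    """Short-pulse initiation test: detonation is expected when
    P**n * t >= k_crit.

    n and k_crit are material-dependent constants; the defaults here are
    hypothetical (n = 2 recovers the classical P^2*t criterion).
    Units: P in GPa, t in microseconds, k_crit in matching units.
    """
    return P ** n * t >= k_crit

go = pnt_initiates(P=5.0, t=0.05)     # 25 * 0.05 = 1.25 >= 1.0
no_go = pnt_initiates(P=3.0, t=0.05)  # 9 * 0.05 = 0.45 < 1.0
```

    The experiments described here probe exactly the regime where t shrinks to a few nanoseconds and such a single-constant criterion begins to break down.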

  13. Computational prediction of probabilistic ignition threshold of pressed granular Octahydro-1,3,5,7-tetranitro-1,3,5,7-tetrazocine (HMX) under shock loading

    NASA Astrophysics Data System (ADS)

    Kim, Seokpum; Miller, Christopher; Horie, Yasuyuki; Molek, Christopher; Welle, Eric; Zhou, Min

    2016-09-01

    The probabilistic ignition thresholds of pressed granular Octahydro-1,3,5,7-tetranitro-1,3,5,7-tetrazocine explosives with average grain sizes between 70 μm and 220 μm are computationally predicted. The prediction uses material microstructure and basic constituent properties and does not involve curve fitting with respect to or prior knowledge of the attributes being predicted. The specific thresholds predicted are James-type relations between the energy flux and energy fluence for given probabilities of ignition. Statistically similar microstructure sample sets are computationally generated and used based on the features of micrographs of materials used in actual experiments. The predicted thresholds are in general agreement with measurements from shock experiments in terms of trends. In particular, it is found that grain size significantly affects the ignition sensitivity of the materials, with smaller sizes leading to lower energy thresholds required for ignition. For example, 50% ignition threshold of the material with an average grain size of 220 μm is approximately 1.4-1.6 times that of the material with an average grain size of 70 μm in terms of energy fluence. The simulations account for the controlled loading of thin-flyer shock experiments with flyer velocities between 1.5 and 4.0 km/s, constituent elasto-viscoplasticity, fracture, post-fracture contact and friction along interfaces, bulk inelastic heating, interfacial frictional heating, and heat conduction. The constitutive behavior of the materials is described using a finite deformation elasto-viscoplastic formulation and the Birch-Murnaghan equation of state. The ignition thresholds are determined via an explicit analysis of the size and temperature states of hotspots in the materials and a hotspot-based ignition criterion. The overall ignition threshold analysis and the microstructure-level hotspot analysis also lead to the definition of a macroscopic ignition parameter (J) and a microscopic
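    The James-type relation between energy flux and energy fluence mentioned above is often written as 1/J = Ec/E + Σc/Σ, with ignition expected when J ≥ 1. A minimal sketch under that assumed form, with illustrative constants that are not fitted to the HMX data in the study:

```python
def james_parameter(E, Sigma, Ec, Sigmac):
    """James ignition parameter from 1/J = Ec/E + Sigmac/Sigma.

    E: energy fluence, Sigma: energy flux delivered by the shock;
    Ec, Sigmac: critical material constants (values supplied by the
    caller here are illustrative only). Ignition is expected when J >= 1.
    """
    return 1.0 / (Ec / E + Sigmac / Sigma)

# Exactly at threshold: twice-critical fluence and twice-critical flux
J = james_parameter(E=2.0, Sigma=2.0, Ec=1.0, Sigmac=1.0)
```

    In the probabilistic setting of the abstract, one threshold curve of this form is associated with each ignition probability (e.g., the 50% threshold), so grain-size effects appear as shifts in the fitted Ec and Σc.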

  14. Influence of a CME’s Initial Parameters on the Arrival of the Associated Interplanetary Shock at Earth and the Shock Propagational Model Version 3

    NASA Astrophysics Data System (ADS)

    Zhao, X. H.; Feng, X. S.

    2015-08-01

    Predicting the arrival times of coronal mass ejections (CMEs) and their related waves at Earth is an important aspect of space weather forecasting. The Shock Propagation Model (SPM) and its updated version (SPM2), which use the initial parameters of solar flare-Type II burst events as input, have been developed to predict the shock arrival time. This paper continues to investigate the influence of solar disturbances and their associated CMEs on the corresponding interplanetary (IP) shock's arrival at Earth. It has been found that IP shocks associated with wider CMEs have a greater probability of reaching the Earth, and that the CME speed obtained from coronagraph observations can supplement the initial shock speed computed from Type II radio bursts when predicting the shock's arrival time. Therefore, the third version of the model, SPM3, has been developed based on these findings. The new version combines the characteristics of solar flare-Type II events with the initial parameters of the accompanying CMEs to predict the associated IP shock's arrival at Earth. The prediction test for 498 events of Solar Cycle 23 reveals that the prediction success rate of SPM3 is 70%-71%, which is appreciably higher than that of the previous SPM2 model (61%-63%). The transit time prediction error of SPM3 for the Earth-encountered shocks is within 9 hr (mean absolute error). Comparisons between SPM3 and other similar models also demonstrate that SPM3 has the highest success rate and best prediction performance.

  15. Probabilistic risk analysis and fault trees: Initial discussion of application to identification of risk at a wellhead

    NASA Astrophysics Data System (ADS)

    Rodak, C.; Silliman, S.

    2012-02-01

    Wellhead protection is of critical importance for managing groundwater resources. While a number of previous authors have addressed questions related to uncertainties in advective capture zones, methods for addressing wellhead protection in the presence of uncertainty in the chemistry of groundwater contaminants, the relationship between land-use and contaminant sources, and the impact on health of the receiving population are limited. It is herein suggested that probabilistic risk analysis (PRA) combined with fault trees (FT) provides a structure whereby chemical transport can be combined with uncertainties in source, chemistry, and health impact to assess the probability of negative health outcomes in the population. As such, PRA-FT provides a new strategy for the identification of areas of probabilistically high human health risk. Application of this approach is demonstrated through a simplified case study involving flow to a well in an unconfined aquifer with heterogeneity in aquifer properties and contaminant sources.
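    For independent basic events, the fault-tree gate probabilities underlying a PRA-FT analysis follow directly from the algebra of independent events: AND gates multiply probabilities, OR gates combine complements. A minimal sketch, with a hypothetical wellhead scenario and made-up probabilities:

```python
from functools import reduce

def and_gate(probs):
    """Probability that ALL independent basic events occur."""
    return reduce(lambda acc, p: acc * p, probs, 1.0)

def or_gate(probs):
    """Probability that AT LEAST ONE independent basic event occurs."""
    return 1.0 - reduce(lambda acc, p: acc * (1.0 - p), probs, 1.0)

# Hypothetical top event: contamination reaches the well if a source
# release occurs AND (the liner fails OR monitoring misses the plume).
# All probabilities below are invented for illustration.
p_top = and_gate([0.05, or_gate([0.1, 0.2])])
```

    Real PRA-FT applications relax the independence assumption (common-cause failures) and propagate uncertainty distributions rather than point probabilities, but the gate algebra is the same building block.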

  16. Duration of hemodynamic effects of crystalloids in patients with circulatory shock after initial resuscitation

    PubMed Central

    2014-01-01

    Background In the later stages of circulatory shock, monitoring should help to avoid fluid overload. In this setting, volume expansion is ideally indicated only for patients in whom the cardiac index (CI) is expected to increase. Crystalloids are usually the choice for fluid replacement. As previous studies evaluating the hemodynamic effect of crystalloids have not distinguished responders from non-responders, the present study was designed to evaluate the duration of the hemodynamic effects of crystalloids according to fluid responsiveness status. Methods This is a prospective observational study conducted after the initial resuscitation phase of circulatory shock (>6 h of vasopressor use). Critically ill, sedated adult patients monitored with a pulmonary artery catheter who received a fluid challenge with crystalloids (500 mL infused over 30 min) were included. Hemodynamic variables were measured at baseline (T0) and at 30 min (T1), 60 min (T2), and 90 min (T3) after the fluid bolus, totaling 90 min of observation. The patients were analyzed according to their fluid responsiveness status (responders with a CI increase >15% at T1, non-responders with ≤15%). The data were analyzed by repeated-measures analysis of variance. Results Twenty patients were included, 14 of whom had septic shock. Overall, volume expansion significantly increased the CI: 3.03 ± 0.64 L/min/m2 to 3.58 ± 0.66 L/min/m2 (p < 0.05). From this point, there was a progressive decrease: 3.23 ± 0.65 L/min/m2 (p < 0.05, T2 versus T1) and 3.12 ± 0.64 L/min/m2 (p < 0.05, T3 versus T1). Similar behavior was observed in responders (13 patients): 2.84 ± 0.61 L/min/m2 to 3.57 ± 0.65 L/min/m2 (p < 0.05) with volume expansion, followed by a decrease: 3.19 ± 0.69 L/min/m2 (p < 0.05, T2 versus T1) and 3.06 ± 0.70 L/min/m2 (p < 0.05, T3 versus T1). Blood pressure and cardiac filling pressures also decreased significantly after

  17. A simple probabilistic model of initiation of motion of poorly-sorted granular mixtures subjected to a turbulent flow

    NASA Astrophysics Data System (ADS)

    Ferreira, Rui M. L.; Ferrer-Boix, Carles; Hassan, Marwan

    2015-04-01

    Initiation of sediment motion is a classic problem of sediment and fluid mechanics that has been studied at a wide range of scales. Analysis at channel scale means investigating a reach of a stream large enough to encompass a large number of sediment grains but small enough not to experience important variations in key hydrodynamic variables. At this scale, and for poorly-sorted hydraulically rough granular beds, existing studies show a wide variation in the value of the critical Shields parameter. Such uncertainty constitutes a problem for engineering studies. To go beyond the Shields paradigm for the study of incipient motion at channel scale, the problem can be cast in probabilistic terms. An empirical probability of entrainment, which naturally accounts for size-selective transport, can be calculated at the scale of the bed reach, using (a) the probability density functions (PDFs) of the flow velocities f_u(u|x_n) over the bed reach, where u is the flow velocity and x_n is the location, (b) the PDF of the variability of competent velocities for the entrainment of individual particles, f_{u_p}(u_p), where u_p is the competent velocity, and (c) the concept of joint probability of entrainment and grain size. One must first divide the mixture into several classes M and assign a corresponding frequency p_M. For each class, a conditional PDF of the competent velocity f_{u_p}(u_p|M) is obtained from the PDFs of the parameters that intervene in the model for the entrainment of a single particle: u_p / sqrt(g(s-1)d_i) = Φ_u({C_k}, {φ_k}, ψ, u_p d_i / ν^(w)), where {C_k} is a set of shape parameters that characterize the non-sphericity of the grain, {φ_k} is a set of angles that describe the orientation of the particle axes and its positioning relative to its neighbours, ψ is the skin friction angle of the particles, u_p d_i / ν^(w) is a particle Reynolds number, and d_i is the sieving
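    For a single grain-size class, the joint-probability construction above reduces to P(entrainment) = P(u > u_p) = ∫ f_{u_p}(v) P(u > v) dv. A minimal numerical sketch, assuming (purely for illustration) that both the flow velocity and the competent velocity of the class are normal variates:

```python
import math

def normal_pdf(x, mu, sigma):
    """Density of N(mu, sigma^2) at x."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def normal_sf(x, mu, sigma):
    """Survival function P(U > x) for U ~ N(mu, sigma^2)."""
    return 0.5 * math.erfc((x - mu) / (sigma * math.sqrt(2)))

def p_entrainment(mu_u, sig_u, mu_up, sig_up, n=4000):
    """P(u > u_p) = integral of f_up(v) * P(u > v) dv, by the trapezoidal rule."""
    lo, hi = mu_up - 6 * sig_up, mu_up + 6 * sig_up
    dv = (hi - lo) / n
    total = 0.0
    for i in range(n + 1):
        v = lo + i * dv
        w = 0.5 if i in (0, n) else 1.0
        total += w * normal_pdf(v, mu_up, sig_up) * normal_sf(v, mu_u, sig_u)
    return total * dv

# Invented parameters: near-bed flow velocity ~ N(0.9, 0.2^2) m/s, competent
# velocity of one grain-size class ~ N(1.1, 0.15^2) m/s.
p = p_entrainment(0.9, 0.2, 1.1, 0.15)
print(f"P(entrainment) = {p:.3f}")
```

    With these invented parameters the class is only partially mobile (P ≈ 0.21), which is precisely how size-selective transport enters the reach-scale model: each class M contributes its own P weighted by p_M.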

  18. Modification of amino acids at shock pressures of 3 to 30 GPA: Initial results

    NASA Technical Reports Server (NTRS)

    Peterson, Etta; Horz, Friedrich; Haynes, Gerald; See, Thomas

    1991-01-01

    Since the discovery of amino acids in the Murchison meteorite, much speculation has focused on their origin and subsequent alteration, including the possible role of secondary processes, both terrestrial and extraterrestrial. As collisional processes and associated shock waves seem to have affected the silicate portions of many primitive meteorites, a mixture of powdered Allende (125-150 μm grain size) and nine synthetic amino acids (six protein and three nonprotein) were subjected to controlled shock pressures from 3 to 30 GPa to determine the effect of shocks on amino acid survivability. Preliminary characterizations of the recovered shock products are presented.

  19. Elucidation of the Dynamics for Hot-Spot Initiation at Nonuniform Interfaces of Highly Shocked Materials

    DTIC Science & Technology

    2011-12-07

    simulation cell with 3695375 independent atoms. For shock velocities of 2.5 and 3.5 km/s it takes ∼10 ps for the shock wave to traverse the interface. Such a...PBX during shock loading at Up = 2.5 km/s (for 6.0 ps ). The shading is based on the total slip in angstroms. This system is 54 nm thick in the shock...and compression strength.21 Each chain contains ten HTPB repeat units connected via one IPDI crosslinking molecule to four terminal HTPB repeat units

  20. Effects of damage on non-shock initiation of HMX-based explosives

    SciTech Connect

    Preston, Daniel N; Peterson, Paul D; Kien - Yin, Lee; Chavez, David E; Deluca, Racci; Avilucea, Gabriel; Hagelberg, Stephanie

    2009-01-01

    Structural damage in energetic materials plays a significant role in the probability of non-shock initiation events. Damage may occur in the form of voids or cracks, either within crystals or in binder-rich regions between crystals. These cracks affect whether hotspots generated by impact will quench or propagate under non-shock insult. For this study, we have separately engineered intra-crystalline and inter-crystalline cracks into the HMX-based PBX 9501. Intra-crystalline cracks were created by subjecting HMX to forward and reverse solid-to-solid phase transformations prior to formulation. Inter-crystalline cracks were induced by compressing formulated samples of PBX 9501 at an average strain rate of 0.00285 s^-1. Both sets of pre-damaged explosives were then impact tested using the LANL Type 12 Drop Weight Impact Machine and their sensitivities compared to non-damaged PBX 9501. Results of these tests clearly show significant differences in sensitivity between damaged and non-damaged PBX 9501.

  1. A reactive burn model for shock initiation in a PBX: scaling and separability based on the hot spot concept

    SciTech Connect

    Shaw, Milton S; Menikoff, Ralph

    2010-01-01

    In the formulation of a reactive burn model for shock initiation, we endeavor to incorporate a number of effects based on the underlying physical concept of hot spot ignition followed by the growth of reaction due to diverging deflagration fronts. The passage of a shock front sets the initial condition for reaction, leading to a fraction of the hot spots that completely burn while others quench. The form of the rate model is chosen to incorporate approximations based on the physical picture. In particular, the approximations imply scaling relations that are then used to mathematically separate various contributions. That is, the model is modular and refinements can be applied separately without changing the other contributions. For example, the effects of initial temperature, porosity, etc. predominantly enter the characterization of the non-quenching hot spot distribution. A large collection of velocity gauge data is shown to be well represented by the model with a very small number of parameters.

  2. Probabilistic Structural Analysis Program

    NASA Technical Reports Server (NTRS)

    Pai, Shantaram S.; Chamis, Christos C.; Murthy, Pappu L. N.; Stefko, George L.; Riha, David S.; Thacker, Ben H.; Nagpal, Vinod K.; Mital, Subodh K.

    2010-01-01

    NASA/NESSUS 6.2c is a general-purpose, probabilistic analysis program that computes probability of failure and probabilistic sensitivity measures of engineered systems. Because NASA/NESSUS uses highly computationally efficient and accurate analysis techniques, probabilistic solutions can be obtained even for extremely large and complex models. Once the probabilistic response is quantified, the results can be used to support risk-informed decisions regarding reliability for safety-critical and one-of-a-kind systems, as well as for maintaining a level of quality while reducing manufacturing costs for larger-quantity products. NASA/NESSUS has been successfully applied to a diverse range of problems in aerospace, gas turbine engines, biomechanics, pipelines, defense, weaponry, and infrastructure. This program combines state-of-the-art probabilistic algorithms with general-purpose structural analysis and life-prediction methods to compute the probabilistic response and reliability of engineered structures. Uncertainties in load, material properties, geometry, boundary conditions, and initial conditions can be simulated. The structural analysis methods include non-linear finite-element methods, heat-transfer analysis, polymer/ceramic matrix composite analysis, monolithic (conventional metallic) materials life-prediction methodologies, boundary element methods, and user-written subroutines. Several probabilistic algorithms are available such as the advanced mean value method and the adaptive importance sampling method. NASA/NESSUS 6.2c is structured in a modular format with 15 elements.

  3. Elucidation of the Dynamics for Hot-Spot Initiation at Nonuniform Interfaces of Highly Shocked Materials

    DTIC Science & Technology

    2011-12-07

    simulation cell with 3695375 independent atoms. For shock velocities of 2.5 and 3.5 km/s it takes ∼10 ps for the shock wave to traverse the interface. Such a...Color online) Snapshot of PBX during shock loading at Up = 2.5 km/s (for 6.0 ps ). The shading is based on the total slip in angstroms. This system is...to the optimum viscosity and compression strength.21 Each chain contains ten HTPB repeat units connected via one IPDI crosslinking molecule to four

  4. A data-driven approach for determining time of initial movement in shock experiments using photonic Doppler velocimetry

    NASA Astrophysics Data System (ADS)

    Howard, Marylesa; Diaz, Abel; Briggs, Matthew E.; Crawford, Kristen; Dolan, D. H.; Furlanetto, Michael R.; Furnish, Michael D.; Holtkamp, David B.; Lone, B. M. La; Strand, Oliver T.; Stevens, Gerald D.; Tunnell, Thomas W.

    2017-01-01

    Photonic Doppler Velocimetry is an interferometric technique for measuring the beat frequency of a moving surface, from which the velocity profile of the surface can be calculated and used to describe the physical changes the material undergoes after high-impact shock. Such a technique may also be used to characterize the performance of small detonators and to determine the time at which the surface began moving. In this work, we develop a semi-automated technique for extracting the time of initial movement from a normalized lineout of the power spectrogram near the offset frequency of each probe. We characterize the response bias of this method and compare it with the time of initial movement obtained by hand calculation from the raw voltage data. Results are shown on data from shock experiments such as gas gun setups and explosives-driven flyer plates.
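    The detection scheme described above (a normalized lineout of spectrogram power near the beat frequency, thresholded in time) can be sketched on synthetic data. The signal parameters and the 50% threshold below are invented for illustration, and a plain numpy short-time FFT stands in for a full spectrogram routine.

```python
import numpy as np

# Synthetic PDV-like record (illustrative, not real experiment data): noise,
# plus a beat tone at f_beat that switches on at t0 when the surface starts moving.
fs, t0, f_beat = 1e8, 20e-6, 5e6          # 100 MS/s sampling, onset at 20 us, 5 MHz beat
t = np.arange(0, 50e-6, 1 / fs)
rng = np.random.default_rng(0)
x = 0.05 * rng.standard_normal(t.size)
x[t >= t0] += np.sin(2 * np.pi * f_beat * t[t >= t0])

# Short-time FFT by hand, then a normalized lineout of the band power
# near the expected beat frequency.
nwin, hop = 256, 64
freqs = np.fft.rfftfreq(nwin, 1 / fs)
band = (freqs > 4e6) & (freqs < 6e6)
times, power = [], []
for start in range(0, x.size - nwin, hop):
    seg = x[start:start + nwin] * np.hanning(nwin)
    times.append((start + nwin / 2) / fs)
    power.append((np.abs(np.fft.rfft(seg)[band]) ** 2).sum())
lineout = np.asarray(power)
lineout = (lineout - lineout.min()) / (lineout.max() - lineout.min())

t_move = times[int(np.argmax(lineout > 0.5))]  # first crossing of a 50% threshold
print(f"detected time of initial movement: {t_move * 1e6:.2f} us (true onset 20 us)")
```

    The detected time lags the true onset by roughly the window length, which is one source of the response bias the paper characterizes.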

  5. Probabilistic Models to Predict the Growth Initiation Time for Pseudomonas spp. in Processed Meats Formulated with NaCl and NaNO2.

    PubMed

    Jo, Hyunji; Park, Beomyoung; Oh, Mihwa; Gwak, Eunji; Lee, Heeyoung; Lee, Soomin; Yoon, Yohan

    2014-01-01

    This study developed probabilistic models to determine the growth initiation time of Pseudomonas spp. for combinations of NaNO2 and NaCl concentrations during storage at different temperatures. Combinations of 8 NaCl concentrations (0, 0.25, 0.5, 0.75, 1, 1.25, 1.5, and 1.75%) and 9 NaNO2 concentrations (0, 15, 30, 45, 60, 75, 90, 105, and 120 ppm) were prepared in a nutrient broth. The medium was placed in the wells of 96-well microtiter plates, followed by inoculation of a five-strain mixture of Pseudomonas in each well. All microtiter plates were incubated at 4, 7, 10, 12, and 15℃ for 528, 504, 504, 360, and 144 h, respectively. Growth (growth initiation; GI) or no growth was then determined by turbidity every 24 h. These growth response data were analyzed by logistic regression to produce the growth/no-growth interface of Pseudomonas spp. and to calculate GI time. NaCl and NaNO2 were significantly effective (p<0.05) in inhibiting Pseudomonas spp. growth when stored at 4-12℃. The developed model showed that at lower NaCl concentrations, a higher NaNO2 level was required to inhibit Pseudomonas growth at 4-12℃. However, at 15℃, there was no significant effect of NaCl and NaNO2. The model overestimated GI times by 58.2±17.5 to 79.4±11%. These results indicate that the probabilistic models developed in this study should be useful in calculating the GI times of Pseudomonas spp. for combinations of NaCl and NaNO2 concentrations, considering the over-prediction percentage.
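    A fitted growth/no-growth logistic model of this kind is used by scanning time until the predicted growth probability crosses 0.5, giving the GI time for a given NaCl/NaNO2 combination. The coefficients below are invented placeholders chosen only to reproduce the qualitative behavior reported (higher NaNO2 delays growth initiation); they are not the study's fitted values.

```python
import math

# Hypothetical fitted model (placeholder coefficients, not the study's):
# logit P(growth) = b0 + b1*time_h + b2*NaCl_pct + b3*NaNO2_ppm
B0, B_TIME, B_NACL, B_NANO2 = -6.0, 0.05, -1.2, -0.02

def p_growth(time_h, nacl_pct, nano2_ppm):
    """Predicted probability of growth at a given storage time and formulation."""
    z = B0 + B_TIME * time_h + B_NACL * nacl_pct + B_NANO2 * nano2_ppm
    return 1.0 / (1.0 + math.exp(-z))

def growth_initiation_time(nacl_pct, nano2_ppm, horizon_h=528, step_h=24):
    """Earliest 24-h observation time at which P(growth) exceeds 0.5, else None."""
    for t in range(step_h, horizon_h + 1, step_h):
        if p_growth(t, nacl_pct, nano2_ppm) > 0.5:
            return t
    return None

# Higher NaNO2 delays the predicted growth-initiation time at fixed NaCl:
print(growth_initiation_time(0.5, 0))    # -> 144
print(growth_initiation_time(0.5, 120))  # -> 192
```

    The 24-h step mirrors the turbidity-reading interval of the experiment, so predicted GI times fall on the same grid as the observations.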

  6. Thermomechanical damage of nucleosome by the shock wave initiated by ion passing through liquid water

    NASA Astrophysics Data System (ADS)

    Yakubovich, Alexander V.; Surdutovich, Eugene; Solov'yov, Andrey V.

    2012-05-01

    We report the results of full-atom molecular dynamics simulations of the heat spike in a water medium caused by the propagation of a heavy ion in the vicinity of its Bragg peak. The high rate of energy transfer from the ion to the molecules of the surrounding water leads to a rapid increase of the temperature of the molecules in the vicinity of the ion's trajectory. As a result of this abrupt temperature increase, we observe the formation of a nanoscale shock wave propagating through the medium. We investigate the thermomechanical damage caused by the shock wave to a nucleosome located in the vicinity of the heavy ion's trajectory. We observe substantial deformation of the DNA secondary structure. We show that the produced shock wave can lead to thermomechanical breakage of the DNA backbone covalent bonds and present estimates for the number of such strand breaks per cell nucleus.

  7. Observation of dispersive shock waves developing from initial depressions in shallow water

    NASA Astrophysics Data System (ADS)

    Trillo, S.; Klein, M.; Clauss, G. F.; Onorato, M.

    2016-10-01

    We investigate surface gravity waves in a shallow water tank, in the limit of long wavelengths. We report the observation of non-stationary dispersive shock waves rapidly expanding over a 90 m flume. They are excited by means of a wave maker that allows us to launch a controlled smooth (single well) depression with respect to the unperturbed surface of the still water, a case that contains no solitons. The dynamics of the shock waves are observed at different levels of nonlinearity equivalent to a different relative smallness of the dispersive effect. The observed undulatory behavior is found to be in good agreement with the dynamics described in terms of a Korteweg-de Vries equation with evolution in space, though in the most nonlinear cases the description turns out to be improved over the quasi linear trailing edge of the shock by modeling the evolution in terms of the integro-differential (nonlocal) Whitham equation.
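    The long-wave model invoked above is the Korteweg-de Vries (KdV) equation. In its standard time-evolution form for shallow water (the textbook form, not necessarily the exact spatial-evolution variant fitted in the paper), with η the surface elevation, h the still-water depth, and g gravity:

```latex
\partial_t \eta + c_0\,\partial_x \eta + \frac{3c_0}{2h}\,\eta\,\partial_x \eta
  + \frac{c_0 h^2}{6}\,\partial_x^3 \eta = 0,
\qquad c_0 = \sqrt{g h}.
```

    The balance between the quadratic steepening term and the third-derivative dispersive term is what turns an initial depression, which carries no solitons, into the expanding oscillatory fan (dispersive shock wave) observed in the flume.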

  8. A cumulative shear mechanism for tissue injury initiation in shock-wave lithotripsy

    NASA Astrophysics Data System (ADS)

    Freund, Jonathan

    2007-11-01

    Considerable injury to renal tissue often accompanies treatment when shock waves are delivered to break up kidney stones. The most severe injuries seem to involve cavitation damage, driven by the expansive portion of the lithotripter's wave. However, data from animal studies indicate that inverted shock waves, which should preclude cavitation, still cause local injury near the tip of the renal papilla, which seems particularly susceptible to injury in general. We develop a model of papilla tissue, which consists mostly of parallel fluid-filled elastic tubules 10 to 30 μm in diameter, to assess whether or not the shear of repeated shocks can accumulate to cause injury. Material properties are estimated from reported measurements of renal basement membranes. A Stokes-flow boundary integral algorithm is used to estimate the net viscoelastic properties of the tissue. It is predicted that the particular microstructure of the tissue near the tip of the papilla is indeed susceptible to shear accumulation, consistent with several observations.

  9. Shock initiation behavior of PBXN-9 determined by gas gun experiments

    NASA Astrophysics Data System (ADS)

    Sanchez, Nathaniel; Gustavsen, Richard; Hooks, Daniel

    2009-06-01

    The shock to detonation transition was evaluated in the HMX-based explosive PBXN-9 by a series of light-gas gun experiments. PBXN-9 consists of 92 wt% HMX, 2 wt% Hycar 4054, and 6 wt% dioctyl adipate, with a density of 1.75 g/cm^3 and 0.8% voids. The experiments were designed to understand the specifics of wave evolution and the run distance to detonation as a function of input shock pressure. These experiments were conducted on gas guns in order to vary the input shock pressure accurately. The primary diagnostics are embedded magnetic gauges, which are based on Faraday's law of induction, along with photon Doppler velocimetry (PDV). The run distance to detonation vs. shock pressure, or ``Pop plot,'' was redefined as log(X*) = 2.14 - 1.82 log(P), which is substantially different than previous data. The Hugoniot was refined as Us = 2.32 + 2.21 Up. These data will be useful for the development of predictive models for the safety and performance of PBXN-9 in addition to providing an increased understanding of HMX-based explosives in varying formulations.
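    The two fitted relations quoted above are straightforward to evaluate directly. The sketch below assumes the conventional Pop-plot units (run distance X* in mm, input pressure P in GPa, velocities Us and Up in km/s), which the abstract does not state explicitly.

```python
import math

def run_distance_mm(p_gpa):
    """Pop plot quoted above: log10(X*) = 2.14 - 1.82 * log10(P).
    Units assumed mm and GPa (the conventional Pop-plot units)."""
    return 10 ** (2.14 - 1.82 * math.log10(p_gpa))

def shock_velocity_km_s(up_km_s):
    """Unreacted Hugoniot quoted above: Us = 2.32 + 2.21 * Up (km/s)."""
    return 2.32 + 2.21 * up_km_s

print(f"X*(5 GPa)      = {run_distance_mm(5.0):.2f} mm")
print(f"Us(Up=1 km/s)  = {shock_velocity_km_s(1.0):.2f} km/s")
```

    The steep negative Pop-plot slope (-1.82) means the run distance shrinks rapidly with input pressure, which is why accurate control of the input shock on the gas gun matters for mapping the transition.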

  10. Shock Initiation Behavior of PBXN-9 Determined by Gas Gun Experiments

    NASA Astrophysics Data System (ADS)

    Sanchez, N. J.; Gustavsen, R. L.; Hooks, D. E.

    2009-12-01

    The shock to detonation transition was evaluated in the HMX-based explosive PBXN-9 by a series of light-gas gun experiments. PBXN-9 consists of 92 wt% HMX, 2 wt% Hycar 4054, and 6 wt% dioctyl adipate, with a density of 1.75 g/cm3 and 0.8% voids. The experiments were designed to understand the specifics of wave evolution and the run distance to detonation as a function of input shock pressure. These experiments were conducted on gas guns in order to vary the input shock pressure accurately. The primary diagnostics were embedded magnetic gauges, which are based on Faraday's law of induction, and Photon Doppler Velocimetry (PDV). The run distance to detonation vs. shock pressure, or "Pop plot," was redefined as log(X*) = 2.14 - 1.82 log(P), which is substantially different than previous data. The Hugoniot was refined as Us = 2.32 + 2.21 Up. These data will be useful for the development of predictive models for the safety and performance of PBXN-9 along with providing increased understanding of HMX-based explosives in varying formulations.

  11. Shock initiation behavior of PBXN-9 determined by gas gun experiments

    SciTech Connect

    Sanchez, Nathaniel J; Gustavsen, Richard L; Hooks, Daniel E

    2009-01-01

    The shock to detonation transition was evaluated in the HMX-based explosive PBXN-9 by a series of light-gas gun experiments. PBXN-9 consists of 92 wt% HMX, 2 wt% Hycar 4054, and 6 wt% dioctyl adipate, with a density of 1.75 g/cm^3 and 0.8% voids. The experiments were designed to understand the specifics of wave evolution and the run distance to detonation as a function of input shock pressure. These experiments were conducted on gas guns in order to vary the input shock pressure accurately. The primary diagnostics were embedded magnetic gauges, which are based on Faraday's law of induction, and Photon Doppler Velocimetry (PDV). The run distance to detonation vs. shock pressure, or 'Pop plot,' was redefined as log(X*) = 2.14 - 1.82 log(P), which is substantially different than previous data. The Hugoniot was refined as Us = 2.32 + 2.21 Up. These data will be useful for the development of predictive models for the safety and performance of PBXN-9 along with providing increased understanding of HMX-based explosives in varying formulations.

  12. Probabilistic Risk Analysis and Fault Trees as Tools in Improving the Delineation of Wellhead Protection Areas: An Initial Discussion

    NASA Astrophysics Data System (ADS)

    Rodak, C. M.; Silliman, S. E.

    2010-12-01

    Delineation of a wellhead protection area (WHPA) is a critical component of managing / protecting the aquifer(s) supplying potable water to a public water-supply well. While a number of previous authors have addressed questions related to uncertainties in advective capture zones, methods for assessing WHPAs in the presence of uncertainty in the chemistry of groundwater contaminants, the relationship between land-use and contaminant sources, and the impact on health risk within the receiving population are more limited. Probabilistic risk analysis (PRA) combined with fault trees (FT) addresses this latter challenge by providing a structure whereby four key WHPA issues may be addressed: (i) uncertainty in land-use practices and chemical release, (ii) uncertainty in groundwater flow, (iii) variability in natural attenuation properties (and/or remediation) of the contaminants, and (iv) estimated health risk from contaminant arrival at a well. The potential utility of PRA-FT in this application is considered through a simplified case study involving management decisions related both to regional land use planning and local land-use zoning regulation. An application-specific fault tree is constructed to visualize and identify the events required for health risk failure at the well and a Monte Carlo approach is used to create multiple realizations of groundwater flow and chemical transport to a well in a model of a simple, unconfined aquifer. Model parameters allowed to vary during this simplified case study include hydraulic conductivity, probability of a chemical spill (related to land use variation in space), and natural attenuation through variation in rate of decay of the contaminant. Numerical results are interpreted in association with multiple land-use management scenarios as well as multiple cancer risk assumptions regarding the contaminant arriving at the well. 
This case study shows significant variability of health risk at the well; however, general trends were

  13. The Role of the Membrane-Initiated Heat Shock Response in Cancer

    PubMed Central

    Bromberg, Zohar; Weiss, Yoram

    2016-01-01

    The heat shock response (HSR) is a cellular response to diverse environmental and physiological stressors resulting in the induction of genes encoding molecular chaperones, proteases, and other proteins that are essential for protection and recovery from cellular damage. Because cells frequently encounter fluctuations in the environment that alter proteostasis, diverse perturbations can cause accumulation of misfolded proteins. Because tumor cells exploit this natural adaptive mechanism for coping with stress and misfolded proteins, in recent years the proteostasis network has become a promising target for anti-tumor therapy. The membrane is the first structure affected by heat shock and therefore may be the first to sense it. The membrane also connects extracellular and intracellular signals. Hence, there is "cross talk" between the HSR and membranes: heat shock can induce changes in membrane fluidity, leading to membrane lipid remodeling of the kind that occurs in several diseases, including cancer. During the last decade, a possible new therapy has emerged in which an external molecule is used to induce membrane lipid re-organization. Because at the moment there are very few substances that regulate the HSR effectively, an alternative way has been sought to modulate chaperone activities through the plasma membrane. Recently, we suggested that membrane Transient Receptor Potential Vanilloid-1 (TRPV1) modulators regulate the HSR in cancer cells. However, the primary targets of the signal transduction pathway are as yet unknown. This review provides an overview of the current literature regarding the role of the HSR in membrane remodeling in cancer, since a deep understanding of membrane biology in cancer and the membrane heat-sensing pathway is essential to design novel efficient therapies. PMID:27200359

  14. Numerical simulation of increasing initial perturbations of a bubble in the bubble-shock interaction problem

    NASA Astrophysics Data System (ADS)

    Korneev, Boris; Levchenko, Vadim

    2016-12-01

    A set of numerical experiments on the interaction between a planar shock wave and a spherical bubble with a slightly perturbed surface is considered. Spectral analysis of the instability growth is carried out and three-dimensional Euler equations of fluid dynamics are chosen as the mathematical model for the process. The equations are solved via the Runge-Kutta discontinuous Galerkin method and the special DiamondTorre algorithm for multi-GPU implementation is used.

  15. Measurements of shock initiation in the tri-amino-tri-nitro-benzene based explosive PBX 9502: Wave forms from embedded gauges and comparison of four different material lots

    NASA Astrophysics Data System (ADS)

    Gustavsen, R. L.; Sheffield, S. A.; Alcon, R. R.

    2006-06-01

    We have completed a series of ambient temperature (23+/-2 °C) shock initiation experiments on four lots (batches) of the insensitive high explosive PBX 9502. PBX 9502 consists by weight of 95% dry-aminated tri-amino-tri-nitro-benzene (TATB) and 5% of the plastic binder Kel-F 800, a 3/1 copolymer of chloro-trifluoro-ethylene and vinylidene-fluoride. Two of the four lots were manufactured using the ``virgin'' process. Both of these lots had few fine TATB particles. One virgin lot was stored the majority of its life (>15 yr) as a molding powder and pressed as a 240 mm diameter by 130 mm thick cylinder. The other virgin lot was stored the majority of its life as a hollow hemispherical pressing. Two lots were manufactured using the ``recycle'' process and had many fine TATB particles. One recycled lot was stored the majority of its life as a molding powder, while the other was stored as a pressed charge. Shock initiation experiments were performed using precisely characterized planar shocks generated by impacting an explosive sample with a projectile accelerated in a two-stage gas gun. The evolution of the shock into a detonation was measured using 10 or 11 embedded electromagnetic particle velocity gauges and three ``shock tracker'' gauges. Results include the following: (1) high-quality particle velocity wave forms which should be useful for calibrating reactive burn models, (2) no difference in the sustained shock initiation response between lots regardless of material processing or storage history, (3) responses for all lots equivalent to those measured by Dick et al. [J. Appl. Phys. 63, 4881 (1988)], (4) additional Hugoniot and Pop-plot data for PBX 9502, and (5) the short shock response which, when compared to the sustained shock response, shows no extension in the run distance unless the rarefaction overtakes the shock front prior to the distance it would have run towards a detonation as a sustained shock.

  16. Motivational Modulation of Self-Initiated and Externally Triggered Movement Speed Induced by Threat of Shock: Experimental Evidence for Paradoxical Kinesis in Parkinson’s Disease

    PubMed Central

    McDonald, Louise M.; Griffin, Harry J.; Angeli, Aikaterini; Torkamani, Mariam; Georgiev, Dejan; Jahanshahi, Marjan

    2015-01-01

    Background Paradoxical kinesis has been observed in bradykinetic people with Parkinson's disease. Paradoxical kinesis occurs in situations where an individual is strongly motivated or influenced by relevant external cues. Our aim was to induce paradoxical kinesis in the laboratory. We tested whether the motivation of avoiding a mild electric shock was sufficient to induce paradoxical kinesis in externally-triggered and self-initiated conditions in people with Parkinson's disease tested on medication and in age-matched controls. Methods Participants completed a shock avoidance behavioural paradigm in which half of the trials could result in a mild electric shock if the participant did not move fast enough. Half of the trials of each type were self-initiated and half were externally-triggered. The criterion for avoiding shock was a maximum movement time, adjusted according to each participant's performance on previous trials using a staircase tracking procedure. Results On trials with threat of shock, both patients with Parkinson's disease and controls had faster movement times compared to no-potential-shock trials, in both self-initiated and externally-triggered conditions. The magnitude of improvement in movement time from no-potential-shock to potential-shock trials was positively correlated with anxiety ratings. Conclusions When motivated to avoid mild electric shock, patients with Parkinson's disease, similar to healthy controls, showed significant speeding of movement execution. This was observed in both self-initiated and externally-triggered versions of the task. Nevertheless, in the externally-triggered condition the improvement in reaction times induced by the motivation to avoid shocks was greater for the Parkinson's disease patients than for controls, highlighting the value of external cues for movement initiation in these patients. The magnitude of improvement from the no-potential-shock to the potential-shock trials was associated with threat-induced anxiety. This demonstration of

  17. Early growth response 1 mediates the systemic and hepatic inflammatory response initiated by hemorrhagic shock.

    PubMed

    Prince, Jose M; Ming, Mei Jian; Levy, Ryan M; Liu, Shubing; Pinsky, David J; Vodovotz, Yoram; Billiar, Timothy R

    2007-02-01

    Hemorrhagic shock (HS) is a major cause of morbidity and mortality in trauma patients. The early growth response 1 (Egr-1) transcription factor is induced by a variety of cellular stresses, including hypoxia, and may function as a master switch to trigger the expression of numerous key inflammatory mediators. We hypothesized that HS would induce hepatic expression of Egr-1 and that Egr-1 upregulates the inflammatory response after HS. Egr-1-deficient mice and wild-type (WT) controls (n ≥ 5 for all groups) were subjected to HS alone or HS followed by resuscitation (HS/R). Other mice were subjected to a sham procedure which included general anesthesia and vessel cannulation but no shock (sham). After the HS, HS/R, or sham procedures, mice were euthanized for determination of serum concentrations of interleukin (IL) 6, IL-10, and alanine aminotransferase. Northern blot analysis was performed to evaluate Egr-1 messenger RNA (mRNA) expression. Liver whole cell lysates were evaluated for Egr-1 protein expression by Western blot analysis. Hepatic expression of IL-6, granulocyte colony-stimulating factor, and intercellular adhesion molecule 1 mRNA was determined by semiquantitative reverse transcriptase-polymerase chain reaction. Egr-1 DNA binding was assessed using the electrophoretic mobility shift assay. Hemorrhagic shock resulted in rapid and transient hepatic expression of Egr-1 mRNA in WT mice by 1 h, whereas protein and DNA binding activity were evident by 2.5 h. Egr-1 mRNA expression diminished after 4 h of resuscitation, whereas Egr-1 protein expression and DNA binding activity persisted through resuscitation. Egr-1-deficient mice exhibited decreased levels of hepatic inflammatory mediators compared with WT controls, with a decrease in hepatic mRNA levels of IL-6 by 42%, granulocyte colony-stimulating factor by 39%, and intercellular adhesion molecule 1 by 43%. Similarly, Egr-1-deficient mice demonstrated a decreased systemic inflammatory response and hepatic injury after HS

  18. SHOCK INITIATION EXPERIMENTS ON PBX 9501 EXPLOSIVE AT PRESSURES BELOW 3 GPa WITH ASSOCIATED IGNITION AND GROWTH MODELING

    SciTech Connect

    Chidester, S K; Thompson, D G; Vandersall, K S; Idar, D J; Tarver, C M; Garcia, F; Urtiew, P A

    2007-06-13

    Shock initiation experiments on the explosive PBX 9501 (95% HMX, 2.5% Estane, and 2.5% nitroplasticizer by weight) were performed at pressures below 3 GPa to obtain in-situ pressure gauge data, run-distance-to-detonation thresholds, and Ignition and Growth modeling parameters. Propellant-driven gas guns (101 mm and 155 mm) were utilized to initiate the PBX 9501 explosive, with manganin piezoresistive pressure gauge packages placed between sample slices. The run-distance-to-detonation points on the Pop-plot for these experiments showed agreement with previously published data, and Ignition and Growth modeling parameters were obtained with a good fit to the experimental data. This parameter set will allow accurate code predictions to be calculated for safety scenarios in the low-pressure regime (below 3 GPa) involving the PBX 9501 explosive.

  19. Using probabilistic terrorism risk modeling for regulatory benefit-cost analysis: application to the Western hemisphere travel initiative in the land environment.

    PubMed

    Willis, Henry H; LaTourrette, Tom

    2008-04-01

    This article presents a framework for using probabilistic terrorism risk modeling in regulatory analysis. We demonstrate the framework with an example application involving a regulation under consideration, the Western Hemisphere Travel Initiative in the Land Environment (WHTI-L). First, we estimate annualized loss from terrorist attacks with the Risk Management Solutions (RMS) Probabilistic Terrorism Model. We then estimate the critical risk reduction, which is the risk-reducing effectiveness of WHTI-L needed for its benefit, in terms of reduced terrorism loss in the United States, to exceed its cost. Our analysis indicates that the critical risk reduction depends strongly not only on uncertainties in the terrorism risk level, but also on uncertainty in the cost of the regulation and on how casualties are monetized. For a terrorism risk level based on the RMS standard risk estimate, the baseline regulatory cost estimate for WHTI-L, and a range of casualty cost estimates based on the willingness-to-pay approach, our estimate for the expected annualized loss from terrorism ranges from $2.7 billion to $5.2 billion. For this range in annualized loss, the critical risk reduction for WHTI-L ranges from 7% to 13%. Basing results on a lower risk level that halves the annualized terrorism loss would double the critical risk reduction (14-26%), and basing results on a higher risk level that doubles the annualized terrorism loss would cut the critical risk reduction in half (3.5-6.6%). Ideally, decisions about terrorism security regulations and policies would be informed by true benefit-cost analyses in which the estimated benefits are compared to costs. Such analyses for terrorism security efforts face substantial impediments stemming from the great uncertainty in the terrorist threat and the very low recurrence interval for large attacks. Several approaches can be used to estimate how a terrorism security program or regulation reduces the
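    The break-even calculation this abstract describes can be sketched directly. In the sketch below, the critical risk reduction is taken as annualized regulatory cost divided by expected annualized terrorism loss; the $0.36 billion cost figure is a hypothetical value chosen only so the outputs land in the 7-13% range quoted above, not a number from the RMS model:

    ```python
    # Sketch of the critical-risk-reduction logic: a regulation's benefit is
    # (risk-reduction fraction) x (expected annualized terrorism loss), so the
    # break-even ("critical") fraction is annualized cost / annualized loss.
    # The cost figure below is hypothetical (see lead-in).

    def critical_risk_reduction(annual_cost: float, annual_loss: float) -> float:
        """Smallest risk-reduction fraction at which benefit equals cost."""
        return annual_cost / annual_loss

    cost = 0.36e9                       # hypothetical annualized cost, USD
    for loss in (2.7e9, 5.2e9):         # loss range from the abstract
        crr = critical_risk_reduction(cost, loss)
        print(f"annualized loss ${loss / 1e9:.1f}B -> critical risk reduction {crr:.1%}")

    # Halving the assumed loss doubles the critical risk reduction, and
    # doubling the loss halves it, matching the sensitivity the abstract notes.
    ```

    With these illustrative inputs the sketch reproduces the reported range: roughly 13% at a $2.7B loss and 7% at a $5.2B loss.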

  20. Appropriate evaluation and treatment of heart failure patients after implantable cardioverter-defibrillator discharge: time to go beyond the initial shock.

    PubMed

    Mishkin, Joseph D; Saxonhouse, Sherry J; Woo, Gregory W; Burkart, Thomas A; Miles, William M; Conti, Jamie B; Schofield, Richard S; Sears, Samuel F; Aranda, Juan M

    2009-11-24

    Multiple clinical trials support the use of implantable cardioverter-defibrillators (ICDs) for prevention of sudden cardiac death in patients with heart failure (HF). Unfortunately, several complicating issues have arisen from the universal use of ICDs in HF patients. An estimated 20% to 35% of HF patients who receive an ICD for primary prevention will experience an appropriate shock within 1 to 3 years of implant, and one-third of patients will experience an inappropriate shock. An ICD shock is associated with a 2- to 5-fold increase in mortality, with the most common cause being progressive HF. The median time from initial ICD shock to death ranges from 168 to 294 days depending on HF etiology and the appropriateness of the ICD therapy. Despite this prognosis, current guidelines do not provide a clear stepwise approach to managing these high-risk patients. An ICD shock increases HF event risk and should trigger a thorough evaluation to determine the etiology of the shock and guide subsequent therapeutic interventions. Several combinations of pharmacologic and device-based interventions such as adding amiodarone to baseline beta-blocker therapy, adjusting ICD sensitivity, and employing antitachycardia pacing may reduce future appropriate and inappropriate shocks. Aggressive HF surveillance and management is required after an ICD shock, as the risk of sudden cardiac death is transformed to an increased HF event risk.

  1. Effect of alcohol addition on shock-initiated formation of soot from benzene

    NASA Technical Reports Server (NTRS)

    Frenklach, Michael; Yuan, Tony

    1988-01-01

    Soot formation in benzene-methanol and benzene-ethanol argon-diluted mixtures was studied behind reflected shock waves by monitoring the attenuation of a He-Ne laser beam. The experiments were performed at temperatures of 1580-2250 K, pressures of 2.0-3.0 bar, and total carbon atom concentrations of (2.0-2.7) x 10^17 atoms/cm^3. The results obtained indicate that the addition of alcohol suppresses the formation of soot from benzene at all temperatures, and that the reduction in soot yields increases with the amount of alcohol added. The analysis of the results indicates that the suppression effect is probably due to the oxidation of soot and soot precursors by OH and the removal of hydrogen atoms by alcohol and water molecules.

  2. Unilateral pulmonary edema: a rare initial presentation of cardiogenic shock due to acute myocardial infarction.

    PubMed

    Shin, Jeong Hun; Kim, Seok Hwan; Park, Jinkyu; Lim, Young-Hyo; Park, Hwan-Cheol; Choi, Sung Il; Shin, Jinho; Kim, Kyung-Soo; Kim, Soon-Gil; Hong, Mun K; Lee, Jae Ung

    2012-02-01

    Cardiogenic unilateral pulmonary edema (UPE) is a rare clinical entity that is often misdiagnosed at first. Most cases of cardiogenic UPE occur in the right upper lobe and are caused by severe mitral regurgitation (MR). We present an unusual case of right-sided UPE in a patient with cardiogenic shock due to acute myocardial infarction (AMI) without severe MR. The patient was successfully treated by percutaneous coronary intervention and medical therapy for heart failure. Follow-up chest radiography showed complete resolution of the UPE. This case reminds us that AMI can present as UPE even in patients without severe MR or any preexisting pulmonary disease affecting the vasculature or parenchyma of the lung.

  3. Scaling effect for HF chain chemical laser initiated by a standing shock wave

    NASA Astrophysics Data System (ADS)

    Mel'nikov, Igor V.; Stepanov, A. A.; Shcheglov, V. A.

    1994-06-01

    Scaling theory is applied to a cw chain HF laser initiated by a stationary detonation wave. This provides a fast and accurate method of estimating the output parameters of the laser for different compositions of the initial mixture. A comparison with numerical simulation demonstrates that the method achieves a reasonable degree of accuracy.

  4. Overview of Probabilistic Methods for SAE G-11 Meeting for Reliability and Uncertainty Quantification for DoD TACOM Initiative with SAE G-11 Division

    NASA Technical Reports Server (NTRS)

    Singhal, Surendra N.

    2003-01-01

    The SAE G-11 RMSL Division and Probabilistic Methods Committee meeting during October 6-8 at the Best Western Sterling Inn, Sterling Heights (Detroit), Michigan, is co-sponsored by the US Army Tank-automotive & Armaments Command (TACOM). The meeting will provide an industry/government/academia forum to review RMSL technology; reliability and probabilistic technology; reliability-based design methods; software reliability; and maintainability standards. With over 100 members, including members of national/international standing, the G-11 Probabilistic Methods Committee has as its mission to "enable/facilitate rapid deployment of probabilistic technology to enhance the competitiveness of our industries by better, faster, greener, smarter, affordable and reliable product development."

  5. (U) Analysis of shock-initiated PBX-9501 through porous CeO2

    SciTech Connect

    Fredenburg, David A.; Dattelbaum, Dana Mcgraw; Dennis-Koller, Darcie

    2015-07-24

    The attenuation properties of an impact-initiated PBX-9501 explosive through several thicknesses of CeO2 powder are investigated. The CeO2 is at an initial porous density of 4.0 g/cm3, roughly 55% of theoretical maximum density. The input (into the powder) and propagated (through the powder) wave profiles are measured using optical velocimetry. Results show a reduction of the average wave speed, CX, and the peak steady-state material velocity, uP, with increasing powder thickness from 1.5 to 5.0 mm.

  6. Constitutively Active Acetylcholine-Dependent Potassium Current Increases Atrial Defibrillation Threshold by Favoring Post-Shock Re-Initiation.

    PubMed

    Bingen, Brian O; Askar, Saïd F A; Neshati, Zeinab; Feola, Iolanda; Panfilov, Alexander V; de Vries, Antoine A F; Pijnappels, Daniël A

    2015-10-21

    Electrical cardioversion (ECV), a mainstay in atrial fibrillation (AF) treatment, is unsuccessful in up to 10-20% of patients. An important aspect of the remodeling process caused by AF is the constitutive activation of the atrium-specific acetylcholine-dependent potassium current (IK,ACh → IK,ACh-c), which is associated with ECV failure. This study investigated the role of IK,ACh-c in ECV failure and in setting the atrial defibrillation threshold (aDFT) in optically mapped neonatal rat cardiomyocyte monolayers. AF was induced by burst pacing followed by application of biphasic shocks of 25-100 V to determine aDFT. Blocking IK,ACh-c by tertiapin significantly decreased aDFT, which correlated with a significant increase in wavelength during reentry. Genetic knockdown experiments, using lentiviral vectors encoding a Kcnj5-specific shRNA to modulate IK,ACh-c, yielded similar results. Mechanistically, failed ECV was attributed to incomplete phase singularity (PS) removal or reemergence of PSs (i.e. re-initiation) through unidirectional propagation of shock-induced action potentials. Re-initiation occurred at significantly higher voltages than incomplete PS removal and was inhibited by IK,ACh-c blockade. Whole-heart mapping confirmed our findings, showing a 60% increase in ECV success rate after IK,ACh-c blockade. This study provides new mechanistic insight into failing ECV of AF and identifies IK,ACh-c as a possible atrium-specific target to increase ECV effectiveness while decreasing its harmfulness.

  7. Constitutively Active Acetylcholine-Dependent Potassium Current Increases Atrial Defibrillation Threshold by Favoring Post-Shock Re-Initiation

    PubMed Central

    Bingen, Brian O.; Askar, Saïd F. A.; Neshati, Zeinab; Feola, Iolanda; Panfilov, Alexander V.; de Vries, Antoine A. F.; Pijnappels, Daniël A.

    2015-01-01

    Electrical cardioversion (ECV), a mainstay in atrial fibrillation (AF) treatment, is unsuccessful in up to 10–20% of patients. An important aspect of the remodeling process caused by AF is the constitutive activation of the atrium-specific acetylcholine-dependent potassium current (IK,ACh → IK,ACh-c), which is associated with ECV failure. This study investigated the role of IK,ACh-c in ECV failure and in setting the atrial defibrillation threshold (aDFT) in optically mapped neonatal rat cardiomyocyte monolayers. AF was induced by burst pacing followed by application of biphasic shocks of 25–100 V to determine aDFT. Blocking IK,ACh-c by tertiapin significantly decreased aDFT, which correlated with a significant increase in wavelength during reentry. Genetic knockdown experiments, using lentiviral vectors encoding a Kcnj5-specific shRNA to modulate IK,ACh-c, yielded similar results. Mechanistically, failed ECV was attributed to incomplete phase singularity (PS) removal or reemergence of PSs (i.e. re-initiation) through unidirectional propagation of shock-induced action potentials. Re-initiation occurred at significantly higher voltages than incomplete PS removal and was inhibited by IK,ACh-c blockade. Whole-heart mapping confirmed our findings, showing a 60% increase in ECV success rate after IK,ACh-c blockade. This study provides new mechanistic insight into failing ECV of AF and identifies IK,ACh-c as a possible atrium-specific target to increase ECV effectiveness while decreasing its harmfulness. PMID:26487066

  8. Trans sodium crocetinate for hemorrhagic shock: effect of time delay in initiating therapy.

    PubMed

    Giassi, Lisa J; Poynter, A Kennon; Gainer, John L

    2002-12-01

    A new drug, trans sodium crocetinate (TSC), has been suggested for use in resuscitation after trauma. TSC has been shown to increase survival in a rat model of hemorrhagic shock. It also results in an increase in blood pressure and a decrease in plasma lactate levels when given immediately after hemorrhage. TSC increases whole-body oxygen consumption rates, and it is thought that its physiological effects are due to the increased oxygen availability. In fact, TSC therapy and 100% oxygen therapy show similar results when used in the same rat hemorrhage model. It has been suggested, however, that 100% oxygen therapy is effective only if begun immediately after hemorrhage. Such a window of opportunity has been said to exist for other resuscitation methods; thus, the current study was designed to determine whether this is true for TSC. In one series of experiments, rats were bled 60% of their blood volumes and given an injection of TSC (or saline) 20 min after the hemorrhage ended. The injection was then repeated four times, spaced 10 min apart. Thirty minutes after the final injection, the animals were infused with normal saline. TSC again restored blood pressure and other parameters, but repeated dosing was necessary. In addition, this therapy prevented an increase in liver enzymes (transaminases) as measured 24 h after hemorrhage. In a second study, rats were bled 60% of their blood volumes, followed by a second bleeding (an additional 10%) done 10 min later. No subsequent fluid was infused in this group. The majority of the animals treated with TSC after the second hemorrhage survived, whereas the controls did not. These data suggest that TSC is effective when given after a delay. The dosing regimen must be different, however, presumably because of the blood acidosis that develops after hemorrhage. The results also suggest that TSC may be protective against secondary liver damage resulting from trauma.

  9. Initialization shock in decadal hindcasts due to errors in wind stress over the tropical Pacific

    NASA Astrophysics Data System (ADS)

    Pohlmann, Holger; Kröger, Jürgen; Greatbatch, Richard J.; Müller, Wolfgang A.

    2016-12-01

    Low prediction skill in the tropical Pacific is a common problem in decadal prediction systems, especially for lead years 2-5, for which skill in many systems is lower than in uninitialized experiments. On the other hand, the tropical Pacific is of almost worldwide climate relevance through its teleconnections with other tropical and extratropical regions and is also of importance for global mean temperature. Understanding the causes of the reduced prediction skill is thus of major interest for decadal climate predictions. We look into the problem of reduced prediction skill by analyzing the Max Planck Institute Earth System Model (MPI-ESM) decadal hindcasts for the fifth phase of the Coupled Model Intercomparison Project and performing a sensitivity experiment in which hindcasts are initialized from a model run forced only by surface wind stress. In both systems, sea surface temperature variability in the tropical Pacific is successfully initialized, but most skill is lost at lead years 2-5. Utilizing the sensitivity experiment enables us to pin down the reason for the reduced prediction skill in MPI-ESM to errors in the wind stress used for the initialization. A spurious trend in the wind stress forcing displaces the equatorial thermocline in MPI-ESM unrealistically. When the climate model is then switched into its forecast mode, the recovery process triggers artificial El Niño and La Niña events at the surface. Our results demonstrate the importance of realistic wind stress products for the initialization of decadal predictions.

  10. The Panchromatic Hubble Andromeda Treasury. IV. A Probabilistic Approach to Inferring the High-mass Stellar Initial Mass Function and Other Power-law Functions

    NASA Astrophysics Data System (ADS)

    Weisz, Daniel R.; Fouesneau, Morgan; Hogg, David W.; Rix, Hans-Walter; Dolphin, Andrew E.; Dalcanton, Julianne J.; Foreman-Mackey, Daniel T.; Lang, Dustin; Johnson, L. Clifton; Beerman, Lori C.; Bell, Eric F.; Gordon, Karl D.; Gouliermis, Dimitrios; Kalirai, Jason S.; Skillman, Evan D.; Williams, Benjamin F.

    2013-01-01

    We present a probabilistic approach for inferring the parameters of the present-day power-law stellar mass function (MF) of a resolved young star cluster. This technique (1) fully exploits the information content of a given data set; (2) can account for observational uncertainties in a straightforward way; (3) assigns meaningful uncertainties to the inferred parameters; (4) avoids the pitfalls associated with binning data; and (5) can be applied to virtually any resolved young cluster, laying the groundwork for a systematic study of the high-mass stellar MF (M ≳ 1 M⊙). Using simulated clusters and Markov Chain Monte Carlo sampling of the probability distribution functions, we show that estimates of the MF slope, α, are unbiased and that the uncertainty, Δα, depends primarily on the number of observed stars and on the range of stellar masses they span, assuming that the uncertainties on individual masses and the completeness are both well characterized. Using idealized mock data, we compute the theoretical precision, i.e., lower limits, on α, and provide an analytic approximation for Δα as a function of the observed number of stars and mass range. Comparison with literature studies shows that ~3/4 of quoted uncertainties are smaller than the theoretical lower limit. By correcting these uncertainties to the theoretical lower limits, we find that the literature studies yield ⟨α⟩ = 2.46, with a 1σ dispersion of 0.35 dex. We verify that it is impossible for a power-law MF to obtain meaningful constraints on the upper mass limit of the initial mass function, beyond the lower bound of the most massive star actually observed. We show that avoiding substantial biases in the MF slope requires (1) including the MF as a prior when deriving individual stellar mass estimates, (2) modeling the uncertainties in the individual stellar masses, and (3) fully characterizing and then explicitly modeling the completeness for stars of a given mass. The precision on MF

  11. THE PANCHROMATIC HUBBLE ANDROMEDA TREASURY. IV. A PROBABILISTIC APPROACH TO INFERRING THE HIGH-MASS STELLAR INITIAL MASS FUNCTION AND OTHER POWER-LAW FUNCTIONS

    SciTech Connect

    Weisz, Daniel R.; Fouesneau, Morgan; Dalcanton, Julianne J.; Clifton Johnson, L.; Beerman, Lori C.; Williams, Benjamin F.; Hogg, David W.; Foreman-Mackey, Daniel T.; Rix, Hans-Walter; Gouliermis, Dimitrios; Dolphin, Andrew E.; Lang, Dustin; Bell, Eric F.; Gordon, Karl D.; Kalirai, Jason S.; Skillman, Evan D.

    2013-01-10

    We present a probabilistic approach for inferring the parameters of the present-day power-law stellar mass function (MF) of a resolved young star cluster. This technique (1) fully exploits the information content of a given data set; (2) can account for observational uncertainties in a straightforward way; (3) assigns meaningful uncertainties to the inferred parameters; (4) avoids the pitfalls associated with binning data; and (5) can be applied to virtually any resolved young cluster, laying the groundwork for a systematic study of the high-mass stellar MF (M ≳ 1 M⊙). Using simulated clusters and Markov Chain Monte Carlo sampling of the probability distribution functions, we show that estimates of the MF slope, α, are unbiased and that the uncertainty, Δα, depends primarily on the number of observed stars and on the range of stellar masses they span, assuming that the uncertainties on individual masses and the completeness are both well characterized. Using idealized mock data, we compute the theoretical precision, i.e., lower limits, on α, and provide an analytic approximation for Δα as a function of the observed number of stars and mass range. Comparison with literature studies shows that ~3/4 of quoted uncertainties are smaller than the theoretical lower limit. By correcting these uncertainties to the theoretical lower limits, we find that the literature studies yield ⟨α⟩ = 2.46, with a 1σ dispersion of 0.35 dex. We verify that it is impossible for a power-law MF to obtain meaningful constraints on the upper mass limit of the initial mass function, beyond the lower bound of the most massive star actually observed. We show that avoiding substantial biases in the MF slope requires (1) including the MF as a prior when deriving individual stellar mass estimates, (2) modeling the uncertainties in the individual stellar masses, and (3) fully characterizing and then explicitly modeling the
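    The inference described in the two records above can be illustrated with a stripped-down sketch (not the authors' code): a truncated power-law likelihood for observed stellar masses, maximized over a grid of slopes under a flat prior. It assumes a complete sample on a known mass range and omits the measurement-uncertainty and completeness terms the paper models explicitly; the mass range, sample size, and grid are arbitrary choices for illustration.

    ```python
    # Minimal sketch of power-law mass-function slope inference: assumes a
    # complete, error-free sample on a known mass range [m_lo, m_hi], unlike
    # the full hierarchical treatment in the paper.
    import math
    import random

    def log_likelihood(alpha, masses, m_lo, m_hi):
        # p(m | alpha) = C * m^-alpha on [m_lo, m_hi], C the normalization
        if abs(alpha - 1.0) < 1e-9:
            log_norm = math.log(math.log(m_hi / m_lo))
        else:
            log_norm = math.log((m_hi**(1 - alpha) - m_lo**(1 - alpha)) / (1 - alpha))
        return sum(-alpha * math.log(m) for m in masses) - len(masses) * log_norm

    def sample_power_law(alpha, m_lo, m_hi, n, rng):
        # inverse-CDF sampling for a truncated power law (alpha != 1)
        a = 1.0 - alpha
        return [(rng.random() * (m_hi**a - m_lo**a) + m_lo**a) ** (1.0 / a)
                for _ in range(n)]

    rng = random.Random(0)
    masses = sample_power_law(2.35, 1.0, 100.0, 2000, rng)

    # flat prior over a grid of slopes -> maximum-likelihood estimate
    grid = [1.5 + 0.01 * i for i in range(151)]        # alpha in [1.5, 3.0]
    alpha_hat = max(grid, key=lambda a: log_likelihood(a, masses, 1.0, 100.0))
    print(f"recovered alpha ~= {alpha_hat:.2f} (true value 2.35)")
    ```

    A full treatment would replace the grid with MCMC sampling and fold the per-star mass uncertainties and completeness into the likelihood, as the abstract stresses.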

  12. Heat shock protein DNAJB8 is a novel target for immunotherapy of colon cancer-initiating cells.

    PubMed

    Morita, Rena; Nishizawa, Satoshi; Torigoe, Toshihiko; Takahashi, Akari; Tamura, Yasuaki; Tsukahara, Tomohide; Kanaseki, Takayuki; Sokolovskaya, Alice; Kochin, Vitaly; Kondo, Toru; Hashino, Satoshi; Asaka, Masahiro; Hara, Isao; Hirohashi, Yoshihiko; Sato, Noriyuki

    2014-04-01

    The aim of the present study was to establish cancer stem-like cell/cancer-initiating cell (CSC/CIC)-targeting immunotherapy. The CSC/CIC are thought to be essential for tumor maintenance, recurrence and distant metastasis. Therefore, they are reasonable targets for cancer therapy. In the present study, we found that a heat shock protein (HSP) 40 family member, DnaJ (Hsp40) homolog, subfamily B, member 8 (DNAJB8), is preferentially expressed in CSC/CIC derived from colorectal cancer (CRC) cells rather than in non-CSC/CIC. Overexpression of DNAJB8 enhanced the expression of stem cell markers and tumorigenicity, indicating that DNAJB8 has a role in CRC CSC/CIC. A DNAJB8-specific cytotoxic T lymphocyte (CTL) response could be induced by a DNAJB8-derived antigenic peptide. A CTL clone specific for the DNAJB8 peptide showed higher killing activity toward CRC CSC/CIC compared with non-CSC/CIC, and CTL adoptive transfer into CRC CSC/CIC showed an antitumor effect in vivo. Taken together, the results indicate that DNAJB8 is expressed and has a role in CRC CSC/CIC and that DNAJB8 is a novel target for CRC CSC/CIC-targeting immunotherapy.

  13. Physiopathology of shock

    PubMed Central

    Bonanno, Fabrizio Giuseppe

    2011-01-01

    Shock syndromes are of three types: cardiogenic, hemorrhagic, and inflammatory. Hemorrhagic shock has its initial deranged macro-hemodynamic variables in the blood volume and venous return. In cardiogenic shock there is a primary pump failure that has cardiac output/mean arterial pressure as its initial deranged variables. In inflammatory shock, it is the microcirculation that is mainly affected, while the initial deranged macrocirculation variable is the total peripheral resistance, hit by the systemic inflammatory response. PMID:21769210

  14. Hot spot formation and chemical reaction initiation in shocked HMX crystals with nanovoids: a large-scale reactive molecular dynamics study.

    PubMed

    Zhou, Tingting; Lou, Jianfeng; Zhang, Yangeng; Song, Huajie; Huang, Fenglei

    2016-07-14

    We report million-atom reactive molecular dynamics simulations of shock initiation of β-cyclotetramethylene tetranitramine (β-HMX) single crystals containing nanometer-scale spherical voids. Shock-induced void collapse and subsequent hot spot formation, as well as chemical reaction initiation, are observed; these depend on the void size and impact strength. For an impact velocity of 1 km/s and a void radius of 4 nm, the void collapse process includes three stages; the dominant mechanism is the convergence of upstream molecules toward the centerline and the downstream surface of the void, forming flowing molecules. Hot spot formation also undergoes three stages, and the principal mechanism is kinetic energy transforming to thermal energy due to the collision of flowing molecules on the downstream surface. The high temperature of the hot spot initiates a local chemical reaction, and the breakage of the N-NO2 bond plays the key role in the initial reaction mechanism. The impact strength and void size have noticeable effects on the shock dynamical process, resulting in a variation of the predominant mechanisms leading to void collapse and hot spot formation. Larger voids or stronger shocks result in more intense hot spots and, thus, more violent chemical reactions, promoting more reaction channels and generating more reaction products in a shorter duration. The reaction products are mainly concentrated in the developed hot spot, indicating that the chemical reactivity of the HMX crystal is greatly enhanced by void collapse. The detailed information derived from this study can aid a thorough understanding of the role of void collapse in hot spot formation and the chemical reaction initiation of explosives.

  15. Geometrical shock dynamics of fast magnetohydrodynamic shocks

    NASA Astrophysics Data System (ADS)

    Mostert, Wouter; Pullin, Dale I.; Samtaney, Ravi; Wheatley, Vincent

    2016-11-01

    We extend the theory of geometrical shock dynamics (GSD, Whitham 1958), to two-dimensional fast magnetohydrodynamic (MHD) shocks moving in the presence of nonuniform magnetic fields of general orientation and strength. The resulting generalized area-Mach number rule is adapted to MHD shocks moving in two spatial dimensions. A partially-spectral numerical scheme developed from that of Schwendeman (1993) is described. This is applied to the stability of plane MHD fast shocks moving into a quiescent medium containing a uniform magnetic field whose field lines are inclined to the plane-shock normal. In particular, we consider the time taken for an initially planar shock subject to an initial perturbed magnetosonic Mach number distribution, to first form shock-shocks. Supported by KAUST OCRF Award No. URF/1/2162-01.
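    For orientation, the ordinary gas-dynamic area-Mach number rule that this paper generalizes can be sketched in its strong-shock limit, where Whitham's theory gives A·M^n = const with n = 1 + 2/γ + √(2γ/(γ−1)); the MHD generalization itself is not reproduced here, and the sample area change below is an arbitrary illustration:

    ```python
    # Strong-shock limit of Whitham's area-Mach number rule for ordinary
    # gas dynamics: along a ray tube of area A, the shock Mach number M
    # satisfies A * M^n = const, with n depending only on gamma.
    import math

    def whitham_exponent(gamma: float) -> float:
        # n = 1 + 2/gamma + sqrt(2*gamma / (gamma - 1))
        return 1.0 + 2.0 / gamma + math.sqrt(2.0 * gamma / (gamma - 1.0))

    def mach_after_area_change(M1: float, A1: float, A2: float, gamma: float) -> float:
        # A1 * M1^n = A2 * M2^n  =>  M2 = M1 * (A1 / A2)^(1/n)
        n = whitham_exponent(gamma)
        return M1 * (A1 / A2) ** (1.0 / n)

    n = whitham_exponent(1.4)
    print(f"n(gamma = 1.4) = {n:.4f}")    # ~5.0743 for air
    # contracting the ray tube to half its area strengthens the shock:
    print(f"M: 4.0 -> {mach_after_area_change(4.0, 1.0, 0.5, 1.4):.3f}")
    ```

    The paper's generalized area-Mach number rule plays the same role for fast MHD shocks, with the field strength and orientation entering the relation.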

  16. Decomposition of some polynitro arenes initiated by heat and shock Part II: Several N-(2,4,6-trinitrophenyl)-substituted amino derivatives.

    PubMed

    Varga, Róbert; Zeman, Svatopluk; Kouba, Martin

    2006-10-11

    Samples of 2,4,6-trinitroaniline (PAM), 2,4,6-trinitro-N-(2,4,6-trinitrophenyl)aniline (DPA), N,N'-bis(2,4,6-trinitrophenyl)-3,5-dinitropyridine-2,6-diamine (PYX) and N,N',N''-tris(2,4,6-trinitrophenyl)-1,3,5-triazine-2,4,6-triamine (TPM) were exposed to heat or to shock and then analysed chromatographically (LC-UV and LC/MS). It was found that the main identified decomposition products of these two incomplete initiations are identical for each of the compounds studied. It has been stated that the chemical micro-mechanism of the primary fragmentations of their low-temperature decomposition should be the same as in the case of their initiation by shock, including fragmentation during their detonation transformation.

  17. Probabilistic progressive buckling of trusses

    NASA Technical Reports Server (NTRS)

    Pai, Shantaram S.; Chamis, Christos C.

    1991-01-01

    A three-bay, space, cantilever truss is probabilistically evaluated to describe progressive buckling and truss collapse in view of the numerous uncertainties associated with the structural, material, and load variables (primitive variables) that describe the truss. Initially, the truss is deterministically analyzed for member forces, and member(s) in which the axial force exceeds the Euler buckling load are identified. These member(s) are then discretized with several intermediate nodes and a probabilistic buckling analysis is performed on the truss to obtain its probabilistic buckling loads and respective mode shapes. Furthermore, sensitivities associated with the uncertainties in the primitive variables are investigated, margin of safety values for the truss are determined, and truss end node displacements are noted. These steps are repeated by sequentially removing the buckled member(s) until onset of truss collapse is reached. Results show that this procedure yields an optimum truss configuration for a given loading and for a specified reliability.

  18. Probabilistic progressive buckling of trusses

    NASA Technical Reports Server (NTRS)

    Pai, Shantaram S.; Chamis, Christos C.

    1994-01-01

    A three-bay, space, cantilever truss is probabilistically evaluated to describe progressive buckling and truss collapse in view of the numerous uncertainties associated with the structural, material, and load variables that describe the truss. Initially, the truss is deterministically analyzed for member forces, and members in which the axial force exceeds the Euler buckling load are identified. These members are then discretized with several intermediate nodes, and a probabilistic buckling analysis is performed on the truss to obtain its probabilistic buckling loads and the respective mode shapes. Furthermore, sensitivities associated with the uncertainties in the primitive variables are investigated, margin of safety values for the truss are determined, and truss end node displacements are noted. These steps are repeated by sequentially removing buckled members until onset of truss collapse is reached. Results show that this procedure yields an optimum truss configuration for a given loading and for a specified reliability.
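    The first step in both records, comparing each member's axial force against its Euler buckling load, extends naturally to a probabilistic check over the uncertain primitive variables. The Monte Carlo sketch below is a minimal illustration with hypothetical distributions for the modulus, section inertia, and axial load; the papers' actual analyses use a dedicated probabilistic structural-analysis code, not this sampling scheme.

    ```python
    # Monte Carlo sketch of a probabilistic Euler buckling check for one
    # pin-ended truss member.  All distributions and values are hypothetical.
    import math
    import random

    def euler_buckling_load(E, I, L):
        # pin-ended Euler column: P_cr = pi^2 * E * I / L^2
        return math.pi**2 * E * I / L**2

    def buckling_probability(n_samples=100_000, seed=1):
        rng = random.Random(seed)
        L = 1.5                              # member length, m (assumed)
        failures = 0
        for _ in range(n_samples):
            E = rng.gauss(70e9, 3.5e9)       # modulus, Pa (5% scatter)
            I = rng.gauss(8.0e-8, 4.0e-9)    # second moment of area, m^4
            P = rng.gauss(2.0e4, 2.0e3)      # axial force, N (10% scatter)
            if P > euler_buckling_load(E, I, L):
                failures += 1
        return failures / n_samples

    print(f"P(axial force exceeds Euler load) ~= {buckling_probability():.3f}")
    ```

    Repeating such a check after each buckled member is removed, as the abstracts describe, traces the progressive-buckling sequence up to truss collapse.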

  19. Unsteady triple-shock configurations and vortex contact structures initiated by the interaction of an energy source with a shock layer in gases

    NASA Astrophysics Data System (ADS)

    Azarova, O. A.; Gvozdeva, L. G.

    2016-08-01

    The effect of physical and chemical properties of the gaseous medium on the formation of triple Mach configurations and vortex contact structures and on the stagnation pressure and drag force dynamics has been studied for supersonic flows with external energy sources. For the ratio of specific heats that varies in a range of 1.1-1.4, a significant (up to 51.8%) difference has been obtained for the angles of triple-shock configurations in flows at Mach 4 past a cylindrically blunted plate. When studying the dynamics of the decreases in the stagnation pressure and drag force, it has been revealed that these effects are amplified and the vortex mechanism of drag reduction starts to prevail as the adiabatic index decreases.

  20. Decomposition of some polynitro arenes initiated by heat and shock Part I. 2,4,6-Trinitrotoluene.

    PubMed

    Varga, Róbert; Zeman, Svatopluk

    2006-05-20

    Samples of 2,4,6-trinitrotoluene (TNT) exposed to heat or to shock, and residues after their detonation, have been analyzed chromatographically (LC-UV and LC/MS). It was found that the main identified decomposition intermediates are identical in all three cases. 4,6-Dinitro-2,1-benzoisoxazole and 2,4,6-trinitrobenzaldehyde are the most reactive of these. It has been stated that the chemical micro-mechanism of the primary fragmentations of shock-exposed TNT molecules and/or its detonation transformation should be the same as in the case of its low-temperature thermal decomposition.

  1. Cold/menthol TRPM8 receptors initiate the cold-shock response and protect germ cells from cold-shock–induced oxidation

    PubMed Central

    Borowiec, Anne-Sophie; Sion, Benoit; Chalmel, Frédéric; D. Rolland, Antoine; Lemonnier, Loïc; De Clerck, Tatiana; Bokhobza, Alexandre; Derouiche, Sandra; Dewailly, Etienne; Slomianny, Christian; Mauduit, Claire; Benahmed, Mohamed; Roudbaraki, Morad; Jégou, Bernard; Prevarskaya, Natalia; Bidaux, Gabriel

    2016-01-01

    Testes of most male mammals present the particularity of being externalized from the body and are consequently slightly cooler than core body temperature (4–8°C below). Although hypothermia of the testis is known to increase germ cell apoptosis, little is known about the underlying molecular mechanisms, including cold sensors, transduction pathways, and apoptosis triggers. In this study, using a functional knockout mouse model of the cold and menthol receptors, dubbed transient receptor potential melastatin 8 (TRPM8) channels, we found that TRPM8 initiated the cold-shock response by differentially modulating cold- and heat-shock proteins. Moreover, apoptosis of germ cells increased in proportion to the cooling level in control mice but was independent of temperature in knockout mice. We also observed that the rate of germ cell death correlated positively with the reactive oxygen species level and negatively with the expression of the detoxifying enzymes. This result suggests that the TRPM8 sensor is a key determinant of germ cell fate under hypothermic stimulation.—Borowiec, A.-S., Sion, B., Chalmel, F., Rolland, A. D., Lemonnier, L., De Clerck, T., Bokhobza, A., Derouiche, S., Dewailly, E., Slomianny, C., Mauduit, C., Benahmed, M., Roudbaraki, M., Jégou, B., Prevarskaya, N., Bidaux, G. Cold/menthol TRPM8 receptors initiate the cold-shock response and protect germ cells from cold-shock–induced oxidation. PMID:27317670

  2. A Numerical Model of CME Initiation and Shock Development for the 1998 May 2 Event: Implications for the Acceleration of GeV Protons

    NASA Astrophysics Data System (ADS)

    Roussev, I. I.; Sokolov, I. V.; Forbes, T. G.; Gombosi, T. I.; Lee, M. A.

    2004-05-01

    We present modeling results on the initiation and evolution of the coronal mass ejection which occurred on 1998 May 2 in NOAA AR8210. This is done within the framework of a global model of the solar magnetic field as it was observed by the Wilcox Solar Observatory. Our calculations are fully three-dimensional and involve compressible magnetohydrodynamics. We begin by first producing a steady-state solar wind for Carrington Rotation 1935/6. The solar eruption is initiated by slowly evolving the boundary conditions until a critical point is reached where the configuration loses mechanical equilibrium. At this point, the field erupts, and a flux rope is ejected away from the Sun, reaching a maximum speed in excess of 1,000 km/s. The shock that forms in front of the rope reaches a fast-mode Mach number in excess of 4 and a compression ratio greater than 3 by the time it has traveled a distance of 5 solar radii from the surface. Thus, by constructing a fully three-dimensional numerical model, which incorporates magnetic field data and a loss-of-equilibrium mechanism, we have been able to demonstrate that a shock strong enough to account for the energization of solar particles can develop close to the Sun. For this event, diffusive-shock-acceleration theory predicts a distribution of solar energetic protons with a cut-off energy of about 10 GeV.
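
    The cut-off estimate above rests on the standard test-particle relation of diffusive-shock-acceleration theory, in which the accelerated-particle spectrum is a power law set solely by the shock compression ratio. A minimal sketch of that textbook relation (not a value extracted from this particular simulation):

```python
def dsa_momentum_index(r):
    """Test-particle diffusive-shock-acceleration power-law index q for
    the accelerated-particle distribution f(p) ~ p**(-q), given the
    shock compression ratio r."""
    if r <= 1.0:
        raise ValueError("compression ratio must exceed 1")
    return 3.0 * r / (r - 1.0)

# A strong adiabatic shock (r = 4) gives the canonical q = 4; the
# r > 3 shock in the simulation yields a slightly softer spectrum.
print(dsa_momentum_index(4.0))  # -> 4.0
print(dsa_momentum_index(3.0))  # -> 4.5
```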

  3. Probabilistic cellular automata.

    PubMed

    Agapie, Alexandru; Andreica, Anca; Giuclea, Marius

    2014-09-01

    Cellular automata are binary lattices used for modeling complex dynamical systems. The automaton evolves iteratively from one configuration to another, using some local transition rule based on the number of ones in the neighborhood of each cell. With respect to the number of cells allowed to change per iteration, we speak of either synchronous or asynchronous automata. If randomness is involved to some degree in the transition rule, we speak of probabilistic automata; otherwise they are called deterministic. Whichever type of cellular automaton we are dealing with, the main theoretical challenge stays the same: starting from an arbitrary initial configuration, predict (with highest accuracy) the end configuration. If the automaton is deterministic, the outcome simplifies to one of two configurations, all zeros or all ones. If the automaton is probabilistic, the whole process is modeled by a finite homogeneous Markov chain, and the outcome is the corresponding stationary distribution. Based on our previous results for the asynchronous case - connecting the probability of a configuration in the stationary distribution to its number of zero-one borders - the article offers both numerical and theoretical insight into the long-term behavior of synchronous cellular automata.
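
    The absorbing-state picture described above can be illustrated with a toy synchronous probabilistic automaton on a ring, where each cell becomes 1 with probability equal to the fraction of ones in its closed three-cell neighborhood (an illustrative rule chosen for simplicity, not the rule analyzed in the article):

```python
import random

def step(config, rng):
    """One synchronous update of a probabilistic CA on a ring: each cell
    becomes 1 with probability equal to the fraction of ones in its
    closed three-cell neighborhood."""
    n = len(config)
    return [1 if rng.random() < (config[i - 1] + config[i] + config[(i + 1) % n]) / 3
            else 0
            for i in range(n)]

def run_until_absorbed(n=16, max_steps=100_000, seed=1):
    """Iterate from a random start until the chain hits an absorbing
    configuration; returns 1 for all-ones, 0 for all-zeros."""
    rng = random.Random(seed)
    config = [rng.randint(0, 1) for _ in range(n)]
    for _ in range(max_steps):
        total = sum(config)
        if total in (0, n):      # all-zeros and all-ones are absorbing
            return total // n    # 0 or 1
        config = step(config, rng)
    return None                  # did not absorb (practically never)
```

    Under this rule the all-zeros and all-ones configurations are the only absorbing states, so the finite Markov chain's long-run distribution is supported on them, mirroring the all-zeros/all-ones dichotomy of the deterministic case.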

  4. Shock initiation and detonation study on high concentration H2O2/H2O solutions using in-situ magnetic gauges

    SciTech Connect

    Sheffield, Stephen A; Dattelbaum, Dana M; Stahl, David B; Gibson, L Lee; Bartram, Brian D; Engelke, Ray

    2010-01-01

    Concentrated hydrogen peroxide (H2O2) has been known to detonate for many years. However, because of its reactivity and the difficulty in handling and confining it, along with its large critical diameter, few studies providing basic information about its initiation and detonation properties have been published. We are conducting a study to understand and quantify the initiation and detonation properties of highly concentrated H2O2, using a gas-driven two-stage gun to produce well-defined shock inputs. Multiple magnetic gauges are used to make in-situ measurements of the growth of reaction and subsequent detonation in the liquid. These experiments are designed to be one-dimensional to eliminate any difficulties that might be encountered with large critical diameters. Because of concern about the reactivity of the H2O2 with the confining materials, a remote loading system has been developed. The gun is pressurized, then the cell is filled and the experiment shot, all in less than three minutes. Several experiments have been completed on ~98 wt% H2O2/H2O mixtures; homogeneous shock initiation behavior has been observed in the experiments where reaction occurs. The initial shock pressurizes and heats the mixture. After an induction time, a thermal-explosion-type reaction produces an evolving reactive wave that strengthens and eventually overdrives the first wave, producing a detonation. From these experiments, we have determined unreacted Hugoniot points, times-to-detonation that indicate low sensitivity (an input of 13.5 GPa produces detonation in 1 µs, compared to 9.5 GPa for neat nitromethane), and detonation velocities of over 6.6 km/s for high-concentration H2O2/H2O solutions.

  5. On the applicability of probabilistics

    SciTech Connect

    Roth, P.G.

    1996-12-31

    GEAE's traditional lifing approach, based on Low Cycle Fatigue (LCF) curves, is evolving for fracture critical powder metal components by incorporating probabilistic fracture mechanics analysis. Supporting this move is a growing validation database which convincingly demonstrates that probabilistics work given the right inputs. Significant efforts are being made to ensure the right inputs. For example, Heavy Liquid Separation (HLS) analysis has been developed to quantify and control inclusion content (1). Also, an intensive seeded fatigue program providing a model for crack initiation at inclusions is ongoing (2). Despite the optimism and energy, probabilistics are only tools and have limitations. Designing to low failure probabilities helps provide protection, but other strategies are needed to protect against surprises. A low risk design limit derived from a predicted failure distribution can lead to a high risk deployment if there are unaccounted-for deviations from analysis assumptions. Recognized deviations which are statistically quantifiable can be integrated into the probabilistic analysis (an advantage of the approach). When deviations are known to be possible but are not properly describable statistically, it may be more appropriate to maintain the traditional position of conservatively bounding relevant input parameters. Finally, safety factors on analysis results may be called for in cases where there is little experience supporting new design concepts or material applications (where unrecognized deviations might be expected).

  6. Ignition and growth modeling of the shock initiation of PBX 9502 at -55°C and -196°C

    NASA Astrophysics Data System (ADS)

    Chidester, Steven K.; Tarver, Craig M.

    2017-01-01

    Gustavsen et al. reported the results of 26 shock initiation experiments using embedded particle velocity gauges on various lots of PBX 9502 (95% TATB/ 5% Kel-F binder) cooled to -55°C. A previously developed Ignition and Growth reactive flow model for -55°C PBX 9502 was compared to this newer data and was modified slightly. More recently, Hollowell et al. published similar data on PBX 9502 cooled to -196°C (77 K) with liquid nitrogen. An Ignition and Growth (I&G) model parameter set for -196°C PBX 9502 was developed and yielded good agreement with the measured shock initiation process and transition to detonation. Hollowell et al. also measured the interface particle velocity histories between the detonating PBX 9502 charges and various windows (PMMA, Kel-F, and LiF) placed at the rear PBX 9502 surfaces. This detonation data was accurately calculated using the -196°C PBX 9502 I&G parameters.

  7. Does an infrasonic acoustic shock wave resonance of the manganese 3+ loaded/copper depleted prion protein initiate the pathogenesis of TSE?

    PubMed

    Purdey, Mark

    2003-06-01

    Intensive exposures to natural and artificial sources of infrasonic acoustic shock (tectonic disturbances, supersonic aeroplanes, etc.) have been observed in ecosystems supporting mammalian populations that are blighted by clusters of traditional and new variant strains of transmissible spongiform encephalopathy (TSE). But TSEs will only emerge in those 'infrasound-rich' environments which are simultaneously influenced by eco-factors that induce a high manganese (Mn)/low copper (Cu)-zinc (Zn) ratio in brains of local mammalian populations. Since cellular prion protein (PrPc) is a cupro-protein expressed throughout the circadian mediated pathways of the body, it is proposed that PrP's Cu component performs a role in the conduction and distribution of endogenous electromagnetic energy; energy that has been transduced from incoming ultraviolet, acoustic, geomagnetic radiations. TSE pathogenesis is initiated once Mn substitutes at the vacant Cu domain on PrPc and forms a nonpathogenic, protease resistant, 'sleeping' prion. A second stage of pathogenesis comes into play once a low frequency wave of infrasonic shock metamorphoses the piezoelectric atomic structure of the Mn 3+ component of the prion, thereby 'priming' the sleeping prion into its fully fledged, pathogenic TSE isoform - where the paramagnetic status of the Mn 3+ atom is transformed into a stable ferrimagnetic lattice work, due to the strong electron-phonon coupling resulting from the dynamic 'Jahn-Teller' type distortions of the oxygen octahedra specific to the trivalent Mn species. The so called 'infectivity' of the prion is a misnomer and should be correctly defined as the contagious field inducing capacity of the ferrimagnetic Mn 3+ component of the prion; which remains pathogenic at all temperatures below the 'curie point'. A progressive domino-like 'metal to ligand to metal' ferrimagnetic corruption of the conduits of electromagnetic superexchange is initiated. 
The TSE diseased brain can be likened to

  8. Perception of Speech Reflects Optimal Use of Probabilistic Speech Cues

    ERIC Educational Resources Information Center

    Clayards, Meghan; Tanenhaus, Michael K.; Aslin, Richard N.; Jacobs, Robert A.

    2008-01-01

    Listeners are exquisitely sensitive to fine-grained acoustic detail within phonetic categories for sounds and words. Here we show that this sensitivity is optimal given the probabilistic nature of speech cues. We manipulated the probability distribution of one probabilistic cue, voice onset time (VOT), which differentiates word initial labial…

  9. [A case of anaphylactoid shock occurring immediately after the initiation of second intravenous administration of high-dose immunoglobulin (IVIg) in a patient with Crow-Fukase syndrome].

    PubMed

    Takahashi, Teruyuki; Ono, Shin-ichi; Ogawa, Katuhiko; Tamura, Masato; Mizutani, Tomohiko

    2003-06-01

    We report a case of anaphylactoid shock occurring immediately after the initiation of second intravenous administration of high-dose immunoglobulin (IVIg) in a patient with Crow-Fukase syndrome. The patient was a 57-year-old woman, who was admitted to our hospital because of numbness and muscle weakness in the four extremities, difficulty in walking, and foot edema. On admission, her skin was dry and rough, and also showing scattered pigmentation, small hemangiomas, and hypertrichosis in both legs. She had distal dominant muscle weakness, more prominent in her legs, and was not able to walk. Deep tendon reflexes in her four extremities were markedly diminished or absent. She had a glove and stocking type of paresthesia, severe impairment of vibration, and absence of joint position sensation in her four extremities. On laboratory data, serum vascular endothelial growth factor (VEGF) was markedly elevated to 5,184 pg/ml (normal: below 220 pg/ml). Cerebrospinal fluid examination revealed cell counts of 2/microliter and protein level of 114 mg/dl. Abdominal echo showed marked hepatosplenomegaly. On peripheral nerve conduction study, both motor and sensory conduction velocity were undetectable in her legs. We diagnosed her condition as Crow-Fukase syndrome, and started IVIg of polyethyleneglycol-treated gamma-globulin (PEG-glob) at 400 mg/kg/day for 5 consecutive days for polyneuropathy. Since the first IVIg mildly improved muscle weakness, we tried the second IVIg of PEG-glob. However, immediately after the initiation of second IVIg of PEG-glob, she developed hypotension, dyspnea, cold sweating, cyanosis, and became lethargic. We immediately stopped IVIg and started first-aid treatment with epinephrine and corticosteroid for these symptoms. This treatment was successful and the patient fully recovered without any sequelae. Since serum IgE level remained unchanged and lymphocyte stimulation test (LST) was positive against the same lot number of PEG-glob, we diagnosed

  10. Saline-expanded group O uncrossmatched packed red blood cells as an initial resuscitation fluid in severe shock.

    PubMed

    Schwab, C W; Civil, I; Shayne, J P

    1986-11-01

    Despite an excellent military experience with the use of the "universal donor" as an immediately available blood component, considerable reluctance to use uncrossmatched Group O packed cells (TOB) remains. In addition, problems continue with rapid blood acquisition in the emergency department. To study the safety of TOB used as an immediate resuscitation component, a 30-month prospective study of all patients arriving at a single trauma unit was undertaken. By protocol TOB (O-, female; O+, male) was delivered to the shock room prior to patient arrival and was expanded to 500 mL by adding 250 mL prewarmed saline (39.4 C) to the existing RBC unit. Transfusion was ordered on clinical signs of Class III or Class IV hemorrhage. Ninety-nine patients entered the protocol, receiving a total of 1,136 units of blood (11.5 units/patient). Four hundred ten units (4.1 units/patient) of uncrossmatched blood were administered on patient arrival--322 units of TOB and 88 units of type-specific blood (TSB). Seven patients (7.4%) had prior transfusions, and 14 (58%) women had prior pregnancies. Complications included disseminated intravascular coagulation, 12%; adult respiratory distress syndrome, 8%; and hepatitis, 1%. Forty-nine patients (49%) required massive transfusion (greater than 10 units/24 hr). All patients were followed clinically and by the blood bank for any signs of transfusion reactions or incompatibility throughout their hospital courses; none developed. There were no deaths related to transfusion incompatibility. We conclude that TOB used as an immediate resuscitative blood component is safe.(ABSTRACT TRUNCATED AT 250 WORDS)

  11. Initial stage of motion in the Lavrent'ev-Ishlinskii problem on longitudinal shock on a rod

    NASA Astrophysics Data System (ADS)

    Morozov, N. F.; Belyaev, A. K.; Tovstik, P. E.; Tovstik, T. P.

    2015-11-01

    The transverse motion of a thin rod under a sudden application of a prolonged longitudinal load at the initial stage of motion is considered. The introduction of self-similar variables makes it possible to propose a description of the transverse motion weakly dependent on the longitudinal deformation. Both single dents and periodic systems of dents are considered.

  12. 77 FR 58590 - Determining Technical Adequacy of Probabilistic Risk Assessment for Risk-Informed License...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-09-21

    ... COMMISSION Determining Technical Adequacy of Probabilistic Risk Assessment for Risk-Informed License...: LWR Edition,'' Section 19.1, ``Determining the Technical Adequacy of Probabilistic Risk Assessment for... Probabilistic Risk Assessment for Risk-Informed License Amendment Requests After Initial Fuel Load''...

  13. Radiative Shock Waves In Emerging Shocks

    NASA Astrophysics Data System (ADS)

    Drake, R. Paul; Doss, F.; Visco, A.

    2011-05-01

    In laboratory experiments we produce radiative shock waves having dense, thin shells. These shocks are similar to shocks emerging from optically thick environments in astrophysics in that they are strongly radiative with optically thick shocked layers and optically thin or intermediate downstream layers through which radiation readily escapes. Examples include shocks breaking out of a Type II supernova (SN) and the radiative reverse shock during the early phases of the SN remnant produced by a red supergiant star. We produce these shocks by driving a low-Z plasma piston (Be) at > 100 km/s into Xe gas at 1.1 atm pressure. The shocked Xe collapses to > 20 times its initial density. Measurements of structure by radiography and temperature by several methods confirm that the shock wave is strongly radiative. We observe small-scale perturbations in the post-shock layer, modulating the shock and material interfaces. We describe a variation of the Vishniac instability theory of decelerating shocks and an analysis of associated scaling relations to account for the growth of these perturbations, identify how they scale to astrophysical systems such as SN 1993J, and consider possible future experiments. Collaborators in this work have included H.F. Robey, J.P. Hughes, C.C. Kuranz, C.M. Huntington, S.H. Glenzer, T. Doeppner, D.H. Froula, M.J. Grosskopf, and D.C. Marion. * Supported by the US DOE NNSA under the Predictive Sci. Academic Alliance Program by grant DE-FC52-08NA28616, the Stewardship Sci. Academic Alliances program by grant DE-FG52-04NA00064, and the Nat. Laser User Facility by grant DE-FG03-00SF22021.

  14. Blueberry shock virus

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Blueberry shock disease was first observed in Washington state in 1987 and was initially confused with blueberry scorch, caused by Blueberry scorch virus (BlScV). However, shock-affected plants produced a second flush of leaves after flowering, and the plants appeared normal by late summer except for the lac...

  15. Shock Detector for SURF model

    SciTech Connect

    Menikoff, Ralph

    2016-01-11

    SURF and its extension SURFplus are reactive burn models aimed at shock initiation and propagation of detonation waves in high explosives. A distinctive feature of these models is that the burn rate depends on the lead shock pressure. A key part of the models is an algorithm to detect the lead shock. Typically, shock-capturing hydro algorithms exhibit small oscillations behind a shock. Here we investigate how well the shock detection algorithm works for a nearly steady propagating detonation wave in one dimension using the Eulerian xRage code.
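
    To illustrate the detection problem (this is an illustrative sketch, not the SURF algorithm itself), a naive lead-shock detector on a 1-D pressure profile can scan from the ambient side for the furthest-forward cell-to-cell pressure drop comparable to the largest drop, so that the small post-shock oscillations mentioned above are not mistaken for the front:

```python
def lead_shock_index(pressure, threshold=0.5):
    """Return the cell index of the lead shock in a 1-D pressure profile
    laid out with the wave moving toward higher indices (shocked
    material at low indices, ambient gas at high indices).

    Illustrative detector: take the furthest-forward cell-to-cell
    pressure drop that is at least `threshold` times the largest drop,
    so small oscillations behind the front are ignored."""
    jumps = [pressure[i] - pressure[i + 1] for i in range(len(pressure) - 1)]
    if not jumps or max(jumps) <= 0.0:
        return None  # no compressive front anywhere
    cutoff = threshold * max(jumps)
    for i in range(len(jumps) - 1, -1, -1):  # scan from the ambient side
        if jumps[i] >= cutoff:
            return i
```

    For the profile [10.2, 9.8, 10.1, 9.9, 10.0, 1.0, 1.0, 1.0] the detector returns index 4, the front itself, rather than one of the oscillation cells behind it.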

  16. Structure in Radiating Shocks

    NASA Astrophysics Data System (ADS)

    Doss, Forrest

    2010-11-01

    The basic radiative shock experiment is a shock launched into a gas of high-atomic-number material at high velocities, which fulfills the conditions for radiative losses to collapse the post-shock material to over 20 times the initial gas density. This has been accomplished using the OMEGA Laser Facility by illuminating a Be ablator for 1 ns with a total of 4 kJ, launching the requisite shock, faster than 100 km/sec, into a polyimide shock tube filled with Xe. The experiments have lateral dimensions of 600 μm and axial dimensions of 2-3 mm, and are diagnosed by x-ray backlighting. Repeatable structure beyond the one-dimensional picture of a shock as a planar discontinuity was discovered in the experimental data. One form this took was that of radial boundary effects near the tube walls, extending approximately seventy microns into the system. The cause of this effect - low density wall material which is heated by radiation transport ahead of the shock, launching a new converging shock ahead of the main shock - is apparently unique to high-energy-density experiments. Another form of structure is the appearance of small-scale perturbations in the post-shock layer, modulating the shock and material interfaces and creating regions of enhanced and diminished areal density within the layer. The authors have applied an instability theory, a variation of the Vishniac instability of decelerating shocks, to describe the growth of these perturbations. We have also applied Bayesian statistical methods to better understand the uncertainties associated with measuring shocked layer thickness in the presence of tilt. Collaborators: R. P. Drake, H. F. Robey, C. C. Kuranz, C. M. Huntington, M. J. Grosskopf, D. C. Marion.

  17. Comparison of Dawn and Dusk Precipitating Electron Energy Populations Shortly After the Initial Shock for the January 10th, 1997 Magnetic Cloud

    NASA Technical Reports Server (NTRS)

    Spann, J.; Germany, G.; Swift, W.; Parks, G.; Brittnacher, M.; Elsen, R.

    1997-01-01

    The observed precipitating electron energy between 0130 UT and 0400 UT on January 10th, 1997, indicates that a more energetic precipitating electron population appears in the 1800-2200 MLT sector of the auroral oval at 0300 UT. This increase in energy occurs after the initial shock of the magnetic cloud reaches the Earth (0114 UT) and after faint but dynamic polar cap precipitation has been cleared out. The more energetic population is observed to remain rather constant in MLT through the onset of auroral activity (0330 UT) and to the end of the Polar spacecraft apogee pass. Data from the Ultraviolet Imager LBH long and LBH short images are used to quantify the average energy of the precipitating auroral electrons. The Wind spacecraft located about 100 RE upstream monitored the IMF and plasma parameters during the passing of the cloud. The effects of oblique-angle viewing are included in the analysis. Suggestions as to the source of this hot electron population will be presented.

  18. Recent developments of the NESSUS probabilistic structural analysis computer program

    NASA Technical Reports Server (NTRS)

    Millwater, H.; Wu, Y.-T.; Torng, T.; Thacker, B.; Riha, D.; Leung, C. P.

    1992-01-01

    The NESSUS probabilistic structural analysis computer program combines state-of-the-art probabilistic algorithms with general purpose structural analysis methods to compute the probabilistic response and the reliability of engineering structures. Uncertainty in loading, material properties, geometry, boundary conditions and initial conditions can be simulated. The structural analysis methods include nonlinear finite element and boundary element methods. Several probabilistic algorithms are available such as the advanced mean value method and the adaptive importance sampling method. The scope of the code has recently been expanded to include probabilistic life and fatigue prediction of structures in terms of component and system reliability and risk analysis of structures considering cost of failure. The code is currently being extended to structural reliability considering progressive crack propagation. Several examples are presented to demonstrate the new capabilities.
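
    The core computation behind such probabilistic response codes can be sketched as a crude Monte Carlo reliability estimate; the load and strength distributions below are illustrative stand-ins, not NESSUS inputs, and the analytic check uses the fact that the difference of two independent normals is normal:

```python
import math
import random

def failure_probability(n=200_000, seed=42):
    """Crude Monte Carlo estimate of P(load > strength) for a component
    with normally distributed load L ~ N(100, 10) and strength
    S ~ N(150, 15) (illustrative numbers only)."""
    rng = random.Random(seed)
    fails = sum(rng.gauss(100, 10) > rng.gauss(150, 15) for _ in range(n))
    return fails / n

# Analytic check: L - S is normal with mean -50 and standard deviation
# sqrt(10**2 + 15**2) ~ 18.03, so P(failure) = Phi(-50/18.03) ~ 0.0028.
beta = 50.0 / math.hypot(10.0, 15.0)          # reliability index
exact = 0.5 * math.erfc(beta / math.sqrt(2))  # Phi(-beta)
```

    Techniques like the advanced mean value method and adaptive importance sampling mentioned above exist precisely because plain sampling of this kind becomes expensive when the target failure probability is very small.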

  19. Probabilistic Composite Design

    NASA Technical Reports Server (NTRS)

    Chamis, Christos C.

    1997-01-01

    Probabilistic composite design is described in terms of a computational simulation. This simulation tracks probabilistically the composite design evolution from constituent materials and fabrication process, through composite mechanics, to structural components. Comparisons with experimental data are provided to illustrate selection of probabilistic design allowables, test methods/specimen guidelines, and identification of in situ versus pristine strength. For example, results show that: in situ fiber tensile strength is 90% of its pristine strength; flat-wise long-tapered specimens are most suitable for setting ply tensile strength allowables; a composite radome can be designed with a reliability of 0.999999; and laminate fatigue exhibits widespread scatter at 90% cyclic-stress to static-strength ratios.

  20. Formalizing Probabilistic Safety Claims

    NASA Technical Reports Server (NTRS)

    Herencia-Zapana, Heber; Hagen, George E.; Narkawicz, Anthony J.

    2011-01-01

    A safety claim for a system is a statement that the system, which is subject to hazardous conditions, satisfies a given set of properties. Following work by John Rushby and Bev Littlewood, this paper presents a mathematical framework that can be used to state and formally prove probabilistic safety claims. It also enables hazardous conditions, their uncertainties, and their interactions to be integrated into the safety claim. This framework provides a formal description of the probabilistic composition of an arbitrary number of hazardous conditions and their effects on system behavior. An example is given of a probabilistic safety claim for a conflict detection algorithm for aircraft in a 2D airspace. The motivation for developing this mathematical framework is that it can be used in an automated theorem prover to formally verify safety claims.
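
    A minimal instance of probabilistic composition of hazardous conditions, under the simplifying (and here purely illustrative) assumption that the hazards are independent; the framework above is far more general and also covers interacting hazards:

```python
def prob_any_hazard(probs):
    """Probability that at least one of several hazardous conditions
    occurs, assuming for illustration that they are independent:
    1 - prod(1 - p_i)."""
    survive = 1.0
    for p in probs:
        survive *= (1.0 - p)
    return 1.0 - survive

# The union bound sum(p_i) always dominates this value and requires no
# independence assumption -- a cheap conservative cross-check.
```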

  1. Device Strategies for Patients in INTERMACS Profiles 1 and 2 Cardiogenic Shock: Double Bridge With Extracorporeal Membrane Oxygenation and Initial Implant of More Durable Devices.

    PubMed

    Cheng, Richard; Ramzy, Danny; Azarbal, Babak; Arabia, Francisco A; Esmailian, Fardad; Czer, Lawrence S; Kobashigawa, Jon A; Moriguchi, Jaime D

    2017-03-01

    For Interagency Registry for Mechanically Assisted Circulatory Support profiles 1 and 2 cardiogenic shock patients initially placed on extracorporeal membrane oxygenation (ECMO), whether crossover to more durable devices is associated with increased survival, and its optimal timing, are not established. Profiles 1 and 2 patients placed on mechanical support were prospectively registered. Survival and successful hospital discharge were compared between patients placed on ECMO only, ECMO with early crossover, and ECMO with delayed crossover. Survival of patients directly implanted with non-ECMO devices was also reported. One hundred sixty-two patients were included. Mean age was 52.2 ± 13.8 years. Seventy-three of 162 (45.1%) were initiated on ECMO. Of these, 43 were supported with ECMO only, 11 were crossed over early (<4 days), and 19 were crossed over in a delayed fashion. Survival was different across groups (log-rank P < 0.002). In multivariate analysis, early crossover was associated with decreased mortality as compared with no crossover (hazard ratio [HR] 0.201, 95% confidence interval [95%CI] 0.058-0.697, P = 0.011) or with delayed crossover (HR 0.255, 95%CI 0.073-0.894, P = 0.033). Mortality was not different between delayed crossover and no crossover (P = 0.473). In patients with early crossover there were no deaths at 30 days, and 60-day survival was 90.0 ± 9.5%. Survival to hospital discharge was 72.8%. For patients directly implanted with non-ECMO devices, 30-day and 60-day survival was 90.9 ± 3.1% and 87.3 ± 3.8%, respectively, and survival to hospital discharge was 78.7%. Both initial implant of durable devices and a double bridge strategy were associated with improved outcomes. If the double bridge strategy is chosen, early crossover is associated with improved survival and successful hospital discharge.

  2. Probabilistic composite micromechanics

    NASA Technical Reports Server (NTRS)

    Stock, T. A.; Bellini, P. X.; Murthy, P. L. N.; Chamis, C. C.

    1988-01-01

    Probabilistic composite micromechanics methods are developed that simulate expected uncertainties in unidirectional fiber composite properties. These methods are in the form of computational procedures using Monte Carlo simulation. A graphite/epoxy unidirectional composite (ply) is studied to demonstrate fiber composite material properties at the micro level. Regression results are presented to show the relative correlation between predicted and response variables in the study.
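
    A minimal Monte Carlo sketch in the spirit of such ply-level simulation, propagating scatter through the rule of mixtures for the longitudinal modulus; the graphite/epoxy-like means and coefficients of variation below are illustrative assumptions, not the study's inputs:

```python
import random

def sample_longitudinal_modulus(rng):
    """One Monte Carlo draw of the ply longitudinal modulus from the
    rule of mixtures, E1 = Vf*Ef + (1 - Vf)*Em.  Means and scatter are
    illustrative graphite/epoxy-like values."""
    Ef = rng.gauss(230.0, 11.5)                      # fiber modulus, GPa
    Em = rng.gauss(3.5, 0.35)                        # matrix modulus, GPa
    Vf = min(max(rng.gauss(0.60, 0.02), 0.0), 1.0)   # fiber volume fraction
    return Vf * Ef + (1.0 - Vf) * Em

def modulus_stats(n=50_000, seed=7):
    """Mean and standard deviation of the simulated ply modulus."""
    rng = random.Random(seed)
    draws = [sample_longitudinal_modulus(rng) for _ in range(n)]
    mean = sum(draws) / n
    std = (sum((x - mean) ** 2 for x in draws) / (n - 1)) ** 0.5
    return mean, std
```

    Regressing the sampled modulus on the sampled primitive variables, as in the study, then ranks how much of the output scatter each constituent property contributes.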

  3. Probabilistic Causation without Probability.

    ERIC Educational Resources Information Center

    Holland, Paul W.

    The failure of Hume's "constant conjunction" to describe apparently causal relations in science and everyday life has led to various "probabilistic" theories of causation of which the study by P. C. Suppes (1970) is an important example. A formal model that was developed for the analysis of comparative agricultural experiments…

  4. [Historical vision of shock].

    PubMed

    Dosne Pasqualini, C

    1998-01-01

    The concept of shock and its close relationship with that of stress dates back to the experiments of Hans Selye initiated in 1936 at McGill University in Montreal, with whom I collaborated between 1939 and 1942. It was demonstrated that the General Adaptation Syndrome begins with an Alarm Reaction, which consists of a Stage of Shock and one of Counter-Shock, followed by a Stage of Adaptation and finally a Stage of Exhaustion. My Ph.D. thesis concluded that shock was due to an adrenal insufficiency, postulating that active metabolic processes drain the body of certain essential compounds the lack of which causes shock. My interest in the role of the glucose metabolism in shock led me to work with Bernardo Houssay in 1942 at the Institute of Physiology of the University of Buenos Aires and in 1944 with C.N.H. Long at Yale University. There I developed a method for the induction of hemorrhagic shock in the guinea pig with 94% lethality; curiously, the administration of 200 mg of ascorbic acid prevented death. Upon my return to Buenos Aires, these results were confirmed and moreover, it was demonstrated that the administration of cortisone led to 40% survival of the animals while desoxycorticosterone had no effect. At the time, no explanation was available but today, half a century later, this Symposium should be able to explain the mechanisms leading to death by hemorrhagic shock.

  5. Early Treatment in Shock

    DTIC Science & Technology

    2007-06-01

    1471–2210/2/7. Accessed April 15, 2005. 20. Wang CJ, Lee MJ, Chang MC, Lin JK. Inhibition of tumor promotion in benzo[a]pyrene-initiated CD-1 mouse...model. Deliverable: A panel of genes that are reproducibly altered in white blood cells and in liver and muscle by shock and resuscitation. 1. To...Deliverable: Coordinated with objective #1, a panel of genes that are reproducibly altered in white blood cells and in liver and muscle by shock and

  6. Probabilistic authenticated quantum dialogue

    NASA Astrophysics Data System (ADS)

    Hwang, Tzonelih; Luo, Yi-Ping

    2015-12-01

This work proposes a probabilistic authenticated quantum dialogue (PAQD) based on Bell states with the following notable features. (1) In our proposed scheme, the dialogue is encoded in a probabilistic way, i.e., the same messages can be encoded into different quantum states, whereas in the state-of-the-art authenticated quantum dialogue (AQD), the dialogue is encoded in a deterministic way. (2) The pre-shared secret key between the two communicants can be reused without any security loophole. (3) Each dialogue in the proposed PAQD can be exchanged within only one step of quantum communication and one step of classical communication, whereas in the state-of-the-art AQD protocols, both communicants have to run a QKD protocol for each dialogue, and each dialogue requires multiple quantum as well as classical communication steps. (4) The proposed scheme can resist the man-in-the-middle attack, the modification attack, and other well-known attacks.

  7. Probabilistic Fatigue: Computational Simulation

    NASA Technical Reports Server (NTRS)

    Chamis, Christos C.

    2002-01-01

Fatigue is a primary consideration in the design of aerospace structures for long-term durability and reliability. Several types of fatigue must be considered in the design, including low cycle, high cycle, and combined fatigue under different cyclic loading conditions - for example, mechanical, thermal, erosion, etc. The traditional approach to evaluating fatigue has been to conduct many tests in the various service-environment conditions that the component will be subjected to in a specific design. This approach is reasonable and robust for that specific design; however, it is time consuming, costly, and in general must be repeated for designs in different operating conditions. Recent research has demonstrated that fatigue of structural components/structures can be evaluated by computational simulation based on a novel paradigm. The main features of this novel paradigm are progressive telescoping scale mechanics, progressive scale substructuring, and progressive structural fracture, encompassed with probabilistic simulation. The generic features of this approach are to probabilistically telescope local material point damage all the way up to the structural component and to probabilistically decompose structural loads and boundary conditions all the way down to the material point. Additional features include a multifactor interaction model that probabilistically describes material property evolution, including changes due to various cyclic loads and other mutually interacting effects. The objective of the proposed paper is to describe this novel paradigm of computational simulation and present typical fatigue results for structural components. Additionally, the advantages, versatility, and inclusiveness of computational simulation versus testing are discussed. Guidelines for complementing simulated results with strategic testing are outlined. Typical results are shown for computational simulation of fatigue in metallic composite structures to demonstrate the

  8. Geothermal probabilistic cost study

    NASA Astrophysics Data System (ADS)

    Orren, L. H.; Ziman, G. M.; Jones, S. C.; Lee, T. K.; Noll, R.; Wilde, L.; Sadanand, V.

    1981-08-01

A tool is presented to quantify the risks of geothermal projects, the Geothermal Probabilistic Cost Model (GPCM). The GPCM model was used to evaluate a geothermal reservoir for a binary-cycle electric plant at Heber, California. Three institutional aspects of the geothermal risk which can shift the risk among different agents were analyzed. The leasing of geothermal land, contracting between the producer and the user of the geothermal heat, and insurance against faulty performance were examined.

  9. Probabilistic Model Development

    NASA Technical Reports Server (NTRS)

Adams, James H., Jr.

    2010-01-01

Objective: Develop a Probabilistic Model for the Solar Energetic Particle Environment. Develop a tool to provide a reference solar particle radiation environment that: 1) Will not be exceeded at a user-specified confidence level; 2) Will provide reference environments for: a) Peak flux; b) Event-integrated fluence; and c) Mission-integrated fluence. The reference environments will consist of elemental energy spectra for protons, helium, and heavier ions.
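The "not exceeded at a user-specified confidence level" idea amounts to an empirical percentile over sampled environments. A minimal sketch, with invented fluence samples (not SEP data) and an illustrative index convention:

```python
def reference_level(samples, confidence):
    """Level not exceeded at the given confidence: an empirical percentile.

    Illustrative sketch of a confidence-level reference environment;
    the fluence samples and index convention are assumptions, not the
    model described in the record.
    """
    ordered = sorted(samples)
    k = min(int(confidence * len(ordered)), len(ordered) - 1)
    return ordered[k]

# Invented event-integrated fluences (arbitrary units).
event_fluences = [3.0, 1.2, 9.5, 0.4, 2.2, 6.8, 0.9, 4.1, 1.7, 12.0]
level95 = reference_level(event_fluences, 0.95)  # conservative upper level
```

In a real model the samples would come from a fitted distribution of solar particle events rather than a raw list.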

  10. Geothermal probabilistic cost study

    NASA Technical Reports Server (NTRS)

    Orren, L. H.; Ziman, G. M.; Jones, S. C.; Lee, T. K.; Noll, R.; Wilde, L.; Sadanand, V.

    1981-01-01

A tool is presented to quantify the risks of geothermal projects, the Geothermal Probabilistic Cost Model (GPCM). The GPCM model was used to evaluate a geothermal reservoir for a binary-cycle electric plant at Heber, California. Three institutional aspects of the geothermal risk which can shift the risk among different agents were analyzed. The leasing of geothermal land, contracting between the producer and the user of the geothermal heat, and insurance against faulty performance were examined.

  11. Probabilistic Resilience in Hidden Markov Models

    NASA Astrophysics Data System (ADS)

    Panerati, Jacopo; Beltrame, Giovanni; Schwind, Nicolas; Zeltner, Stefan; Inoue, Katsumi

    2016-05-01

Originally defined in the context of ecological systems and environmental sciences, resilience has grown to be a property of major interest for the design and analysis of many other complex systems: resilient networks and robotic systems offer the desirable capability of absorbing disruption and transforming in response to external shocks, while still providing the services they were designed for. Starting from an existing formalization of resilience for constraint-based systems, we develop a probabilistic framework based on hidden Markov models. In doing so, we introduce two important new features: stochastic evolution and partial observability. Using our framework, we formalize a methodology for evaluating the probabilities associated with generic properties, describe an efficient algorithm for the computation of its essential inference step, and show that its complexity is comparable to other state-of-the-art inference algorithms.
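The essential inference step in a hidden Markov model is typically the forward recursion. A minimal, illustrative sketch (not the authors' algorithm; the two-state "nominal"/"disrupted" chain and all numbers are invented):

```python
import numpy as np

def forward(obs, A, B, pi):
    """Forward algorithm: probability of an observation sequence under an HMM.

    A: state transition matrix, B: emission matrix (states x symbols),
    pi: initial state distribution. Illustrative sketch only.
    """
    alpha = pi * B[:, obs[0]]          # initialize with first observation
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]  # propagate, then weight by emission
    return alpha.sum()

# Toy chain: state 0 "nominal", state 1 "disrupted" (invented values).
A = np.array([[0.9, 0.1],
              [0.3, 0.7]])
B = np.array([[0.8, 0.2],
              [0.1, 0.9]])
pi = np.array([0.95, 0.05])
p = forward([0, 0, 1], A, B, pi)  # probability of seeing symbols 0, 0, 1
```

The recursion runs in O(T n^2) for T observations and n states, which is the complexity baseline the abstract compares against.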

  12. Echocardiography in shock management.

    PubMed

    McLean, Anthony S

    2016-08-20

Echocardiography is pivotal in the diagnosis and management of the shocked patient. Important characteristics in the setting of shock are that it is non-invasive and can be rapidly applied. In the acute situation a basic study often yields immediate results allowing for the initiation of therapy, while a follow-up advanced study brings the advantage of further refining the diagnosis and providing an in-depth hemodynamic assessment. Competency in basic critical care echocardiography is now regarded as a mandatory part of critical care training, with clear guidelines available. The majority of pathologies found in shocked patients are readily identified using basic level 2D and M-mode echocardiography. A more comprehensive diagnosis can be achieved with advanced levels of competency, for which practice guidelines are also now available. Hemodynamic evaluation and ongoing monitoring are possible with advanced levels of competency, which include the use of colour Doppler, spectral Doppler, and tissue Doppler imaging, and occasionally the use of more recent technological advances such as 3D or speckle tracking. The four core types of shock (cardiogenic, hypovolemic, obstructive, and vasoplegic) can readily be identified by echocardiography. Even within each of the main headings contained in the shock classification, a variety of pathologies may be the cause, and echocardiography will differentiate which of these is responsible. Increasingly, as a result of more complex and elderly patients, the shock may be multifactorial, such as a combination of cardiogenic and septic shock or hypovolemia and ventricular outflow obstruction. The diagnostic benefit of echocardiography in the shocked patient is obvious. The increasing prevalence of critical care physicians experienced in advanced techniques means echocardiography often supplants the need for more invasive hemodynamic assessment and monitoring in shock.

  13. Topics in Probabilistic Judgment Aggregation

    ERIC Educational Resources Information Center

    Wang, Guanchun

    2011-01-01

    This dissertation is a compilation of several studies that are united by their relevance to probabilistic judgment aggregation. In the face of complex and uncertain events, panels of judges are frequently consulted to provide probabilistic forecasts, and aggregation of such estimates in groups often yield better results than could have been made…

  14. Time Analysis for Probabilistic Workflows

    SciTech Connect

    Czejdo, Bogdan; Ferragut, Erik M

    2012-01-01

There are many theoretical and practical results in the area of workflow modeling, especially when more formal workflows are used. In this paper we focus on probabilistic workflows. We show algorithms for time computations in probabilistic workflows. With the time of activities modeled more precisely, we can improve work cooperation and its analysis, including simulation and visualization.
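A minimal sketch of this kind of time computation, assuming an acyclic workflow with probabilistic branching (the graph, probabilities, and durations are invented for illustration, not taken from the paper):

```python
def expected_time(node, graph):
    """Expected completion time of an acyclic probabilistic workflow.

    graph maps each activity to (duration, [(probability, successor), ...]);
    an empty successor list marks a terminal activity. Illustrative sketch
    only; assumes the branch probabilities at each activity sum to 1.
    """
    duration, branches = graph[node]
    return duration + sum(p * expected_time(nxt, graph) for p, nxt in branches)

# Invented workflow: design succeeds (0.7) or needs rework first (0.3).
workflow = {
    "design": (5.0, [(0.7, "build"), (0.3, "rework")]),
    "rework": (2.0, [(1.0, "build")]),
    "build":  (3.0, []),
}
t = expected_time("design", workflow)  # 5 + 0.7*3 + 0.3*(2 + 3) = 8.6
```

Cyclic workflows (repeated rework loops) would instead require solving a linear system or bounding the recursion.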

  15. Quantum probabilistic logic programming

    NASA Astrophysics Data System (ADS)

    Balu, Radhakrishnan

    2015-05-01

We describe a quantum mechanics based logic programming language that supports Horn clauses, random variables, and covariance matrices to express and solve problems in probabilistic logic. The Horn clauses of the language wrap random variables, including infinite valued, to express probability distributions and statistical correlations, a powerful feature to capture relationships between distributions that are not independent. The expressive power of the language is based on a mechanism to implement statistical ensembles and to solve the underlying SAT instances using quantum mechanical machinery. We exploit the fact that classical random variables have quantum decompositions to build the Horn clauses. We establish the semantics of the language in a rigorous fashion by considering an existing probabilistic logic language called PRISM with classical probability measures defined on the Herbrand base and extending it to the quantum context. In the classical case H-interpretations form the sample space and probability measures defined on them lead to a consistent definition of probabilities for well-formed formulae. In the quantum counterpart, we define probability amplitudes on H-interpretations facilitating the model generations and verifications via quantum mechanical superpositions and entanglements. We cast the well-formed formulae of the language as quantum mechanical observables thus providing an elegant interpretation for their probabilities. We discuss several examples to combine statistical ensembles and predicates of first order logic to reason with situations involving uncertainty.

  16. Is the basic conditional probabilistic?

    PubMed

    Goodwin, Geoffrey P

    2014-06-01

    Nine experiments examined whether individuals treat the meaning of basic conditional assertions as deterministic or probabilistic. In Experiments 1-4, participants were presented with either probabilistic or deterministic relations, which they had to describe with a conditional. These experiments consistently showed that people tend only to use the basic if p then q construction to describe deterministic relations between antecedent and consequent, whereas they use a probabilistically qualified construction, if p then probably q, to describe probabilistic relations-suggesting that the default interpretation of the conditional is deterministic. Experiments 5 and 6 showed that when directly asked, individuals typically report that conditional assertions admit no exceptions (i.e., they are seen as deterministic). Experiments 7-9 showed that individuals judge the truth of conditional assertions in accordance with this deterministic interpretation. Together, these results pose a challenge to probabilistic accounts of the meaning of conditionals and support mental models, formal rules, and suppositional accounts.

  17. Probabilistic Structural Analysis Theory Development

    NASA Technical Reports Server (NTRS)

    Burnside, O. H.

    1985-01-01

The objective of the Probabilistic Structural Analysis Methods (PSAM) project is to develop analysis techniques and computer programs for predicting the probabilistic response of critical structural components for current and future space propulsion systems. This technology will play a central role in establishing system performance and durability. The first year's technical activity is concentrating on probabilistic finite element formulation strategy and code development. Work is also in progress to survey critical materials and Space Shuttle main engine components. The probabilistic finite element computer program NESSUS (Numerical Evaluation of Stochastic Structures Under Stress) is being developed. The final probabilistic code will have, in the general case, the capability of performing nonlinear dynamic analysis of stochastic structures. It is the goal of the approximate methods effort to increase problem solving efficiency relative to finite element methods by using energy methods to generate trial solutions which satisfy the structural boundary conditions. These approximate methods will be less computer intensive relative to the finite element approach.

  18. TIMING OF SHOCK WAVES

    DOEpatents

    Tuck, J.L.

    1955-03-01

This patent relates to means for ascertaining the instant of arrival of a shock wave in an explosive charge, and apparatus utilizing this means to coordinate the timing of two operations involving a short interval of time. A pair of spaced electrodes are inserted along the line of an explosive train with a voltage applied across them which is insufficient to cause discharge. When it is desired to initiate operation of a device at the time the explosive shock wave reaches a particular point on the explosive line, the device having an inherent time delay, the electrodes are located ahead of that point such that the ionization of the area between the electrodes caused by the traveling explosive shock wave sends a signal to initiate operation of the device, causing it to operate at the proper time. The operated device may be photographic equipment consisting of an x-ray illuminating tube.

  19. Shock Surface Undulation and Particle Acceleration at Oblique Shocks

    NASA Astrophysics Data System (ADS)

    Krauss-Varban, D.; Li, Y.; Luhmann, J. G.

    2006-12-01

Considering the average Parker spiral magnetic field configuration, CME-driven interplanetary (IP) shocks within 1 AU should have oblique portions over much of their domain. Indeed, CME-driven shocks observed close to Earth are often oblique. However, it is well known that the standard diffusive shock acceleration mechanism, which relies on self-consistent wave generation via upstream propagating ions and their scattering, becomes increasingly inefficient with greater shock normal angle. Not only is a higher threshold energy required for the ions to leave the shock upstream, but also, approximately-parallel propagating waves are more quickly convected back into the shock, and the growth rate for waves propagating normal to the shock (the ones with the largest convective growth) decreases. As a result, typical, small-scale hybrid simulations of oblique shocks only show a dilute upstream beam, similar to what is often observed at the oblique Earth's bow shock - and no scattered, highly-energized ions. On the other hand, there are many "energetic storm particle" (ESP) events associated with oblique shocks that have significant fluxes of energetic ions. Recently, we have found that when run for a long time, our hybrid simulations (kinetic ions, electron fluid) show that the initial, weak beam is sufficient to generate compressive, steepening upstream waves. These waves are capable of disturbing the shock surface, resulting in an undulation that is propagating along the surface and growing in amplitude over time. The process is akin to that of the well-known reformation occurring at sufficiently strong quasi-parallel shocks. However, here the perturbations require at least two dimensions, show a strong spatial correlation, and travel along the shock surface. This process not only leads to enhanced ion acceleration, but also means that the shock characteristics are difficult to pinpoint observationally: both the local jumps and the shock normal angle are highly variable.

  20. Detonation Shock Radius Experiments.

    NASA Astrophysics Data System (ADS)

    Lambert, David; Debes, Joshua; Stewart, Scott; Yoo, Sunhee

    2007-06-01

A previous passover experiment [1] was designed to create a complex detonation transient used in validating a reduced, asymptotically derived description of detonation shock dynamics (DSD). An underlying question remained on determining the location of the initial detonation shock radius to start the DSD simulation, with respect to the dynamic response of the initiation system's coupling to the main charge. This paper concentrates on determining the initial shock radius required for such DSD-governed problems. 'Cut-back' experiments of PBX-9501 were conducted using an initiation system that sought to optimize the transferred detonation to the desired constant-radius, hemispherical shape. Streak camera techniques captured the breakout on three of the prism's surfaces for time-of-arrival data. The paper includes comparisons to simulations using constant volume explosion and high pressure hot spots. The results of the experiments and simulation efforts provide fundamental design considerations for actual explosive systems and verify necessary conditions from which the asymptotic theory of DSD may apply. [1] Lambert, D., Stewart, D. Scott, Yoo, S., and Wescott, B., "Experimental Validation of Detonation Shock Dynamics in Condensed Explosives," J. of Fluid Mech., Vol. 546, pp. 227-253 (2006).

  1. Probabilistic retinal vessel segmentation

    NASA Astrophysics Data System (ADS)

    Wu, Chang-Hua; Agam, Gady

    2007-03-01

    Optic fundus assessment is widely used for diagnosing vascular and non-vascular pathology. Inspection of the retinal vasculature may reveal hypertension, diabetes, arteriosclerosis, cardiovascular disease and stroke. Due to various imaging conditions retinal images may be degraded. Consequently, the enhancement of such images and vessels in them is an important task with direct clinical applications. We propose a novel technique for vessel enhancement in retinal images that is capable of enhancing vessel junctions in addition to linear vessel segments. This is an extension of vessel filters we have previously developed for vessel enhancement in thoracic CT scans. The proposed approach is based on probabilistic models which can discern vessels and junctions. Evaluation shows the proposed filter is better than several known techniques and is comparable to the state of the art when evaluated on a standard dataset. A ridge-based vessel tracking process is applied on the enhanced image to demonstrate the effectiveness of the enhancement filter.

  2. Probabilistic Fiber Composite Micromechanics

    NASA Technical Reports Server (NTRS)

    Stock, Thomas A.

    1996-01-01

    Probabilistic composite micromechanics methods are developed that simulate expected uncertainties in unidirectional fiber composite properties. These methods are in the form of computational procedures using Monte Carlo simulation. The variables in which uncertainties are accounted for include constituent and void volume ratios, constituent elastic properties and strengths, and fiber misalignment. A graphite/epoxy unidirectional composite (ply) is studied to demonstrate fiber composite material property variations induced by random changes expected at the material micro level. Regression results are presented to show the relative correlation between predictor and response variables in the study. These computational procedures make possible a formal description of anticipated random processes at the intra-ply level, and the related effects of these on composite properties.
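The Monte Carlo idea can be sketched with a rule-of-mixtures model whose inputs are random. All distributions and values below are illustrative stand-ins, not the study's calibrated graphite/epoxy data:

```python
import random

def ply_modulus(rng):
    """Sample a longitudinal ply modulus (GPa) via the rule of mixtures.

    Fiber modulus, matrix modulus, and fiber volume ratio are drawn from
    normal distributions; values are illustrative, not measured data.
    """
    Ef = rng.gauss(230.0, 10.0)                      # fiber modulus (GPa)
    Em = rng.gauss(3.5, 0.2)                         # matrix modulus (GPa)
    Vf = min(max(rng.gauss(0.60, 0.03), 0.0), 1.0)   # fiber volume ratio
    return Vf * Ef + (1.0 - Vf) * Em                 # rule of mixtures

rng = random.Random(42)
samples = [ply_modulus(rng) for _ in range(20000)]
mean = sum(samples) / len(samples)  # scatter in the samples reflects input uncertainty
```

In the study's spirit, one would also regress the sampled responses on the sampled inputs to rank which constituent uncertainty drives the property variation.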

  3. Novel probabilistic neuroclassifier

    NASA Astrophysics Data System (ADS)

    Hong, Jiang; Serpen, Gursel

    2003-09-01

    A novel probabilistic potential function neural network classifier algorithm to deal with classes which are multi-modally distributed and formed from sets of disjoint pattern clusters is proposed in this paper. The proposed classifier has a number of desirable properties which distinguish it from other neural network classifiers. A complete description of the algorithm in terms of its architecture and the pseudocode is presented. Simulation analysis of the newly proposed neuro-classifier algorithm on a set of benchmark problems is presented. Benchmark problems tested include IRIS, Sonar, Vowel Recognition, Two-Spiral, Wisconsin Breast Cancer, Cleveland Heart Disease and Thyroid Gland Disease. Simulation results indicate that the proposed neuro-classifier performs consistently better for a subset of problems for which other neural classifiers perform relatively poorly.

  4. Probabilistic Mesomechanical Fatigue Model

    NASA Technical Reports Server (NTRS)

    Tryon, Robert G.

    1997-01-01

A probabilistic mesomechanical fatigue life model is proposed to link the microstructural material heterogeneities to the statistical scatter in the macrostructural response. The macrostructure is modeled as an ensemble of microelements. Cracks nucleate within the microelements and grow from the microelements to final fracture. Variations of the microelement properties are defined using statistical parameters. A micromechanical slip band decohesion model is used to determine the crack nucleation life and size. A crack tip opening displacement model is used to determine the small crack growth life and size. Paris law is used to determine the long crack growth life. The models are combined in a Monte Carlo simulation to determine the statistical distribution of total fatigue life for the macrostructure. The modeled response is compared to trends in experimental observations from the literature.
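The staged Monte Carlo structure can be sketched by summing random stage lives. Lognormal distributions here are generic stand-ins for the slip-band, crack-tip-opening, and Paris-law models of the abstract, with invented parameters:

```python
import math
import random

def fatigue_life(rng):
    """Total fatigue life (cycles) as the sum of three random stages.

    Lognormal stage lives are illustrative stand-ins for the nucleation,
    small-crack, and long-crack growth models; parameters are invented.
    """
    nucleation = rng.lognormvariate(math.log(1e4), 0.5)   # crack nucleation
    small_crack = rng.lognormvariate(math.log(5e3), 0.4)  # small crack growth
    long_crack = rng.lognormvariate(math.log(2e3), 0.3)   # long crack growth
    return nucleation + small_crack + long_crack

rng = random.Random(0)
lives = sorted(fatigue_life(rng) for _ in range(10000))
median_life = lives[len(lives) // 2]  # one summary of the scatter in total life
```

The full distribution of `lives` (not just the median) is what would be compared against experimental scatter.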

  5. Incorporating psychological influences in probabilistic cost analysis

    SciTech Connect

    Kujawski, Edouard; Alvaro, Mariana; Edwards, William

    2004-01-08

Today's typical probabilistic cost analysis assumes an "ideal" project that is devoid of the human and organizational considerations that heavily influence the success and cost of real-world projects. In the real world "Money Allocated Is Money Spent" (MAIMS principle); cost underruns are rarely available to protect against cost overruns while task overruns are passed on to the total project cost. Realistic cost estimates therefore require a modified probabilistic cost analysis that simultaneously models the cost management strategy including budget allocation. Psychological influences such as overconfidence in assessing uncertainties and dependencies among cost elements and risks are other important considerations that are generally not addressed. It should then be no surprise that actual project costs often exceed the initial estimates and are delivered late and/or with a reduced scope. This paper presents a practical probabilistic cost analysis model that incorporates recent findings in human behavior and judgment under uncertainty, dependencies among cost elements, the MAIMS principle, and project management practices. Uncertain cost elements are elicited from experts using the direct fractile assessment method and fitted with three-parameter Weibull distributions. The full correlation matrix is specified in terms of two parameters that characterize correlations among cost elements in the same and in different subsystems. The analysis is readily implemented using standard Monte Carlo simulation tools such as @Risk and Crystal Ball®. The analysis of a representative design and engineering project substantiates that today's typical probabilistic cost analysis is likely to severely underestimate project cost for probability of success values of importance to contractors and procuring activities. The proposed approach provides a framework for developing a viable cost management strategy for allocating baseline budgets and contingencies. Given the
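The MAIMS principle can be illustrated with a small Monte Carlo sketch: a task's booked cost is at least its budget (underruns are spent, overruns pass through), so the expected total exceeds the baseline. The budgets and Weibull parameters are invented, not the paper's calibrated three-parameter fits, and correlations are omitted:

```python
import random

def project_cost(rng, budgets, shape=2.0):
    """Total project cost under the MAIMS principle for one Monte Carlo trial.

    Each task's actual cost is Weibull-distributed with scale equal to its
    budget (illustrative assumption); the booked cost is max(actual, budget)
    because allocated money is spent even when the task underruns.
    """
    total = 0.0
    for budget in budgets:
        actual = rng.weibullvariate(budget, shape)  # scale, then shape
        total += max(actual, budget)                # MAIMS: no recovered underruns
    return total

rng = random.Random(7)
budgets = [100.0, 250.0, 150.0]               # baseline total: 500
costs = [project_cost(rng, budgets) for _ in range(20000)]
mean_cost = sum(costs) / len(costs)           # exceeds 500 on average
```

Even with symmetric-looking task uncertainty, the asymmetry introduced by `max(actual, budget)` pushes the expected total above the baseline, which is the paper's core argument for modeling the cost management strategy explicitly.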

  6. Theoretical Insight into Shocked Gases

    SciTech Connect

    Leiding, Jeffery Allen

    2016-09-29

    I present the results of statistical mechanical calculations on shocked molecular gases. This work provides insight into the general behavior of shock Hugoniots of gas phase molecular targets with varying initial pressures. The dissociation behavior of the molecules is emphasized. Impedance matching calculations are performed to determine the maximum degree of dissociation accessible for a given flyer velocity as a function of initial gas pressure.
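An impedance-matching calculation of the kind mentioned can be sketched with the standard linear-Us Hugoniot, Us = c0 + s*up and P = rho0*Us*up. The material constants below are approximate aluminum values used purely for illustration:

```python
def hugoniot_pressure(rho0, c0, s, up):
    """P = rho0 * Us * up with the linear Us = c0 + s*up Hugoniot (SI units)."""
    return rho0 * (c0 + s * up) * up

def impedance_match(flyer, target, v):
    """Particle velocity in the target when a flyer strikes it at speed v.

    Solves P_target(u) = P_flyer(v - u) by bisection; the left side grows
    and the right side shrinks in u, so the difference is monotone.
    Material tuples are (rho0, c0, s); values below are illustrative.
    """
    lo, hi = 0.0, v
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        diff = hugoniot_pressure(*target, mid) - hugoniot_pressure(*flyer, v - mid)
        if diff > 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Approximate aluminum constants: rho0 (kg/m^3), c0 (m/s), s (dimensionless).
al = (2700.0, 5350.0, 1.34)
u = impedance_match(al, al, 1000.0)  # symmetric impact: u = v/2 exactly
```

For a gas target one would replace the target Hugoniot with one computed from the statistical mechanical equation of state, including dissociation.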

  7. Probabilistic brains: knowns and unknowns

    PubMed Central

    Pouget, Alexandre; Beck, Jeffrey M; Ma, Wei Ji; Latham, Peter E

    2015-01-01

    There is strong behavioral and physiological evidence that the brain both represents probability distributions and performs probabilistic inference. Computational neuroscientists have started to shed light on how these probabilistic representations and computations might be implemented in neural circuits. One particularly appealing aspect of these theories is their generality: they can be used to model a wide range of tasks, from sensory processing to high-level cognition. To date, however, these theories have only been applied to very simple tasks. Here we discuss the challenges that will emerge as researchers start focusing their efforts on real-life computations, with a focus on probabilistic learning, structural learning and approximate inference. PMID:23955561

  8. Probabilistic Design of Composite Structures

    NASA Technical Reports Server (NTRS)

    Chamis, Christos C.

    2006-01-01

    A formal procedure for the probabilistic design evaluation of a composite structure is described. The uncertainties in all aspects of a composite structure (constituent material properties, fabrication variables, structural geometry, and service environments, etc.), which result in the uncertain behavior in the composite structural responses, are included in the evaluation. The probabilistic evaluation consists of: (1) design criteria, (2) modeling of composite structures and uncertainties, (3) simulation methods, and (4) the decision-making process. A sample case is presented to illustrate the formal procedure and to demonstrate that composite structural designs can be probabilistically evaluated with accuracy and efficiency.

  9. Common Difficulties with Probabilistic Reasoning.

    ERIC Educational Resources Information Center

    Hope, Jack A.; Kelly, Ivan W.

    1983-01-01

    Several common errors reflecting difficulties in probabilistic reasoning are identified, relating to ambiguity, previous outcomes, sampling, unusual events, and estimating. Knowledge of these mistakes and interpretations may help mathematics teachers understand the thought processes of their students. (MNS)

  10. A Probabilistic Ontology Development Methodology

    DTIC Science & Technology

    2014-06-01

    to have a tool guiding the user on the steps necessary to create a probabilistic ontology and link this documentation to its implementation … [4...extension that is beyond the scope of this work and includes methods such as ONIONS , FCA-Merge, and PROMPT. The interested reader may find these...construction “It would be interesting to have a tool guiding the user on the steps necessary to create a probabilistic ontology and link this

  11. Probabilistic Risk Assessment: A Bibliography

    NASA Technical Reports Server (NTRS)

    2000-01-01

Probabilistic risk analysis is an integration of failure modes and effects analysis (FMEA), fault tree analysis, and other techniques to assess the potential for failure and to find ways to reduce risk. This bibliography references 160 documents in the NASA STI Database that contain the major concepts, probabilistic risk assessment, risk and probability theory, in the basic index or major subject terms. An abstract is included with most citations, followed by the applicable subject terms.

  12. Molecular shock response of explosives: electronic absorption spectroscopy

    SciTech Connect

McGrane, Shawn D; Moore, David S; Whitley, Von H; Bolme, Cindy A; Eakins, Daniel E

    2009-01-01

    Electronic absorption spectroscopy in the range 400-800 nm was coupled to ultrafast laser generated shocks to begin addressing the question of the extent to which electronic excitations are involved in shock induced reactions. Data are presented on shocked polymethylmethacrylate (PMMA) thin films and single crystal pentaerythritol tetranitrate (PETN). Shocked PMMA exhibited thin film interference effects from the shock front. Shocked PETN exhibited interference from the shock front as well as broadband increased absorption. Relation to shock initiation hypotheses and the need for time dependent absorption data (future experiments) is briefly discussed.

  13. Purification and initial characterization of the 71-kilodalton rat heat-shock protein and its cognate as fatty acid binding proteins.

    PubMed

    Guidon, P T; Hightower, L E

    1986-06-03

The major rat heat-shock (stress) protein and its cognate were purified to electrophoretic homogeneity from livers of heat-shocked rats. Both proteins exhibited similar behavior on a variety of column chromatography matrices but were separable by preparative isoelectric focusing under nondenaturing conditions by virtue of a 0.2 pH unit difference in isoelectric point. Both purified proteins had similar physical properties, suggesting the possibility that they may have similar biological functions as well. Both proteins were homodimers under nondissociative conditions (Mr 150 000) with isoelectric points of 5.0 (cognate) and 5.2 (major stress protein). After denaturation, both proteins had an increase in isoelectric point of 0.6 pH unit, and the resulting polypeptide chains had apparent molecular weights of 73 000 (cognate) and 71 000 (major stress protein). Similarities in the electrophoretic properties of these two proteins and serum albumin, which also undergoes a large basic shift in isoelectric point due to loss of fatty acids and conformational changes accompanying denaturation, prompted us to search for lipids associated with the purified 71-kilodalton stress protein and its cognate. Thin-layer chromatography of chloroform/methanol extracts of these two proteins revealed nonesterified fatty acids bound to both proteins. Palmitic acid, stearic acid, and a small amount of myristic acid were identified by gas chromatography/mass spectrometry. Both proteins contained approximately four molecules of fatty acid per dimer with palmitate and stearate present in a one to one molar ratio. Possible roles of the major stress protein and its cognate as fatty acid-associated proteins in cellular responses to stress are discussed.

  14. Ensemble postprocessing for probabilistic quantitative precipitation forecasts

    NASA Astrophysics Data System (ADS)

    Bentzien, S.; Friederichs, P.

    2012-12-01

Precipitation is one of the most difficult weather variables to predict in hydrometeorological applications. In order to assess the uncertainty inherent in deterministic numerical weather prediction (NWP), meteorological services around the globe develop ensemble prediction systems (EPS) based on high-resolution NWP systems. With non-hydrostatic model dynamics and without parameterization of deep moist convection, high-resolution NWP models are able to describe convective processes in more detail and provide more realistic mesoscale structures. However, precipitation forecasts are still affected by displacement errors, systematic biases and fast error growth on small scales. Probabilistic guidance can be achieved from an ensemble setup which accounts for model error and uncertainty of initial and boundary conditions. The German Meteorological Service (Deutscher Wetterdienst, DWD) provides such an ensemble system based on the German-focused limited-area model COSMO-DE. With a horizontal grid-spacing of 2.8 km, COSMO-DE is the convection-permitting high-resolution part of the operational model chain at DWD. The COSMO-DE-EPS consists of 20 realizations of COSMO-DE, driven by initial and boundary conditions derived from 4 global models and 5 perturbations of model physics. Ensemble systems like COSMO-DE-EPS are often limited with respect to ensemble size due to the immense computational costs. As a consequence, they can be biased and exhibit insufficient ensemble spread, and probabilistic forecasts may not be well calibrated. In this study, probabilistic quantitative precipitation forecasts are derived from COSMO-DE-EPS and evaluated at more than 1000 rain gauges located all over Germany. COSMO-DE-EPS is a frequently updated ensemble system, initialized 8 times a day. We use the time-lagged approach to inexpensively increase ensemble spread, which results in more reliable forecasts especially for extreme precipitation events. Moreover, we will show that statistical
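A raw ensemble exceedance probability, and the effect of pooling a time-lagged run, can be sketched as follows. The member values are invented for illustration, not COSMO-DE-EPS output:

```python
def exceedance_probability(members, threshold):
    """Fraction of ensemble members exceeding a precipitation threshold.

    A raw ensemble relative frequency; pooling members from an earlier
    model start (time-lagging) enlarges the sample, as in the abstract.
    """
    return sum(m > threshold for m in members) / len(members)

# Invented 20-member precipitation forecasts (mm) for one gauge location.
latest_run = [0.0, 0.2, 1.5, 3.0, 0.8, 2.2, 0.0, 4.1, 0.1, 1.0,
              0.0, 0.5, 2.8, 0.3, 1.9, 0.0, 3.3, 0.7, 0.0, 1.2]
lagged_run = [0.1, 0.0, 2.0, 2.6, 1.1, 0.0, 3.8, 0.4, 0.9, 1.6,
              0.0, 0.2, 2.4, 0.0, 1.4, 3.1, 0.6, 0.0, 1.8, 0.3]

p20 = exceedance_probability(latest_run, 1.0)               # 20 members
p40 = exceedance_probability(latest_run + lagged_run, 1.0)  # time-lagged, 40 members
```

Statistical postprocessing (calibration) would then adjust such raw frequencies against rain gauge observations; this sketch shows only the ensemble-frequency step.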

  15. Probabilistic theories with purification

    SciTech Connect

    Chiribella, Giulio; D'Ariano, Giacomo Mauro; Perinotti, Paolo

    2010-06-15

    We investigate general probabilistic theories in which every mixed state has a purification, unique up to reversible channels on the purifying system. We show that the purification principle is equivalent to the existence of a reversible realization of every physical process, that is, to the fact that every physical process can be regarded as arising from a reversible interaction of the system with an environment, which is eventually discarded. From the purification principle we also construct an isomorphism between transformations and bipartite states that possesses all structural properties of the Choi-Jamiolkowski isomorphism in quantum theory. Such an isomorphism allows one to prove most of the basic features of quantum theory, like, e.g., existence of pure bipartite states giving perfect correlations in independent experiments, no information without disturbance, no joint discrimination of all pure states, no cloning, teleportation, no programming, no bit commitment, complementarity between correctable channels and deletion channels, characterization of entanglement-breaking channels as measure-and-prepare channels, and others, without resorting to the mathematical framework of Hilbert spaces.

  16. Shock-activated electrochemical power supplies

    DOEpatents

    Benedick, W.B.; Graham, R.A.; Morosin, B.

    1987-04-20

    A shock-activated electrochemical power supply is provided which is initiated extremely rapidly and which has a long shelf life. Electrochemical power supplies of this invention are initiated much faster than conventional thermal batteries. Power supplies of this invention comprise an inactive electrolyte and means for generating a high-pressure shock wave such that the shock wave is propagated through the electrolyte rendering the electrolyte electrochemically active. 2 figs.

  17. Shock-activated electrochemical power supplies

    DOEpatents

    Benedick, William B.; Graham, Robert A.; Morosin, Bruno

    1988-01-01

    A shock-activated electrochemical power supply is provided which is initiated extremely rapidly and which has a long shelf life. Electrochemical power supplies of this invention are initiated much faster than conventional thermal batteries. Power supplies of this invention comprise an inactive electrolyte and means for generating a high-pressure shock wave such that the shock wave is propagated through the electrolytes rendering the electrolyte electrochemically active.

  18. Shock-activated electrochemical power supplies

    DOEpatents

    Benedick, W.B.; Graham, R.A.; Morosin, B.

    1988-11-08

    A shock-activated electrochemical power supply is provided which is initiated extremely rapidly and which has a long shelf life. Electrochemical power supplies of this invention are initiated much faster than conventional thermal batteries. Power supplies of this invention comprise an inactive electrolyte and means for generating a high-pressure shock wave such that the shock wave is propagated through the electrolytes rendering the electrolyte electrochemically active. 2 figs.

  19. Factors Affecting Shock Sensitivity of Energetic Materials

    NASA Astrophysics Data System (ADS)

    Chakravarty, Avic; Gifford, Michael John; Greenaway, Martin; Proud, William; Field, John

    2001-06-01

    An extensive study has been carried out into the relationships between the particle size of a charge, the density to which it is packed, the presence of inert additives and the sensitivity of the charge to different initiating shocks. The critical parameters for three different shock regimes have been found. The long duration shocks are provided by a commercial detonator, the medium duration shocks are provided by an electrically driven flyer-plate and the short duration shocks are imparted using laser-driven flyer plates. It has been shown that the order of sensitivity of charges to different shock regimes varies. In particular, ultrafine materials have been shown to be relatively insensitive to long duration low pressure shocks and sensitive to short duration high pressure shocks. The materials that have been studied include HNS, RDX and PETN.

  20. Probabilistic analysis of manipulation tasks: A research agenda

    SciTech Connect

    Brost, R.C.; Christiansen, A.D.

    1992-10-01

    This paper addresses the problem of manipulation planning in the presence of uncertainty. We begin by reviewing the worst-case planning techniques introduced in and show that these methods are hampered by an information gap inherent to worst-case analysis techniques. As the task uncertainty increases, these methods fail to produce useful information even though a high-quality plan may exist. To fill this gap, we present the probabilistic backprojection, which describes the likelihood that a given action will achieve the task goal from a given initial state. We provide a constructive definition of the probabilistic backprojection and related probabilistic models of manipulation task mechanics, and show how these models unify and enhance several past results in manipulation planning. These models capture the fundamental nature of the task behavior, but appear to be very complex. Methods for computing these models are sketched, but efficient computational methods remain unknown.

  1. Probabilistic analysis of manipulation tasks: A research agenda

    SciTech Connect

    Brost, R.C.; Christiansen, A.D.

    1992-01-01

    This paper addresses the problem of manipulation planning in the presence of uncertainty. We begin by reviewing the worst-case planning techniques introduced in and show that these methods are hampered by an information gap inherent to worst-case analysis techniques. As the task uncertainty increases, these methods fail to produce useful information even though a high-quality plan may exist. To fill this gap, we present the probabilistic backprojection, which describes the likelihood that a given action will achieve the task goal from a given initial state. We provide a constructive definition of the probabilistic backprojection and related probabilistic models of manipulation task mechanics, and show how these models unify and enhance several past results in manipulation planning. These models capture the fundamental nature of the task behavior, but appear to be very complex. Methods for computing these models are sketched, but efficient computational methods remain unknown.
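
    The probabilistic backprojection described in this abstract can be approximated by Monte Carlo sampling of the noisy task mechanics: simulate the action many times from the given initial state and count how often the goal is reached. The sketch below is a hypothetical illustration, not the authors' construction; the `push` action, its noise model, and the goal region are invented for the example.

```python
import random

def probabilistic_backprojection(action, goal_test, initial_state, n=10_000, seed=0):
    """Estimate P(goal | action, initial_state) by sampling the noisy mechanics."""
    rng = random.Random(seed)
    hits = sum(goal_test(action(initial_state, rng)) for _ in range(n))
    return hits / n

# Hypothetical 1-D pushing task: the action moves a block one unit toward the
# origin, with Gaussian execution noise.
def push(x, rng):
    return x - 1.0 + rng.gauss(0.0, 0.3)

p = probabilistic_backprojection(push, lambda x: abs(x) < 0.5, initial_state=1.0)
```

    Evaluating this estimate over a grid of initial states yields the likelihood map the paper calls the probabilistic backprojection, though as the authors note, efficient (non-sampling) computation remains the open problem.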

  2. Probabilistic logic methods and some applications to biology and medicine.

    PubMed

    Sakhanenko, Nikita A; Galas, David J

    2012-03-01

    For the computational analysis of biological problems (analyzing data, inferring networks and complex models, and estimating model parameters), it is common to use a range of methods based on probabilistic logic constructions, sometimes collectively called machine learning methods. Probabilistic modeling methods such as Bayesian Networks (BN) fall into this class, as do Hierarchical Bayesian Networks (HBN), Probabilistic Boolean Networks (PBN), Hidden Markov Models (HMM), and Markov Logic Networks (MLN). In this review, we describe the most general of these (MLN), and show how the above-mentioned methods are related to MLN and one another by the imposition of constraints and restrictions. This approach allows us to illustrate a broad landscape of constructions and methods, and describe some of the attendant strengths, weaknesses, and constraints of many of these methods. We then provide some examples of their applications to problems in biology and medicine, with an emphasis on genetics. The key concepts needed to picture this landscape of methods are the ideas of probabilistic graphical models, the structures of the graphs, and the scope of the logical language repertoire used (from First-Order Logic [FOL] to Boolean logic). These concepts are interlinked and together define the nature of each of the probabilistic logic methods. Finally, we discuss the initial applications of MLN to genetics, show the relationship to less general methods like BN, and then mention several examples where such methods could be effective in new applications to specific biological and medical problems.

  3. Probabilistic quantum teleportation in the presence of noise

    NASA Astrophysics Data System (ADS)

    Fortes, Raphael; Rigolin, Gustavo

    2016-06-01

    We extend the research program initiated in [Phys. Rev. A 92, 012338 (2015), 10.1103/PhysRevA.92.012338] from noisy deterministic teleportation protocols to noisy probabilistic (conditional) protocols. Our main goal now is to study how we can increase the fidelity of the teleported state in the presence of noise by working with probabilistic protocols. We work with several scenarios involving the most common types of noise in realistic implementations of quantum communication tasks and find many cases where adding more noise to the probabilistic protocol considerably increases the fidelity of the teleported state, without decreasing the probability of a successful run of the protocol. Also, there are cases where the entanglement of the channel connecting Alice and Bob leading to the greatest fidelity is not maximal. Moreover, there exist cases where the optimal fidelity for the probabilistic protocols is greater than the maximal fidelity (2/3) achievable by using only classical resources, while the optimal fidelity for the deterministic protocols under the same conditions lies below this limit. This result clearly illustrates that in some cases we can only get a truly quantum teleportation if we use probabilistic instead of deterministic protocols.

  4. PROBABILISTIC INFORMATION INTEGRATION TECHNOLOGY

    SciTech Connect

    J. BOOKER; M. MEYER; ET AL

    2001-02-01

    The Statistical Sciences Group at Los Alamos has successfully developed a structured, probabilistic, quantitative approach for the evaluation of system performance based on multiple information sources, called Information Integration Technology (IIT). The technology integrates diverse types and sources of data and information (both quantitative and qualitative), and their associated uncertainties, to develop distributions for performance metrics, such as reliability. Applications include predicting complex system performance, where test data are lacking or expensive to obtain, through the integration of expert judgment, historical data, computer/simulation model predictions, and any relevant test/experimental data. The technology is particularly well suited for tracking estimated system performance for systems under change (e.g. development, aging), and can be used at any time during product development, including concept and early design phases, prior to prototyping, testing, or production, and before costly design decisions are made. Techniques from various disciplines (e.g., state-of-the-art expert elicitation, statistical and reliability analysis, design engineering, physics modeling, and knowledge management) are merged and modified to develop formal methods for the data/information integration. The power of this technology, known as PREDICT (Performance and Reliability Evaluation with Diverse Information Combination and Tracking), won a 1999 R&D 100 Award (Meyer, Booker, Bement, Kerscher, 1999). Specifically the PREDICT application is a formal, multidisciplinary process for estimating the performance of a product when test data are sparse or nonexistent. The acronym indicates the purpose of the methodology: to evaluate the performance or reliability of a product/system by combining all available (often diverse) sources of information and then tracking that performance as the product undergoes changes.

  5. Probabilistic exposure fusion.

    PubMed

    Song, Mingli; Tao, Dacheng; Chen, Chun; Bu, Jiajun; Luo, Jiebo; Zhang, Chengqi

    2012-01-01

    The luminance of a natural scene is often of high dynamic range (HDR). In this paper, we propose a new scheme to handle HDR scenes by integrating locally adaptive scene detail capture and suppressing gradient reversals introduced by the local adaptation. The proposed scheme is novel for capturing an HDR scene by using a standard dynamic range (SDR) device and synthesizing an image suitable for SDR displays. In particular, we use an SDR capture device to record scene details (i.e., the visible contrasts and the scene gradients) in a series of SDR images with different exposure levels. Each SDR image responds to a fraction of the HDR and partially records scene details. With the captured SDR image series, we first calculate the image luminance levels, which maximize the visible contrasts, and then the scene gradients embedded in these images. Next, we synthesize an SDR image by using a probabilistic model that preserves the calculated image luminance levels and suppresses reversals in the image luminance gradients. The synthesized SDR image contains many more scene details than any of the captured SDR images. Moreover, the proposed scheme also functions as the tone mapping of an HDR image to an SDR image, and it is superior to both global and local tone mapping operators. This is because global operators fail to preserve visual details when the contrast ratio of a scene is large, whereas local operators often produce halos in the synthesized SDR image. The proposed scheme does not require any human interaction or parameter tuning for different scenes. Subjective evaluations have shown that it is preferred over a number of existing approaches.
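
    As a rough analogue of the fusion step (not the paper's probabilistic model, which additionally suppresses gradient reversals), one can weight each pixel of the exposure series by its closeness to mid-gray and take the weighted average. A minimal sketch with made-up luminance values:

```python
import math

def fuse_exposures(images, sigma=0.2):
    """Naive exposure fusion: weight each pixel by closeness to mid-gray.
    `images` is a list of equal-length lists of luminances in [0, 1]."""
    fused = []
    for pixels in zip(*images):
        weights = [math.exp(-((v - 0.5) ** 2) / (2 * sigma ** 2)) for v in pixels]
        total = sum(weights)
        fused.append(sum(w * v for w, v in zip(weights, pixels)) / total)
    return fused

# One pixel captured at three hypothetical exposure levels:
fused = fuse_exposures([[0.1], [0.5], [0.9]])
```

    Over- and under-exposed samples receive small weights, so the well-exposed sample dominates the fused result.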

  6. Simulations of Converging Shock Collisions for Shock Ignition

    NASA Astrophysics Data System (ADS)

    Sauppe, Joshua; Dodd, Evan; Loomis, Eric

    2016-10-01

    Shock ignition (SI) has been proposed as an alternative to achieving high gain in inertial confinement fusion (ICF) targets. A central hot spot below the ignition threshold is created by an initial compression pulse, and a second laser pulse drives a strong converging shock into the fuel. The collision between the rebounding shock from the compression pulse and the converging shock results in amplification of the converging shock and increases the hot spot pressure above the ignition threshold. We investigate shock collision in SI drive schemes for cylindrical targets with a polystyrene foam interior using radiation-hydrodynamics simulations with the RAGE code. The configuration is similar to previous targets fielded on the Omega laser. The CH interior results in a lower convergence ratio and the cylindrical geometry facilitates visualization of the shock transit using an axial X-ray backlighter, both of which are important for comparison to potential experimental measurements. One-dimensional simulations are used to determine shock timing, and the effects of low mode asymmetries in 2D computations are also quantified. LA-UR-16-24773.

  7. Is this septic shock? A rare case of distributive shock.

    PubMed

    Val-Flores, Luis Silva; Fior, Alberto; Santos, Ana; Reis, Luís; Bento, Luís

    2014-01-01

    The authors report a rare case of shock in a patient without significant clinical history, admitted to the intensive care unit for suspected septic shock. The patient was initially treated with fluid therapy without improvement. A hypothesis of systemic capillary leak syndrome was postulated following the confirmation of severe hypoalbuminemia, hypotension, and hemoconcentration--a combination of three symptoms typical of the disease. The authors discussed the differential diagnosis and also conducted a review of the diagnosis and treatment of the disease.

  8. Is this septic shock? A rare case of distributive shock

    PubMed Central

    Val-Flores, Luis Silva; Fior, Alberto; Santos, Ana; Reis, Luís; Bento, Luís

    2014-01-01

    The authors report a rare case of shock in a patient without significant clinical history, admitted to the intensive care unit for suspected septic shock. The patient was initially treated with fluid therapy without improvement. A hypothesis of systemic capillary leak syndrome was postulated following the confirmation of severe hypoalbuminemia, hypotension, and hemoconcentration - a combination of three symptoms typical of the disease. The authors discussed the differential diagnosis and also conducted a review of the diagnosis and treatment of the disease. PMID:25607273

  9. Septic Shock

    PubMed Central

    Lansing, Allan M.

    1963-01-01

    Septic shock may be defined as hypotension caused by bacteremia and accompanied by decreased peripheral blood flow, evidenced by oliguria. Clinically, a shaking chill is the warning signal. The immediate cause of hypotension is pooling of blood in the periphery, leading to decreased venous return: later, peripheral resistance falls and cardiac failure may occur. Irreversible shock is comparable to massive reactive hyperemia. Reticuloendothelial failure, histamine release, and toxic hypersensitivity may be factors in the pathogenesis of septic shock. Adrenal failure does not usually occur, but large doses of corticosteroid are employed therapeutically to counteract the effect of histamine release or hypersensitivity to endotoxin. The keys to successful therapy are time, antibiotics, vasopressors, cortisone and correction of acidosis. PMID:14063936

  10. Intense shock waves and shock-compressed gas flows in the channels of rail accelerators

    NASA Astrophysics Data System (ADS)

    Bobashev, S. V.; Zhukov, B. G.; Kurakin, R. O.; Ponyaev, S. A.; Reznikov, B. I.; Tverdokhlebov, K. V.

    2015-01-01

    Shock wave generation and shock-compressed gas flows attendant on the acceleration of a striker-free plasma piston in the channels of electromagnetic rail accelerators (railguns) are studied. Experiments are carried out in channels filled with helium or argon to an initial pressure of 25-500 Torr. At a pressure of 25 Torr, Mach numbers equal 32 in argon and 16 in helium. It is found that with the initial currents and initial gas densities in the channels being the same, the shock wave velocities in both gases almost coincide. Unlike standard shock tubes, a high electric field (up to 300 V/cm) present in the channel governs the motion of a shock-compressed layer. Once the charged particle concentration behind the shock wave becomes sufficiently high, the field causes part of the discharge current to pass through the shock-compressed layer. As a result, the glow of the layer becomes much more intense.

  11. Converging cylindrical shocks in ideal magnetohydrodynamics

    SciTech Connect

    Pullin, D. I.; Mostert, W.; Wheatley, V.; Samtaney, R.

    2014-09-15

    We consider a cylindrically symmetrical shock converging onto an axis within the framework of ideal, compressible-gas non-dissipative magnetohydrodynamics (MHD). In cylindrical polar co-ordinates we restrict attention to either constant axial magnetic field or to the azimuthal but singular magnetic field produced by a line current on the axis. Under the constraint of zero normal magnetic field and zero tangential fluid speed at the shock, a set of restricted shock-jump conditions are obtained as functions of the shock Mach number, defined as the ratio of the local shock speed to the unique magnetohydrodynamic wave speed ahead of the shock, and also of a parameter measuring the local strength of the magnetic field. For the line current case, two approaches are explored and the results compared in detail. The first is geometrical shock-dynamics where the restricted shock-jump conditions are applied directly to the equation on the characteristic entering the shock from behind. This gives an ordinary-differential equation for the shock Mach number as a function of radius which is integrated numerically to provide profiles of the shock implosion. Also, analytic, asymptotic results are obtained for the shock trajectory at small radius. The second approach is direct numerical solution of the radially symmetric MHD equations using a shock-capturing method. For the axial magnetic field case the shock implosion is of the Guderley power-law type with exponent that is not affected by the presence of a finite magnetic field. For the axial current case, however, the presence of a tangential magnetic field ahead of the shock with strength inversely proportional to radius introduces a length scale R = √(μ₀/p₀) I/(2π), where I is the current, μ₀ is the permeability, and p₀ is the pressure ahead of the shock. For shocks initiated at r ≫ R, shock convergence is first accompanied by shock strengthening as for the strictly gas-dynamic implosion. The

  12. Converging cylindrical shocks in ideal magnetohydrodynamics

    NASA Astrophysics Data System (ADS)

    Pullin, D. I.; Mostert, W.; Wheatley, V.; Samtaney, R.

    2014-09-01

    We consider a cylindrically symmetrical shock converging onto an axis within the framework of ideal, compressible-gas non-dissipative magnetohydrodynamics (MHD). In cylindrical polar co-ordinates we restrict attention to either constant axial magnetic field or to the azimuthal but singular magnetic field produced by a line current on the axis. Under the constraint of zero normal magnetic field and zero tangential fluid speed at the shock, a set of restricted shock-jump conditions are obtained as functions of the shock Mach number, defined as the ratio of the local shock speed to the unique magnetohydrodynamic wave speed ahead of the shock, and also of a parameter measuring the local strength of the magnetic field. For the line current case, two approaches are explored and the results compared in detail. The first is geometrical shock-dynamics where the restricted shock-jump conditions are applied directly to the equation on the characteristic entering the shock from behind. This gives an ordinary-differential equation for the shock Mach number as a function of radius which is integrated numerically to provide profiles of the shock implosion. Also, analytic, asymptotic results are obtained for the shock trajectory at small radius. The second approach is direct numerical solution of the radially symmetric MHD equations using a shock-capturing method. For the axial magnetic field case the shock implosion is of the Guderley power-law type with exponent that is not affected by the presence of a finite magnetic field. For the axial current case, however, the presence of a tangential magnetic field ahead of the shock with strength inversely proportional to radius introduces a length scale R = √(μ₀/p₀) I/(2π), where I is the current, μ₀ is the permeability, and p₀ is the pressure ahead of the shock. For shocks initiated at r ≫ R, shock convergence is first accompanied by shock strengthening as for the strictly gas-dynamic implosion. The diverging magnetic field
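
    The length scale R = √(μ₀/p₀) I/(2π) quoted in the abstract is easy to evaluate numerically. A minimal sketch in SI units; the line current and ambient pressure below are arbitrary illustrative values, not taken from the paper:

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability, H/m

def magnetic_length_scale(current, pressure):
    """R = sqrt(mu0 / p0) * I / (2*pi) for the line-current converging shock."""
    return math.sqrt(MU0 / pressure) * current / (2.0 * math.pi)

# Hypothetical values: a 1 MA line current into gas at 1 kPa ambient pressure.
R = magnetic_length_scale(1.0e6, 1.0e3)  # in meters
```

    Shocks launched at radii much larger than this R behave gas-dynamically at first, with magnetic effects entering as the shock converges toward r ~ R.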

  13. Vagueness as Probabilistic Linguistic Knowledge

    NASA Astrophysics Data System (ADS)

    Lassiter, Daniel

    Consideration of the metalinguistic effects of utterances involving vague terms has led Barker [1] to treat vagueness using a modified Stalnakerian model of assertion. I present a sorites-like puzzle for factual beliefs in the standard Stalnakerian model [28] and show that it can be resolved by enriching the model to make use of probabilistic belief spaces. An analogous problem arises for metalinguistic information in Barker's model, and I suggest that a similar enrichment is needed here as well. The result is a probabilistic theory of linguistic representation that retains a classical metalanguage but avoids the undesirable divorce between meaning and use inherent in the epistemic theory [34]. I also show that the probabilistic approach provides a plausible account of the sorites paradox and higher-order vagueness and that it fares well empirically and conceptually in comparison to leading competitors.

  14. Probabilistic inversion: a preliminary discussion

    NASA Astrophysics Data System (ADS)

    Battista Rossi, Giovanni; Crenna, Francesco

    2015-02-01

    We continue the discussion on the possibility of interpreting probability as a logic, which we began at the previous IMEKO TC1-TC7-TC13 Symposium. We show here how a probabilistic logic can be extended to include direct and inverse functions. We also discuss the relationship between this framework and the Bayes-Laplace rule, showing how the latter can be formally interpreted as a probabilistic inversion device. We suggest that these findings open a new perspective in the evaluation of measurement uncertainty.
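
    The reading of the Bayes-Laplace rule as a probabilistic inversion device can be made concrete with a discrete toy model: given a forward model P(reading | x) and a prior P(x), invert to obtain P(x | reading). The two-state quantity and the error rates below are hypothetical, chosen only for illustration:

```python
# Bayes-Laplace rule as probabilistic inversion (illustrative values only).

def bayes_invert(prior, likelihood, observation):
    """Return the posterior P(x | observation) over the states in `prior`."""
    unnormalized = {x: prior[x] * likelihood[x][observation] for x in prior}
    total = sum(unnormalized.values())
    return {x: p / total for x, p in unnormalized.items()}

# Hypothetical measurement: the quantity x is 'low' or 'high', and the
# instrument reports 'low' or 'high' with some error probability.
prior = {"low": 0.5, "high": 0.5}
likelihood = {
    "low":  {"low": 0.9, "high": 0.1},   # P(reading | x = low)
    "high": {"low": 0.2, "high": 0.8},   # P(reading | x = high)
}
posterior = bayes_invert(prior, likelihood, "high")
```

    The forward map (state to reading) is inverted into a distribution over states, which is the sense in which the rule acts as an inversion device in uncertainty evaluation.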

  15. Some characteristics of probabilistic one-sided splicing systems

    NASA Astrophysics Data System (ADS)

    Selvarajoo, Mathuri; Fong, Wan Heng; Sarmin, Nor Haniza; Turaev, Sherzod

    2013-04-01

    A theoretical model for DNA computing using the recombination behavior of DNA molecules, known as a splicing system, was introduced in 1987. Splicing systems are based on the splicing operation which, informally, cuts two strings at specific places and attaches the prefix of the first string to the suffix of the second string and the prefix of the second string to the suffix of the first string, yielding new strings. It is known that splicing systems with finite sets of axioms and splicing rules only generate regular languages. Hence, different types of restrictions for splicing systems have been considered to increase the computational power of the languages generated. Recently, probabilistic splicing systems have been introduced where the probabilities are initially associated with the axioms, and the probabilities of the generated strings are computed from the probabilities of the initial strings. In this paper, some properties of probabilistic one-sided splicing systems, which are special types of probabilistic splicing systems, are investigated. We prove that probabilistic one-sided splicing systems can also increase the computational power of the languages generated.
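
    The splicing operation described informally above (cut two strings, then cross-attach the prefixes and suffixes) is simple to state in code. A minimal sketch; the strings and cut points are arbitrary, and the product rule for the generated string's probability is one common convention, not necessarily the one used in the paper:

```python
def splice(x, y, i, j):
    """Cut x at position i and y at position j; return both crossover strings."""
    return x[:i] + y[j:], y[:j] + x[i:]

def p_splice(px, py):
    """Hypothetical convention: a generated string inherits the product of
    its two parents' probabilities."""
    return px * py

# Two axiom strings with made-up probabilities, cut at positions 2 and 1:
w1, w2 = splice("ACGT", "TTAA", 2, 1)
p = p_splice(0.5, 0.4)
```

    In a one-sided system the splicing rule constrains only one of the two strings, which is the restricted setting the paper analyzes.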

  16. Impact of Shock Front Nonstationarity on the Acceleration of Heavy Ions by Perpendicular Collisionless Shocks

    NASA Astrophysics Data System (ADS)

    Yang, Z.; Lembege, B.; Lu, Q.

    2010-12-01

    Both hybrid/full-particle simulations and recent experimental results have clearly evidenced that the front of a supercritical quasi-perpendicular shock can be nonstationary, corresponding to a self-reformation of the front due to the accumulation of reflected ions. Not only the amplitude but also the spatial scales of field components at the front (ramp and foot) vary strongly within each cycle of the self-reformation. On the other hand, several studies have been made on the acceleration and heating of heavy ions but most have been restricted to a stationary shock profile only. Herein, one-dimensional test particle simulations with field components taken from a self-consistent 1D PIC simulation are performed in order to investigate the impact of shock front non-stationarity on heavy ion acceleration (He, O, Fe). Reflection and acceleration mechanisms of heavy ions with different initial thermal velocities and different charge-mass ratios interacting with a non-stationary shock front (self-reformation) are analyzed in detail. Present preliminary results show that: (i) the heavy ions undergo shock drift acceleration (SDA) and shock surfing acceleration (SSA), which will be compared with previous works; (ii) the fraction of reflected heavy ions increases with initial kinetic energy, charge-mass ratio and decreasing shock front width at both stationary shocks (equivalent to a fixed shock regime) and non-stationary shocks (equivalent to a continuously time-evolving shock regime); (iii) the shock front non-stationarity facilitates the reflection of heavy ions for broad (rather than narrow) shock profiles; (iv) the high-energy part of the Fe/O ratio spectra at a non-stationary shock decreases with shock ramp width. The impact of the shock front non-stationarity on the heavy ion spectra within the shock front region and the downstream region will also be discussed.

  17. Molecular Shock Response of Explosives: Electronic Absorption Spectroscopy

    NASA Astrophysics Data System (ADS)

    McGrane, S. D.; Moore, D. S.; Whitley, V. H.; Bolme, C. A.; Eakins, D. E.

    2009-12-01

    Electronic absorption spectroscopy in the range 400-800 nm was coupled to ultrafast laser generated shocks to begin addressing the extent to which electronic excitations are involved in shock induced reactions. Data are presented on shocked polymethylmethacrylate (PMMA) thin films and single crystal pentaerythritol tetranitrate (PETN). Shocked PMMA exhibited thin film interference effects from the shock front. Shocked PETN exhibited interference as well as broadband increased absorption. Relation to shock initiation and the need for time dependent absorption (future experiments) is briefly discussed.

  18. Probabilistic assessment of composite structures

    NASA Technical Reports Server (NTRS)

    Shiao, Michael E.; Abumeri, Galib H.; Chamis, Christos C.

    1993-01-01

    A general computational simulation methodology for an integrated probabilistic assessment of composite structures is discussed and demonstrated using aircraft fuselage (stiffened composite cylindrical shell) structures with rectangular cutouts. The computational simulation was performed for the probabilistic assessment of the structural behavior including buckling loads, vibration frequencies, global displacements, and local stresses. The scatter in the structural response is simulated based on the inherent uncertainties in the primitive (independent random) variables at the fiber matrix constituent, ply, laminate, and structural scales that describe the composite structures. The effect of uncertainties due to fabrication process variables such as fiber volume ratio, void volume ratio, ply orientation, and ply thickness is also included. The methodology has been embedded in the computer code IPACS (Integrated Probabilistic Assessment of Composite Structures). In addition to the simulated scatter, the IPACS code also calculates the sensitivity of the composite structural behavior to all the primitive variables that influence the structural behavior. This information is useful for assessing reliability and providing guidance for improvement. The results from the probabilistic assessment for the composite structure with rectangular cutouts indicate that the uncertainty in the longitudinal ply stress is mainly caused by the uncertainty in the laminate thickness, and the large overlap of the scatter in the first four buckling loads implies that the buckling mode shape for a specific buckling load can be either of the four modes.

  19. Research on probabilistic information processing

    NASA Technical Reports Server (NTRS)

    Edwards, W.

    1973-01-01

    The work accomplished on probabilistic information processing (PIP) is reported. The research proposals and decision analysis are discussed along with the results of research on MSC setting, multiattribute utilities, and Bayesian research. Abstracts of reports concerning the PIP research are included.

  20. Making Probabilistic Relational Categories Learnable

    ERIC Educational Resources Information Center

    Jung, Wookyoung; Hummel, John E.

    2015-01-01

    Theories of relational concept acquisition (e.g., schema induction) based on structured intersection discovery predict that relational concepts with a probabilistic (i.e., family resemblance) structure ought to be extremely difficult to learn. We report four experiments testing this prediction by investigating conditions hypothesized to facilitate…

  1. Shock-induced chemistry in organic materials

    SciTech Connect

    Dattelbaum, Dana M; Sheffield, Steve; Engelke, Ray; Manner, Virginia; Chellappa, Raja; Yoo, Choong - Shik

    2011-01-20

    The combined 'extreme' environments of high pressure, temperature, and strain rates, encountered under shock loading, offer enormous potential for the discovery of new paradigms in chemical reactivity not possible under more benign conditions. All organic materials are expected to react under these conditions, yet we currently understand very little about the first bond-breaking steps behind the shock front, such as in the shock initiation of explosives, or shock-induced reactivity of other relevant materials. Here, I will present recent experimental results of shock-induced chemistry in a variety of organic materials under sustained shock conditions. A comparison between the reactivity of different structures is given, and a perspective on the kinetics of reaction completion under shock drives.

  2. Factors Affecting Shock Sensitivity of Energetic Materials

    NASA Astrophysics Data System (ADS)

    Chakravarty, A.; Gifford, M. J.; Greenaway, M. W.; Proud, W. G.; Field, J. E.

    2002-07-01

    An extensive study has been carried out into the relationships between the particle size of a charge, the density to which it is packed, the presence of inert additives and the sensitivity of the charge to different initiating shocks. The critical parameters for two different shock regimes have been found. The long duration shocks are provided by a commercial detonator and the short duration shocks are imparted using laser-driven flyer plates. It has been shown that the order of sensitivity of charges to different shock regimes varies. In particular, ultrafine materials have been shown to be relatively insensitive to long duration low pressure shocks and sensitive to short duration high pressure shocks. The materials that have been studied include HNS, RDX and PETN.

  3. Chondrule Destruction in Nebular Shocks

    NASA Astrophysics Data System (ADS)

    Jacquet, Emmanuel; Thompson, Christopher

    2014-12-01

    Chondrules are millimeter-sized silicate spherules ubiquitous in primitive meteorites, but whose origin remains mysterious. One of the main proposed mechanisms for producing them is melting of solids in shock waves in the gaseous protoplanetary disk. However, evidence is mounting that chondrule-forming regions were enriched in solids well above solar abundances. Given the high velocities involved in shock models, destructive collisions would be expected between differently sized grains after passage of the shock front as a result of differential drag. We investigate the probability and outcome of collisions of particles behind a one-dimensional shock using analytic methods as well as a full integration of the coupled mass, momentum, energy, and radiation equations. Destruction of protochondrules seems unavoidable for solid/gas ratios ε ≳ 0.1, and possibly even for solar abundances because of "sandblasting" by finer dust. A flow with ε ≳ 10 requires much smaller shock velocities (~2 versus 8 km s⁻¹) in order to achieve chondrule-melting temperatures, and radiation trapping allows slow cooling of the shocked fragments. Initial destruction would still be extensive; although re-assembly of millimeter-sized particles would naturally occur by grain sticking afterward, the compositional heterogeneity of chondrules may be difficult to reproduce. We finally note that solids passing through small-scale bow shocks around few-kilometer-sized planetesimals might experience partial melting and yet escape fragmentation.

  4. Weak and strong probabilistic solutions for a stochastic quasilinear parabolic equation with nonstandard growth

    NASA Astrophysics Data System (ADS)

    Ali, Z. I.; Sango, M.

    2016-07-01

    In this paper, we investigate a class of stochastic quasilinear parabolic initial boundary value problems with nonstandard growth in the functional setting of generalized Sobolev spaces. The deterministic version of the equation was first introduced and studied by Samokhin in [45] as a generalized model for polytropic filtration. We establish an existence result of weak probabilistic solutions when the forcing terms do not satisfy Lipschitz conditions. Under the Lipschitz property of the forcing terms, we obtain the uniqueness of weak probabilistic solutions. Combining the uniqueness and the famous Yamada-Watanabe result, we prove the existence of a unique strong probabilistic solution of the problem.

  5. Management of Shock in Neonates.

    PubMed

    Bhat, B Vishnu; Plakkal, Nishad

    2015-10-01

    Shock is characterized by inadequate oxygen delivery to the tissues, and is more frequent in very low birth weight infants, especially in the first few days of life. Shock is an independent predictor of mortality, and the survivors are at a higher risk of neurologic impairment. Understanding the pathophysiology helps to recognize and classify shock in the early compensated phase and initiate appropriate treatment. Hypovolemia is rarely the primary cause of shock in neonates. Myocardial dysfunction is especially common in extremely preterm infants, and in term infants with perinatal asphyxia. Blood pressure measurements are easy, but correlate poorly with cerebral and systemic blood flows. Point-of-care cardiac ultrasound can help in individualized assessment of problems, selecting appropriate therapy and monitoring response, but may not always be available, and long-term benefits need to be demonstrated. The use of near-infrared spectroscopy to guide treatment of neonatal shock is currently experimental. In the absence of hypovolemia, excessive administration of fluid boluses is inappropriate therapy. Dobutamine and dopamine are the most common initial inotropes used in neonatal shock. Dobutamine has been shown to improve systemic blood flow, especially in very low birth weight infants, but dopamine is better at improving blood pressure in hypotensive infants. Newer inodilators including milrinone and levosimendan may be useful in selected settings. Data on long-term survival and neurologic outcomes following different management strategies are scarce and future research efforts should focus on this.

  6. Probabilistic load simulation: Code development status

    NASA Technical Reports Server (NTRS)

    Newell, J. F.; Ho, H.

    1991-01-01

    The objective of the Composite Load Spectra (CLS) project is to develop generic load models to simulate the composite load spectra that are included in space propulsion system components. The probabilistic loads thus generated are part of the probabilistic design analysis (PDA) of a space propulsion system that also includes probabilistic structural analyses, reliability, and risk evaluations. Probabilistic load simulation for space propulsion systems demands sophisticated probabilistic methodology and requires large amounts of load information and engineering data. The CLS approach is to implement a knowledge based system coupled with a probabilistic load simulation module. The knowledge base manages and furnishes load information and expertise and sets up the simulation runs. The load simulation module performs the numerical computation to generate the probabilistic loads with load information supplied from the CLS knowledge base.

  7. Probabilistic load simulation: Code development status

    NASA Astrophysics Data System (ADS)

    Newell, J. F.; Ho, H.

    1991-05-01

    The objective of the Composite Load Spectra (CLS) project is to develop generic load models to simulate the composite load spectra that are included in space propulsion system components. The probabilistic loads thus generated are part of the probabilistic design analysis (PDA) of a space propulsion system that also includes probabilistic structural analyses, reliability, and risk evaluations. Probabilistic load simulation for space propulsion systems demands sophisticated probabilistic methodology and requires large amounts of load information and engineering data. The CLS approach is to implement a knowledge based system coupled with a probabilistic load simulation module. The knowledge base manages and furnishes load information and expertise and sets up the simulation runs. The load simulation module performs the numerical computation to generate the probabilistic loads with load information supplied from the CLS knowledge base.

  8. Shocks in the Early Universe

    NASA Astrophysics Data System (ADS)

    Pen, Ue-Li; Turok, Neil

    2016-09-01

    We point out a surprising consequence of the usually assumed initial conditions for cosmological perturbations. Namely, a spectrum of Gaussian, linear, adiabatic, scalar, growing mode perturbations not only creates acoustic oscillations of the kind observed on very large scales today, it also leads to the production of shocks in the radiation fluid of the very early Universe. Shocks cause departures from local thermal equilibrium as well as create vorticity and gravitational waves. For a scale-invariant spectrum and standard model physics, shocks form for temperatures 1 GeV < T < 10^7 GeV. For more general power spectra, shock formation and the consequent gravitational wave emission provide a signal detectable by current and planned gravitational wave experiments, allowing them to strongly constrain conditions present in the primordial Universe as early as 10^-30 sec after the big bang.

  9. Undercuts by Laser Shock Forming

    SciTech Connect

    Wielage, Hanna; Vollertsen, Frank

    2011-05-04

    In laser shock forming, TEA-CO₂-laser-induced shock waves are used to form metal foils such as aluminum or copper. The process utilizes a plasma shock wave initiated on the target surface, which forms the foil. A challenge in forming technologies is the manufacturing of undercuts, which are not feasible by conventional forming methods. This article presents how undercuts in the micro range can be produced by laser shock deep drawing. Different drawing die diameters and die depths were investigated for aluminum foils 20 and 50 µm thick. Smaller die diameters are shown to facilitate undercuts compared with larger die diameters, a phenomenon that can be explained by Barlow's formula. Furthermore, the maximum undercut depth reachable at different die diameters is shown. To this end, cross-sections of the different parameter combinations are displayed.

  10. Shocks in the Early Universe.

    PubMed

    Pen, Ue-Li; Turok, Neil

    2016-09-23

    We point out a surprising consequence of the usually assumed initial conditions for cosmological perturbations. Namely, a spectrum of Gaussian, linear, adiabatic, scalar, growing mode perturbations not only creates acoustic oscillations of the kind observed on very large scales today, it also leads to the production of shocks in the radiation fluid of the very early Universe. Shocks cause departures from local thermal equilibrium as well as create vorticity and gravitational waves. For a scale-invariant spectrum and standard model physics, shocks form for temperatures 1 GeV < T < 10^{7} GeV. For more general power spectra, shock formation and the consequent gravitational wave emission provide a signal detectable by current and planned gravitational wave experiments, allowing them to strongly constrain conditions present in the primordial Universe as early as 10^{-30} sec after the big bang.

  11. Probabilistic Flash Flood Forecasting using Stormscale Ensembles

    NASA Astrophysics Data System (ADS)

    Hardy, J.; Gourley, J. J.; Kain, J. S.; Clark, A.; Novak, D.; Hong, Y.

    2013-12-01

    Flash flooding is one of the most costly and deadly natural hazards in the US and across the globe. The loss of life and property from flash floods could be mitigated with better guidance from hydrological models, but these models have limitations. For example, they are commonly initialized using rainfall estimates derived from weather radars, but the time interval between observations of heavy rainfall and a flash flood can be on the order of minutes, particularly for small basins in urban settings. Increasing the lead time for these events is critical for protecting life and property. Therefore, this study advances the use of quantitative precipitation forecasts (QPFs) from a stormscale NWP ensemble system into a distributed hydrological model setting to yield basin-specific, probabilistic flash flood forecasts (PFFFs). Rainfall error characteristics of the individual members are first diagnosed and quantified in terms of structure, amplitude, and location (SAL; Wernli et al., 2008). Amplitude and structure errors are readily correctable due to their diurnal nature, and the fine scales represented by the CAPS QPF members are consistent with radar-observed rainfall, mainly showing larger errors with afternoon convection. To account for the spatial uncertainty of the QPFs, we use an elliptic smoother, as in Marsh et al. (2012), to produce probabilistic QPFs (PQPFs). The elliptic smoother takes into consideration underdispersion, which is notoriously associated with stormscale ensembles, and thus is good for targeting the approximate regions that may receive heavy rainfall. However, stormscale details contained in individual members are still needed to yield reasonable flash flood simulations. Therefore, on a case study basis, QPFs from individual members are then run through the hydrological model with their predicted structure and corrected amplitudes, but the locations of individual rainfall elements are perturbed within the PQPF elliptical regions using Monte
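    The SAL amplitude component mentioned above can be sketched in a few lines. The formula follows Wernli et al. (2008); the rainfall fields below are purely illustrative, not data from this study:

```python
# Sketch of the SAL amplitude component (following Wernli et al., 2008) used
# to diagnose QPF errors. A compares the domain-averaged rainfall of forecast
# and observation and is bounded in [-2, 2]; the values below are illustrative.

def sal_amplitude(forecast_mm, observed_mm):
    """A = (D_fc - D_ob) / (0.5 * (D_fc + D_ob)), D being the domain mean."""
    d_fc = sum(forecast_mm) / len(forecast_mm)
    d_ob = sum(observed_mm) / len(observed_mm)
    return (d_fc - d_ob) / (0.5 * (d_fc + d_ob))

# A forecast that doubles the observed rainfall gives A = +2/3;
# a perfect forecast gives A = 0.
print(sal_amplitude([10.0, 20.0, 30.0, 40.0], [5.0, 10.0, 15.0, 20.0]))
print(sal_amplitude([5.0, 10.0], [5.0, 10.0]))
```

A forecast with too much domain-total rain gives positive A, too little gives negative A, independent of where the rain falls; the structure and location components handle the rest.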

  12. Potential change in flaw geometry of an initially shallow finite-length surface flaw during a pressurized-thermal-shock transient

    SciTech Connect

    Shum, D.K.; Bryson, J.W.; Merkle, J.G.

    1993-09-01

    This study presents preliminary estimates of whether a shallow, axially oriented, inner-surface, finite-length flaw in a PWR-RPV would tend to elongate in the axial direction and/or deepen into the wall of the vessel during a postulated PTS transient. Analysis results obtained under the assumptions of (1) linear-elastic material response and (2) cladding with the same toughness as the base metal indicate that a nearly semicircular flaw would likely propagate in the axial direction, followed by propagation into the wall of the vessel. Note that these results correspond to initiation within the lower-shelf fracture toughness temperature range, and that their general validity within the lower-transition temperature range remains to be determined. The sensitivity of the numerical results and conclusions to the following analysis assumptions is evaluated: (1) reference flaw geometry along the entire crack front, especially within the cladding region; (2) linear-elastic vs. elastic-plastic description of material response; and (3) base-material-only vs. bimaterial cladding-base vessel-model assumption. The sensitivity evaluation indicates that the analysis results are very sensitive to the above assumptions.

  13. The Probabilistic Admissible Region with Additional Constraints

    NASA Astrophysics Data System (ADS)

    Roscoe, C.; Hussein, I.; Wilkins, M.; Schumacher, P.

    The admissible region, in the space surveillance field, is defined as the set of physically acceptable orbits (e.g., orbits with negative energies) consistent with one or more observations of a space object. Given additional constraints on orbital semimajor axis, eccentricity, etc., the admissible region can be constrained, resulting in the constrained admissible region (CAR). Based on known statistics of the measurement process, one can replace hard constraints with a probabilistic representation of the admissible region. This results in the probabilistic admissible region (PAR), which can be used for orbit initiation in Bayesian tracking and prioritization of tracks in a multiple hypothesis tracking framework. The PAR concept was introduced by the authors at the 2014 AMOS conference. In that paper, a Monte Carlo approach was used to show how to construct the PAR in the range/range-rate space based on known statistics of the measurement, semimajor axis, and eccentricity. An expectation-maximization algorithm was proposed to convert the particle cloud into a Gaussian Mixture Model (GMM) representation of the PAR. This GMM can be used to initialize a Bayesian filter. The PAR was found to be significantly non-uniform, invalidating an assumption frequently made in CAR-based filtering approaches. Using the GMM or particle cloud representations of the PAR, orbits can be prioritized for propagation in a multiple hypothesis tracking (MHT) framework. In this paper, the authors focus on expanding the PAR methodology to allow additional constraints, such as a constraint on perigee altitude, to be modeled in the PAR. This requires re-expressing the joint probability density function for the attributable vector as well as the (constrained) orbital parameters and range and range-rate. The final PAR is derived by accounting for any interdependencies between the parameters. Noting that the concepts presented are general and can be applied to any measurement scenario, the idea
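    The particle-cloud-to-GMM conversion described above can be illustrated with a minimal one-dimensional expectation-maximization fit. The two-cluster particle cloud and initial guesses below are hypothetical stand-ins, not data or parameters from the paper:

```python
import math
import random

random.seed(0)

# Hypothetical 1-D "particle cloud" (e.g., candidate ranges in km) drawn from
# two clusters; illustrative only.
particles = ([random.gauss(7000, 50) for _ in range(300)]
             + [random.gauss(9000, 80) for _ in range(200)])

# Minimal two-component EM fit, sketching the particle-cloud-to-GMM step.
w = [0.5, 0.5]            # component weights
mu = [6500.0, 9500.0]     # initial means (deliberately offset)
var = [1.0e4, 1.0e4]      # initial variances

def normal_pdf(x, m, v):
    return math.exp(-(x - m) ** 2 / (2.0 * v)) / math.sqrt(2.0 * math.pi * v)

for _ in range(50):
    # E-step: responsibility of each component for each particle.
    resp = []
    for x in particles:
        p = [w[k] * normal_pdf(x, mu[k], var[k]) for k in (0, 1)]
        total = sum(p)
        resp.append([pk / total for pk in p])
    # M-step: re-estimate weights, means, and variances.
    for k in (0, 1):
        nk = sum(r[k] for r in resp)
        w[k] = nk / len(particles)
        mu[k] = sum(r[k] * x for r, x in zip(resp, particles)) / nk
        var[k] = sum(r[k] * (x - mu[k]) ** 2 for r, x in zip(resp, particles)) / nk

print("fitted means:", [round(m, 1) for m in mu])
print("fitted weights:", [round(wk, 2) for wk in w])
```

The fitted means land near the two particle clusters and the weights near their relative populations; the resulting Gaussians are exactly the kind of compact PAR representation that can seed a Bayesian filter.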

  14. Overview of the Integrated Pressurized Thermal-Shock (IPTS) study

    SciTech Connect

    Cheverton, R.D.

    1990-01-01

    By the early 1980s, pressurized-thermal-shock (PTS)-related deterministic vessel-integrity studies sponsored by the US Nuclear Regulatory Commission (NRC) indicated a potential for failure of some PWR vessels before design end of life in the event of a postulated severe PTS transient. In response, the NRC established screening criteria, in the form of limiting values of the reference nil-ductility transition temperature (RT_NDT), and initiated the development of a probabilistic methodology for evaluating vessel integrity. This latter effort, referred to as the Integrated Pressurized Thermal-Shock (IPTS) Program, included development of techniques for postulating PTS transients, estimating their frequencies, and calculating the probability of vessel failure for a specific transient. Summing the products of transient frequency and conditional probability of failure over the many postulated transients provides a calculated value of the frequency of failure. The IPTS Program also included the application of the IPTS methodology to three US PWR plants (Oconee-1, Calvert Cliffs-1, and H. B. Robinson-2) and the specification of a maximum permissible value of the calculated frequency of vessel failure. Another important purpose of the IPTS study was to determine, through application of the IPTS methodology, which design and operating features, parameters, and PTS transients were dominant in affecting the calculated frequency of failure. The scope of the IPTS Program included the development of a probabilistic fracture-mechanics capability, modification of the TRAC and RELAP5 thermal/hydraulic codes, and development of the methodology for estimating the uncertainty in the calculated frequency of vessel failure.
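    The frequency-of-failure summation described in this abstract reduces to a short calculation; the transient names and numbers below are illustrative placeholders, not values from the IPTS study:

```python
# Sketch of the IPTS-style frequency-of-failure calculation: sum, over all
# postulated PTS transients, the product of the transient's frequency and the
# conditional probability of vessel failure given that transient.
# All entries below are illustrative placeholders, not IPTS results.

transients = {
    # name: (frequency per reactor-year, P(vessel failure | transient))
    "small-break LOCA overcooling":      (1.0e-3, 2.0e-4),
    "main steam line break":             (5.0e-5, 1.0e-2),
    "overcooling with repressurization": (2.0e-6, 5.0e-2),
}

failure_frequency = sum(freq * p_fail for freq, p_fail in transients.values())
print(f"calculated vessel failure frequency: {failure_frequency:.2e} per reactor-year")
```

The total can then be compared against a maximum permissible frequency, and the per-transient products identify which transients dominate the risk.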

  15. Probabilistic Assessment of Fracture Progression in Composite Structures

    NASA Technical Reports Server (NTRS)

    Chamis, Christos C.; Minnetyan, Levon; Mauget, Bertrand; Huang, Dade; Addi, Frank

    1999-01-01

    This report describes methods and corresponding computer codes that are used to evaluate progressive damage and fracture and to perform probabilistic assessment in built-up composite structures. Structural response is assessed probabilistically, during progressive fracture. The effects of design variable uncertainties on structural fracture progression are quantified. The fast probability integrator (FPI) is used to assess the response scatter in the composite structure at damage initiation. The sensitivity of the damage response to design variables is computed. The methods are general purpose and are applicable to stitched and unstitched composites in all types of structures and fracture processes starting from damage initiation to unstable propagation and to global structure collapse. The methods are demonstrated for a polymer matrix composite stiffened panel subjected to pressure. The results indicated that composite constituent properties, fabrication parameters, and respective uncertainties have a significant effect on structural durability and reliability. Design implications with regard to damage progression, damage tolerance, and reliability of composite structures are examined.

  16. Developing Probabilistic Safety Performance Margins for Unknown and Underappreciated Risks

    NASA Technical Reports Server (NTRS)

    Benjamin, Allan; Dezfuli, Homayoon; Everett, Chris

    2015-01-01

    Probabilistic safety requirements currently formulated or proposed for space systems, nuclear reactor systems, nuclear weapon systems, and other types of systems that have a low-probability potential for high-consequence accidents depend on showing that the probability of such accidents is below a specified safety threshold or goal. Verification of compliance depends heavily upon synthetic modeling techniques such as PRA. To determine whether or not a system meets its probabilistic requirements, it is necessary to consider whether there are significant risks that are not fully considered in the PRA either because they are not known at the time or because their importance is not fully understood. The ultimate objective is to establish a reasonable margin to account for the difference between known risks and actual risks in attempting to validate compliance with a probabilistic safety threshold or goal. In this paper, we examine data accumulated over the past 60 years from the space program, from nuclear reactor experience, from aircraft systems, and from human reliability experience to formulate guidelines for estimating probabilistic margins to account for risks that are initially unknown or underappreciated. The formulation includes a review of the safety literature to identify the principal causes of such risks.

  17. Shock tubes and waves; Proceedings of the Thirteenth International Symposium, Niagara Falls, NY, July 6-9, 1981

    NASA Astrophysics Data System (ADS)

    Treanor, C. E.; Hall, J. G.

    1982-10-01

    The present conference on shock tubes and waves considers shock tube drivers, luminous shock tubes, shock tube temperature and pressure measurement, shock front distortion in real gases, nonlinear standing waves, transonic flow shock wave turbulent boundary interactions, wall roughness effects on reflected shock bifurcation, argon thermal conductivity, pattern generation in gaseous detonations, cylindrical resonators, shock tunnel-produced high gain lasers, fluid dynamic aspects of laser-metal interaction, and the ionization of argon gas behind reflected shock waves. Also discussed are the ionization relaxation of shock-heated plasmas and gases, discharge flow/shock tube studies of singlet oxygen, rotational and vibrational relaxation, chemiluminescence, thermal and shock wave decomposition of hydrogen cyanide and hydrogen azide, shock wave structure in gas-particle mixtures at low Mach numbers, binary nucleation in a Ludwieg tube, shock liquefaction experiments, pipeline explosions, the shock wave ignition of pulverized coal, and shock-initiated methane combustion.

  18. Hydrodynamic Simulations of Gaseous Argon Shock Experiments

    NASA Astrophysics Data System (ADS)

    Garcia, Daniel; Dattelbaum, Dana; Goodwin, Peter; Morris, John; Sheffield, Stephen; Burkett, Michael

    2015-06-01

    The lack of published argon gas-shock data motivated an evaluation of the argon equation of state (EOS) in gas-phase initial-density regimes never before reached. In particular, these regimes include initial pressures in the range of 200-500 psi (0.025-0.056 g/cc) and initial shock velocities around 0.2 cm/µs. The objective of the numerical evaluation was to develop a physical understanding of the EOS behavior of shocked and subsequently multiply re-shocked argon gas initially pressurized to 200-500 psi, through Pagosa numerical hydrodynamic simulations utilizing the SESAME equation of state. Pagosa is a Los Alamos National Laboratory 2-D and 3-D Eulerian hydrocode capable of modeling high-velocity compressible flow with multiple materials. The approach involved the use of gas-gun experiments to evaluate the shock and multiple re-shock behavior of pressurized argon gas, validating the Pagosa simulations and the SESAME EOS. Additionally, the diagnostic capability within the experiments allowed the EOS to be fully constrained with measured shock velocity, particle velocity, and temperature. The simulations demonstrate excellent agreement with the experiments in shock velocity/particle velocity space, but reveal unanticipated differences in the ionization-front temperatures.
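    The shock velocity/particle velocity space mentioned above connects to pressure through the standard Rankine-Hugoniot momentum jump condition, P = ρ₀·Us·up (neglecting the small initial pressure). A minimal sketch, where the argon values only illustrate the regime described and the particle velocity is an assumed placeholder, not measured data:

```python
# Rankine-Hugoniot momentum jump condition: P = rho0 * Us * up.
# With rho0 in g/cm^3 and velocities in cm/us, P comes out in Mbar (= 100 GPa),
# a standard unit convention in shock physics.

def shock_pressure_gpa(rho0_g_cc, us_cm_per_us, up_cm_per_us):
    return rho0_g_cc * us_cm_per_us * up_cm_per_us * 100.0  # Mbar -> GPa

rho0 = 0.05   # g/cc, argon gas near the top of the 200-500 psi range
us = 0.2      # cm/us, initial shock velocity from the regime above
up = 0.15     # cm/us, assumed (hypothetical) particle velocity
print(f"shock pressure ~ {shock_pressure_gpa(rho0, us, up):.3f} GPa")
```

Because Us, up, and temperature were all measured, each (Us, up) pair pins down one point on the principal Hugoniot, which is what allows the SESAME EOS to be constrained.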

  19. Environmental probabilistic quantitative assessment methodologies

    USGS Publications Warehouse

    Crovelli, R.A.

    1995-01-01

    In this paper, four petroleum resource assessment methodologies are presented as possible pollution assessment methodologies, even though petroleum as a resource is desirable, whereas pollution is undesirable. A methodology is defined in this paper to consist of a probability model and a probabilistic method, where the method is used to solve the model. The following four basic types of probability models are considered: 1) direct assessment, 2) accumulation size, 3) volumetric yield, and 4) reservoir engineering. Three of the four petroleum resource assessment methodologies were written as microcomputer systems, viz. TRIAGG for direct assessment, APRAS for accumulation size, and FASPU for reservoir engineering. A fourth microcomputer system, termed PROBDIST, supports the three assessment systems. The three assessment systems have different probability models but the same type of probabilistic method. The advantages of the analytic method are computational speed and flexibility, making it ideal for a microcomputer. -from Author

  20. Synaptic Computation Underlying Probabilistic Inference

    PubMed Central

    Soltani, Alireza; Wang, Xiao-Jing

    2010-01-01

    In this paper we propose that synapses may be the workhorse of neuronal computations that underlie probabilistic reasoning. We built a neural circuit model for probabilistic inference when information provided by different sensory cues needs to be integrated, and the predictive powers of individual cues about an outcome are deduced through experience. We found that bounded synapses naturally compute, through reward-dependent plasticity, the posterior probability that a choice alternative is correct given that a cue is presented. Furthermore, a decision circuit endowed with such synapses makes choices based on the summated log posterior odds and performs near-optimal cue combination. The model is validated by reproducing salient observations of, and provides insights into, a monkey experiment using a categorization task. Our model thus suggests a biophysical instantiation of the Bayesian decision rule, while predicting important deviations from it similar to 'base-rate neglect' observed in human studies when alternatives have unequal priors. PMID:20010823
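    The core intuition of bounded, reward-gated synapses estimating a posterior can be sketched in a few lines. The learning rate and cue statistics below are illustrative stand-ins, not the paper's parameters:

```python
import random

random.seed(1)

# Sketch: with reward-gated potentiation and depression at equal rates, the
# fraction of synapses in the potentiated state for a cue relaxes toward the
# probability that the cue predicts reward (illustrative parameters).

q = 0.05          # plasticity rate (potentiation = depression)
p_reward = 0.75   # true probability that this cue is rewarded
c = 0.5           # fraction of synapses in the potentiated state

for _ in range(5000):
    if random.random() < p_reward:
        c += q * (1.0 - c)   # rewarded trial: potentiate some weak synapses
    else:
        c -= q * c           # unrewarded trial: depress some strong synapses

# At equilibrium E[dc] = 0 gives q*p*(1-c) = q*(1-p)*c, i.e. c -> p_reward:
# the synaptic strength tracks the reward probability given the cue.
print(f"synaptic strength: {c:.2f} (reward probability: {p_reward})")
```

Because c is bounded in [0, 1] and the update is multiplicative toward each bound, the strength fluctuates around the reward probability rather than growing without limit, which is what lets a downstream circuit read it as a probability.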

  1. Probabilistic methods for rotordynamics analysis

    NASA Technical Reports Server (NTRS)

    Wu, Y.-T.; Torng, T. Y.; Millwater, H. R.; Fossum, A. F.; Rheinfurth, M. H.

    1991-01-01

    This paper summarizes the development of the methods and a computer program to compute the probability of instability of dynamic systems that can be represented by a system of second-order ordinary linear differential equations. Two instability criteria based upon the eigenvalues or Routh-Hurwitz test functions are investigated. Computational methods based on a fast probability integration concept and an efficient adaptive importance sampling method are proposed to perform efficient probabilistic analysis. A numerical example is provided to demonstrate the methods.
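    The probability-of-instability idea can be illustrated with plain Monte Carlo sampling. Note the paper itself uses fast probability integration and adaptive importance sampling for efficiency; the distributions below are hypothetical:

```python
import random

random.seed(42)

# Plain Monte Carlo sketch of estimating a probability of instability.
# For a single mode m*x'' + c*x' + k*x = 0 with m, k > 0, the Routh-Hurwitz
# test reduces to requiring positive effective damping c; cross-coupling
# forces (common in rotordynamics) can drive c negative. The distributions
# below are illustrative, not from the paper.

def effective_damping():
    direct = random.gauss(2.0, 0.5)          # uncertain direct damping
    cross_coupled = random.gauss(1.2, 0.4)   # uncertain destabilizing term
    return direct - cross_coupled            # mode is unstable if negative

n = 100_000
p_unstable = sum(effective_damping() < 0.0 for _ in range(n)) / n
print(f"estimated probability of instability: {p_unstable:.4f}")
```

For small failure probabilities this brute-force estimator needs many samples, which is exactly why the paper turns to fast probability integration and adaptive importance sampling.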

  2. Probabilistic Simulation for Nanocomposite Characterization

    NASA Technical Reports Server (NTRS)

    Chamis, Christos C.; Coroneos, Rula M.

    2007-01-01

    A unique probabilistic theory is described to predict the properties of nanocomposites. The simulation is based on composite micromechanics with progressive substructuring down to a nanoscale slice of a nanofiber where all the governing equations are formulated. These equations have been programmed in a computer code. That computer code is used to simulate uniaxial strength properties of a mononanofiber laminate. The results are presented graphically and discussed with respect to their practical significance. These results show smooth distributions.

  3. Applications of Probabilistic Risk Assessment

    SciTech Connect

    Burns, K.J.; Chapman, J.R.; Follen, S.M.; O'Regan, P.J.

    1991-05-01

    This report provides a summary of potential and actual applications of Probabilistic Risk Assessment (PRA) technology and insights. Individual applications are derived from the experiences of a number of US nuclear utilities. This report identifies numerous applications of PRA techniques beyond those typically associated with PRAs. In addition, believing that the future use of PRA techniques should not be limited to those of the past, areas of plant operations, maintenance, and financial resource allocation are discussed. 9 refs., 3 tabs.

  4. Molecular scale shock response: electronic absorption spectroscopy of laser shocked explosives

    NASA Astrophysics Data System (ADS)

    McGrane, Shawn; Whitley, Von; Moore, David; Bolme, Cindy; Eakins, Daniel

    2009-06-01

    Single shot spectroscopies are being employed to answer questions fundamental to shock initiation of explosives. The goals are to: 1) determine the extent to which electronic excitations are, or are not, involved in shock induced reactions, 2) test the multiphonon up-pumping hypothesis in explosives, and 3) provide data on the initial evolution of temperature and chemistry following the shock loading of explosives on scales amenable to comparison to molecular dynamics simulations. The data presented in this talk are focused on answering the first question. Recent experimental results measuring the time history of ultraviolet/visible absorption spectroscopy of laser shocked explosive thin films and single crystals will be discussed.

  5. Multiple shocks

    NASA Astrophysics Data System (ADS)

    Shenker, Stephen H.; Stanford, Douglas

    2014-12-01

    Using gauge/gravity duality, we explore a class of states of two CFTs with a large degree of entanglement, but with very weak local two-sided correlation. These states are constructed by perturbing the thermofield double state with thermal-scale operators that are local at different times. Acting on the dual black hole geometry, these perturbations create an intersecting network of shock waves, supporting a very long wormhole. Chaotic CFT dynamics and the associated fast scrambling time play an essential role in determining the qualitative features of the resulting geometries.

  6. The interaction between human initiation factor eIF3 subunit c and heat-shock protein 90: a necessary factor for translation mediated by the hepatitis C virus internal ribosome entry site.

    PubMed

    Ujino, Saneyuki; Nishitsuji, Hironori; Sugiyama, Ryuichi; Suzuki, Hitoshi; Hishiki, Takayuki; Sugiyama, Kazuo; Shimotohno, Kunitada; Takaku, Hiroshi

    2012-01-01

    Heat-shock protein 90 (Hsp90) is a molecular chaperone that plays a key role in the conformational maturation of various transcription factors and protein kinases in signal transduction. The hepatitis C virus (HCV) internal ribosome entry site (IRES) RNA drives translation by directly recruiting the 40S ribosomal subunits that bind to eukaryotic initiation factor 3 (eIF3). Our data indicate that Hsp90 binds indirectly to eIF3 subunit c by interacting with it through the HCV IRES RNA, and the functional consequence of this Hsp90-eIF3c-HCV-IRES RNA interaction is the prevention of ubiquitination and the proteasome-dependent degradation of eIF3c. Hsp90 activity interference by Hsp90 inhibitors appears to be the result of the dissociation of eIF3c from Hsp90 in the presence of HCV IRES RNA and the resultant induction of the degradation of the free forms of eIF3c. Moreover, the interaction between Hsp90 and eIF3c is dependent on HCV IRES RNA binding. Furthermore, we demonstrate, by knockdown of eIF3c, that the silencing of eIF3c results in inhibitory effects on translation of HCV-derived RNA but does not affect cap-dependent translation. These results indicate that the interaction between Hsp90 and eIF3c may play an important role in HCV IRES-mediated translation.

  7. Weak-shock theory for spherical shock waves

    SciTech Connect

    Curtis, W.D.; Rosenkilde, C.E.; Yee, K.S.

    1982-03-01

We develop weak-shock theory in a form that allows output from a hydrodynamic code (e.g., KOVEC) to be used as either an initial or a boundary condition, so that the wave evolution can be followed to much greater distances than the codes themselves can attain.

  8. Shock Prevention

    NASA Technical Reports Server (NTRS)

    1978-01-01

The electrician pictured is installing a General Electric Ground Fault Interrupter (GFI), a device which provides protection against electrical shock in the home or in industrial facilities. Shocks due to defective wiring in home appliances or other electrical equipment can cause severe burns, even death. As a result, the National Electrical Code now requires GFIs in all newly constructed homes. This particular type of GFI employs a sensing element which derives from technology acquired in space projects by SCI Systems, Inc., Huntsville, Alabama, producer of sensors for GE and other manufacturers of GFI equipment. The sensor is based on the company's experience in developing miniaturized circuitry for space telemetry and other spacecraft electrical systems; this experience enabled SCI to package interrupter circuitry in the extremely limited space available and to produce sensory devices at practicable cost. The tiny sensor measures the strength of the electrical current and detects current differentials that indicate a fault in the functioning of an electrical system. The sensing element then triggers a signal to a disconnect mechanism in the GFI, which cuts off the current in the faulty circuit.

  9. Probabilistic Aeroelastic Analysis of Turbomachinery Components

    NASA Technical Reports Server (NTRS)

    Reddy, T. S. R.; Mital, S. K.; Stefko, G. L.

    2004-01-01

A probabilistic approach is described for aeroelastic analysis of turbomachinery blade rows. Blade rows with subsonic flow and blade rows with supersonic flow with a subsonic leading edge are considered. To demonstrate the probabilistic approach, the flutter frequency, damping, and forced response of a blade row representing a compressor geometry are considered. The analysis accounts for uncertainties in structural and aerodynamic design variables. The results are presented in the form of probability density functions (PDFs) and sensitivity factors. For the subsonic-flow cascade, comparisons are also made among different probabilistic distributions, probabilistic methods, and Monte Carlo simulation. The probabilistic approach provides a more realistic and systematic way to assess the effect of uncertainties in design variables on aeroelastic instabilities and response.
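The workflow the abstract describes (sample uncertain design variables, form a PDF of the aeroelastic response, extract sensitivity factors) can be sketched with a toy Monte Carlo model. The response surface and all input distributions below are illustrative assumptions standing in for an actual aeroelastic solver, not the paper's method.

```python
import numpy as np

# Monte Carlo sketch of a probabilistic aeroelastic assessment.
# All distributions and the response surface are illustrative assumptions.
rng = np.random.default_rng(0)
N = 100_000

E   = rng.normal(1.0, 0.05, N)   # normalized blade stiffness (assumed scatter)
rho = rng.normal(1.0, 0.03, N)   # normalized air density
m   = rng.normal(1.0, 0.04, N)   # normalized blade mass

# Toy response: aerodynamic damping rises with stiffness, falls with density.
damping = 0.05 * E / np.sqrt(m) - 0.02 * rho

# PDF of the response (histogram estimate) and simple sensitivity factors
# (correlation of each uncertain input with the response).
pdf, edges = np.histogram(damping, bins=100, density=True)
for name, x in [("E", E), ("rho", rho), ("m", m)]:
    print(f"sensitivity to {name}: {np.corrcoef(x, damping)[0, 1]:+.2f}")

# Probability of an aeroelastic instability (negative damping) in this toy model.
print("P(damping < 0) =", np.mean(damping < 0))
```

With enough samples, the histogram approximates the response PDF and the correlations rank the inputs by influence, which is the kind of output the abstract refers to.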

  10. Shock waves and nucleosynthesis in type II supernovae

    NASA Technical Reports Server (NTRS)

    Aufderheide, M. B.; Baron, E.; Thielemann, F.-K.

    1991-01-01

    In the study of nucleosynthesis in type II SN, shock waves are initiated artificially, since collapse calculations do not, as yet, give self-consistent shock waves strong enough to produce the SN explosion. The two initiation methods currently used by light-curve modelers are studied, with a focus on the peak temperatures and the nucleosynthetic yields in each method. The various parameters involved in artificially initiating a shock wave and the effects of varying these parameters are discussed.

  11. Stability of undercompressive shock profiles

    NASA Astrophysics Data System (ADS)

    Howard, Peter; Zumbrun, Kevin

Using a simplified pointwise iteration scheme, we establish nonlinear phase-asymptotic orbital stability of large-amplitude Lax, undercompressive, overcompressive, and mixed under-overcompressive type shock profiles of strictly parabolic systems of conservation laws with respect to initial perturbations u₀ satisfying |u₀(x)| ⩽ E₀(1+|x|)^{-3/2}, E₀ sufficiently small, under the necessary conditions of spectral and hyperbolic stability together with transversality of the connecting profile. This completes the program initiated by Zumbrun and Howard in [K. Zumbrun, P. Howard, Pointwise semigroup methods and stability of viscous shock waves, Indiana Univ. Math. J. 47 (4) (1998) 741-871], extending to the general undercompressive case results obtained for Lax and overcompressive shock profiles in [A. Szepessy, Z. Xin, Nonlinear stability of viscous shock waves, Arch. Ration. Mech. Anal. 122 (1993) 53-103; T.-P. Liu, Pointwise convergence to shock waves for viscous conservation laws, Comm. Pure Appl. Math. 50 (11) (1997) 1113-1182; K. Zumbrun, P. Howard, Pointwise semigroup methods and stability of viscous shock waves, Indiana Univ. Math. J. 47 (4) (1998) 741-871; K. Zumbrun, Refined wave-tracking and nonlinear stability of viscous Lax shocks, Methods Appl. Anal. 7 (2000) 747-768; M.-R. Raoofi, L^p-asymptotic behavior of perturbed viscous shock profiles, thesis, Indiana Univ., 2004; C. Mascia, K. Zumbrun, Pointwise Green's function bounds and stability of relaxation shocks, Indiana Univ. Math. J. 51 (4) (2002) 773-904; C. Mascia, K. Zumbrun, Stability of small-amplitude shock profiles of symmetric hyperbolic-parabolic systems, Comm. Pure Appl. Math. 57 (7) (2004) 841-876; C. Mascia, K. Zumbrun, Pointwise Green's function bounds for shock profiles with degenerate viscosity, Arch. Ration. Mech. Anal. 169 (3) (2003) 177-263; C. Mascia, K. Zumbrun, Stability of large-amplitude shock profiles of hyperbolic-parabolic systems, Arch. Ration. Mech. Anal. 172 (1) (2004) 93-131; C. Mascia, K. Zumbrun

  12. Vascular Endothelium and Hypovolemic Shock.

    PubMed

    Gulati, Anil

    2016-01-01

Endothelium is a site of metabolic activity and has a major reservoir of multipotent stem cells. It plays a vital role in vascular physiological, pathophysiological and reparative processes. Endothelial functions are significantly altered following hypovolemic shock, due to ischemia of the endothelial cells and to reperfusion during resuscitation with fluids. Activation of endothelial cells leads to the release of vasoactive substances (nitric oxide, endothelin, platelet activating factor, prostacyclin, mitochondrial N-formyl peptide), mediators of inflammation (tumor necrosis factor α, interleukins, interferons) and thrombosis. Endothelial cell apoptosis is induced following hypovolemic shock by deprivation of the oxygen required by endothelial cell mitochondria; this lack of oxygen initiates an increase in mitochondrial reactive oxygen species (ROS) and the release of apoptogenic proteins. The glycocalyx structure of the endothelium is compromised, impairing the protective endothelial barrier and resulting in increased permeability and leakage of fluids into the tissue, causing edema. Growth factors such as angiopoietins and vascular endothelial growth factors also contribute to the pathophysiology of hypovolemic shock. The endothelium is extremely active, with numerous functions; understanding these functions will provide novel targets for designing therapeutic agents for the acute management of hypovolemic shock. Hypovolemic shock also occurs in conditions such as dengue shock syndrome and Ebola hemorrhagic fever; defining the role of the endothelium in the pathophysiology of these conditions will provide greater insight into the functions of endothelial cells in vascular regulation.

  13. SHOCK-EXCITED OSCILLATOR

    DOEpatents

    Creveling, R.

    1957-12-17

A shock-excited quartz crystal oscillator is described. The circuit was specifically designed for application in micro-time measuring work to provide an oscillator which immediately goes into oscillation upon receipt of a trigger pulse and abruptly ceases oscillation when a second pulse is received. To achieve the instant action, the crystal has a prestressing voltage applied across it. A monostable multivibrator receives the on and off trigger pulses and discharges a pulse through the crystal to initiate or terminate oscillation instantly.

  14. Shock compression dynamics under a microscope

    NASA Astrophysics Data System (ADS)

    Dlott, Dana D.

    2017-01-01

    Our laboratory has developed a tabletop laser miniflyer launcher used for a wide variety of studies in the physical and chemical sciences. The flyers, typically 0.7 mm in diameter, can be used to shock microgram quantities of interesting materials. Frequently 100 shock experiments per day are performed. A microscope objective transmits the photon Doppler velocimeter (PDV) flyer plate diagnostic and various laser beams, and collects signals from the shocked materials that can be transmitted to video cameras, spectrographs, streak cameras, etc. In this paper I describe the flyer plate apparatus and then discuss three recent efforts: (1) Shock dissipation in nanoporous media; (2) Probing micropressures in shocked microstructured media; and (3) Shock initiation of nanotechnology reactive materials.

  15. Motion of the heliospheric termination shock - A gas dynamic model

    NASA Technical Reports Server (NTRS)

    Barnes, Aaron

    1993-01-01

    A simple quantitative model is presented for the heliospheric termination shock's anticipated movement in response to upstream solar wind condition variations, under the assumption that the termination shock is initially a strong gasdynamic shock that is at rest relative to the sun, and that there is a discontinuous increase or decrease in the dynamical pressure upstream of the shock. The model suggests that the termination shock is constantly in motion, and that the mean position of the shock lies near the mean equilibrium position which corresponds to the balance between the mean solar wind dynamical pressure and the mean interstellar pressure.
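The pressure-balance argument behind such a model can be illustrated numerically: the solar wind's dynamical pressure falls off as r⁻², so the equilibrium shock distance follows from equating it to the interstellar pressure. The numerical inputs below are assumed, order-of-magnitude values for illustration, not the paper's.

```python
import math

# Pressure balance for the termination shock: solar-wind dynamic pressure
# falls off as r**-2, so p_1au * (1 AU / r)**2 = p_ism at equilibrium.
p_1au = 1.3e-8   # solar-wind dynamic pressure at 1 AU (dyn/cm^2, assumed)
p_ism = 1.5e-12  # total interstellar pressure (dyn/cm^2, assumed)

r_eq = math.sqrt(p_1au / p_ism)  # equilibrium shock distance in AU
print(f"equilibrium shock distance: {r_eq:.0f} AU")

# A step change in upstream pressure shifts the equilibrium as sqrt(p),
# setting the shock in motion toward its new equilibrium position:
for factor in (0.5, 2.0):
    print(f"upstream pressure x{factor}: new equilibrium at {r_eq * math.sqrt(factor):.0f} AU")
```

Because the upstream pressure fluctuates continuously, the equilibrium position keeps moving, which is the sense in which the abstract says the shock is constantly in motion about its mean position.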

  16. Acceleration of heavy ions by perpendicular collisionless shocks: Impact of the shock front nonstationarity

    NASA Astrophysics Data System (ADS)

Yang, Z. W.; Lembège, B.; Lu, Q. M.

    2011-10-01

Both hybrid/full particle simulations and recent experimental results have clearly evidenced that the front of a supercritical quasi-perpendicular shock can be nonstationary. One mechanism proposed to account for this nonstationarity is the self-reformation of the front itself, due to the accumulation of reflected ions. Important consequences of this nonstationarity are that not only the amplitude but also the spatial scales of the field components at the shock front (ramp and foot) vary strongly within each cycle of the self-reformation. On the other hand, several studies have been made of the acceleration and heating of heavy ions, but most have been restricted to a stationary shock profile. Herein, one-dimensional test particle simulations based on shock profile fields produced in PIC simulations are performed in order to investigate the impact of the shock front nonstationarity on heavy ion acceleration (He, O, Fe). Reflection and acceleration mechanisms of heavy ions (with different initial thermal velocities and different charge-mass ratios) interacting with a nonstationary shock front (self-reformation) are analyzed in detail. Present preliminary results show that: (1) the heavy ions undergo both shock drift acceleration (SDA) and shock surfing acceleration (SSA); (2) the fraction of reflected heavy ions increases with initial thermal velocity, charge-mass ratio, and decreasing shock front width at both stationary shocks (equivalent to fixed shock cases) and nonstationary shocks (equivalent to continuously time-evolving shock cases); (3) the shock front nonstationarity (time-evolving shock case) facilitates the reflection of heavy ions; (4) a striking feature is the formation of an injected monoenergetic heavy-ion population that persists in the shock front spectrum for different initial thermal velocities and ion species. The impact of the shock front nonstationarity on the heavy-ion spectra within the shock

  17. Probabilistic Evaluation of Blade Impact Damage

    NASA Technical Reports Server (NTRS)

    Chamis, C. C.; Abumeri, G. H.

    2003-01-01

The response to high velocity impact of a composite blade is probabilistically evaluated. The evaluation is focused on quantifying probabilistically the effects of uncertainties (scatter) in the variables that describe the impact, the blade make-up (geometry and material), the blade response (displacements, strains, stresses, frequencies), the blade residual strength after impact, and the blade damage tolerance. The results of the probabilistic evaluations are presented in terms of probability cumulative distribution functions and probabilistic sensitivities. Results show that the blade has relatively low damage tolerance at 0.999 probability of structural failure and substantial damage tolerance at 0.01 probability.

  18. Expansion shock waves in regularized shallow-water theory

    NASA Astrophysics Data System (ADS)

    El, Gennady A.; Hoefer, Mark A.; Shearer, Michael

    2016-05-01

    We identify a new type of shock wave by constructing a stationary expansion shock solution of a class of regularized shallow-water equations that include the Benjamin-Bona-Mahony and Boussinesq equations. An expansion shock exhibits divergent characteristics, thereby contravening the classical Lax entropy condition. The persistence of the expansion shock in initial value problems is analysed and justified using matched asymptotic expansions and numerical simulations. The expansion shock's existence is traced to the presence of a non-local dispersive term in the governing equation. We establish the algebraic decay of the shock as it is gradually eroded by a simple wave on either side. More generally, we observe a robustness of the expansion shock in the presence of weak dissipation and in simulations of asymmetric initial conditions where a train of solitary waves is shed from one side of the shock.

  19. Role of molecular dynamics on descriptions of shock front processes

    NASA Astrophysics Data System (ADS)

    Karo, A. M.

    1981-07-01

A computational approach based on classical molecular dynamics is used to form a realistic picture of shock-induced processes occurring at the shock front and resulting from the detailed, violent motion associated with shock propagation on an atomic scale. Prototype studies of phase transitions are discussed. The interaction of the shock front with defects, surfaces, voids, and inclusions, and across grain boundaries, is summarized. The critical question of how mechanical energy imparted to a condensed material by shock loading is converted to the activation energy required to overcome an initial energy barrier in an initiation process is addressed.

  20. Hemodynamic Analysis of Pediatric Septic Shock and Cardiogenic Shock Using Transpulmonary Thermodilution

    PubMed Central

    Lee, En-Pei; Hsia, Shao-Hsuan; Lin, Jainn-Jim; Chan, Oi-Wa; Lee, Jung; Lin, Chia-Ying

    2017-01-01

    Septic shock and cardiogenic shock are the two most common types of shock in children admitted to pediatric intensive care units (PICUs). The aim of the study was to investigate which hemodynamic variables were associated with mortality in children with shock. We retrospectively analyzed 50 children with shock (37 septic shock cases and 13 cardiogenic shock cases) in the PICU and monitored their hemodynamics using transpulmonary thermodilution from 2003 to 2016. Clinical factors were analyzed between the patients with septic and cardiogenic shock. In addition, hemodynamic parameters associated with mortality were analyzed. The 28-day mortality was significantly higher in the septic group than in the cardiogenic group (p = 0.016). Initially, the parameters of cardiac output and cardiac contractility were higher in the septic group (p < 0.05) while the parameters of preload and afterload were all higher in the cardiogenic group (p < 0.05). Cardiac index was significantly lower in the nonsurvivors of cardiogenic shock at the time of initial admission and after the first 24 hours (both p < 0.05), while systemic vascular resistance index (SVRI) was significantly lower in the nonsurvivors of septic shock (p < 0.001). Therefore, during the first 24 hours after intensive care, SVRI and cardiac index are the most important hemodynamic parameters associated with mortality.

  1. Probabilistic analysis of cascade failure dynamics in complex network

    NASA Astrophysics Data System (ADS)

    Zhang, Ding-Xue; Zhao, Dan; Guan, Zhi-Hong; Wu, Yonghong; Chi, Ming; Zheng, Gui-Lin

    2016-11-01

The impact of the initial load and tolerance parameter distributions on cascade failure is investigated. Using mean-field theory, a probabilistic cascade failure model is established. Based on the model, the damage caused by an attack of a given size can be predicted, and the critical attack size is derived from the condition that the cascade terminates, which ensures no collapse. For a network with randomly distributed tolerance parameters, the critical attack size is larger than in the case of a constant tolerance parameter. Comparing three typical distributions, simulation results indicate that a network whose initial load and tolerance parameter both follow a Weibull distribution performs better than the others.
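The qualitative behavior described above (damage as a function of attack size under Weibull-distributed loads and tolerances) can be explored with a mean-field toy simulation. The distributions, parameters, and even-redistribution rule below are illustrative assumptions, not the paper's model.

```python
import numpy as np

def cascade_size(n, attack_frac, rng):
    """Mean-field toy cascade: nodes carry Weibull-distributed initial loads
    and tolerance parameters; load from failed nodes is shed evenly onto
    the surviving nodes, which fail when load exceeds capacity."""
    load = rng.weibull(2.0, n)                 # initial loads (assumed Weibull)
    alpha = 0.3 * rng.weibull(2.0, n)          # tolerance parameters (assumed)
    capacity = (1.0 + alpha) * load
    failed = np.zeros(n, dtype=bool)
    failed[: int(attack_frac * n)] = True      # initial attack
    while True:
        shed = load[failed].sum() / max((~failed).sum(), 1)
        newly = ~failed & (load + shed > capacity)
        if not newly.any():                    # cascade has terminated
            break
        failed |= newly
    return failed.mean()                       # final failed fraction

rng = np.random.default_rng(1)
for f in (0.01, 0.05, 0.20):
    print(f"attack size {f:.2f} -> failed fraction {cascade_size(2000, f, rng):.2f}")
```

Sweeping the attack fraction locates the critical attack size numerically: below it the cascade dies out, above it the failed fraction jumps toward collapse.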

  2. Coherent Raman Studies of Shocked Liquids

    NASA Astrophysics Data System (ADS)

    McGrane, Shawn; Brown, Kathryn; Dang, Nhan; Bolme, Cynthia; Moore, David

    2013-06-01

    Transient vibrational spectroscopies offer the potential to directly observe time dependent shock induced chemical reaction kinetics. We report recent experiments that couple a hybrid picosecond/femtosecond coherent anti-Stokes Raman spectroscopy (CARS) diagnostic with our tabletop ultrafast laser driven shock platform. Initial results on liquids shocked to 20 GPa suggest that sub-picosecond dephasing at high pressure and temperature may limit the application of this nonresonant background free version of CARS. Initial results using interferometric CARS to increase sensitivity and overcome these limitations will be presented.

  3. Use of Probabilistic Risk Assessment in Shuttle Decision Making Process

    NASA Technical Reports Server (NTRS)

    Boyer, Roger L.; Hamlin, Teri, L.

    2011-01-01

This slide presentation reviews the use of Probabilistic Risk Assessment (PRA) to assist in decision making for Shuttle design and operation. PRA is a comprehensive, structured, and disciplined approach to identifying and analyzing risk in complex systems and/or processes that seeks answers to three basic questions: What can go wrong? What is the likelihood of it occurring? And what are the consequences if it does? The purpose of the Shuttle PRA (SPRA) is to provide a useful risk management tool for the Space Shuttle Program (SSP) to identify strengths and possible weaknesses in the Shuttle design and operation. SPRA was initially developed to support upgrade decisions, but has evolved into a tool that supports Flight Readiness Reviews (FRR) and near real-time flight decisions. Examples of the use of PRA for the Shuttle are reviewed.

  4. Development of Probabilistic Structural Analysis Integrated with Manufacturing Processes

    NASA Technical Reports Server (NTRS)

    Pai, Shantaram S.; Nagpal, Vinod K.

    2007-01-01

    An effort has been initiated to integrate manufacturing process simulations with probabilistic structural analyses in order to capture the important impacts of manufacturing uncertainties on component stress levels and life. Two physics-based manufacturing process models (one for powdered metal forging and the other for annular deformation resistance welding) have been linked to the NESSUS structural analysis code. This paper describes the methodology developed to perform this integration including several examples. Although this effort is still underway, particularly for full integration of a probabilistic analysis, the progress to date has been encouraging and a software interface that implements the methodology has been developed. The purpose of this paper is to report this preliminary development.

  5. Mixed deterministic and probabilistic networks.

    PubMed

    Mateescu, Robert; Dechter, Rina

    2008-11-01

    The paper introduces mixed networks, a new graphical model framework for expressing and reasoning with probabilistic and deterministic information. The motivation to develop mixed networks stems from the desire to fully exploit the deterministic information (constraints) that is often present in graphical models. Several concepts and algorithms specific to belief networks and constraint networks are combined, achieving computational efficiency, semantic coherence and user-interface convenience. We define the semantics and graphical representation of mixed networks, and discuss the two main types of algorithms for processing them: inference-based and search-based. A preliminary experimental evaluation shows the benefits of the new model.

  6. Mixed deterministic and probabilistic networks

    PubMed Central

    Dechter, Rina

    2010-01-01

    The paper introduces mixed networks, a new graphical model framework for expressing and reasoning with probabilistic and deterministic information. The motivation to develop mixed networks stems from the desire to fully exploit the deterministic information (constraints) that is often present in graphical models. Several concepts and algorithms specific to belief networks and constraint networks are combined, achieving computational efficiency, semantic coherence and user-interface convenience. We define the semantics and graphical representation of mixed networks, and discuss the two main types of algorithms for processing them: inference-based and search-based. A preliminary experimental evaluation shows the benefits of the new model. PMID:20981243

  7. Probabilistic risk assessment: Number 219

    SciTech Connect

    Bari, R.A.

    1985-11-13

    This report describes a methodology for analyzing the safety of nuclear power plants. A historical overview of plants in the US is provided, and past, present, and future nuclear safety and risk assessment are discussed. A primer on nuclear power plants is provided with a discussion of pressurized water reactors (PWR) and boiling water reactors (BWR) and their operation and containment. Probabilistic Risk Assessment (PRA), utilizing both event-tree and fault-tree analysis, is discussed as a tool in reactor safety, decision making, and communications. (FI)
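The event-tree/fault-tree machinery mentioned above reduces, for independent basic events, to simple probability arithmetic. The sketch below shows that arithmetic in miniature; all probabilities and the pump/power structure are illustrative placeholders, not values from any actual plant PRA.

```python
# Fault-tree / event-tree arithmetic in miniature.
# All probabilities are illustrative placeholders.

def p_or(*ps):
    """Probability that at least one independent event occurs (OR gate)."""
    q = 1.0
    for p in ps:
        q *= 1.0 - p
    return 1.0 - q

def p_and(*ps):
    """Probability that all independent events occur (AND gate)."""
    q = 1.0
    for p in ps:
        q *= p
    return q

# Fault tree: cooling fails if both redundant pumps fail OR power is lost.
pump_a, pump_b, power = 1e-2, 1e-2, 1e-3
cooling_fails = p_or(p_and(pump_a, pump_b), power)

# Event tree: sequence frequency = initiating-event frequency x branch probability.
init_freq = 1e-2                       # initiating events per year (assumed)
core_damage_freq = init_freq * cooling_fails
print(f"P(cooling fails)      = {cooling_fails:.3e}")
print(f"core damage frequency = {core_damage_freq:.3e} / yr")
```

Real PRAs chain many such gates and sequences, but each cut set is evaluated with exactly this kind of AND/OR combination.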

  8. Probabilistic approach to EMP assessment

    SciTech Connect

    Bevensee, R.M.; Cabayan, H.S.; Deadrick, F.J.; Martin, L.C.; Mensing, R.W.

    1980-09-01

    The development of nuclear EMP hardness requirements must account for uncertainties in the environment, in interaction and coupling, and in the susceptibility of subsystems and components. Typical uncertainties of the last two kinds are briefly summarized, and an assessment methodology is outlined, based on a probabilistic approach that encompasses the basic concepts of reliability. It is suggested that statements of survivability be made compatible with system reliability. Validation of the approach taken for simple antenna/circuit systems is performed with experiments and calculations that involve a Transient Electromagnetic Range, numerical antenna modeling, separate device failure data, and a failure analysis computer program.

  9. Probabilistic Simulation for Nanocomposite Fracture

    NASA Technical Reports Server (NTRS)

    Chamis, Christos C.

    2010-01-01

    A unique probabilistic theory is described to predict the uniaxial strengths and fracture properties of nanocomposites. The simulation is based on composite micromechanics with progressive substructuring down to a nanoscale slice of a nanofiber where all the governing equations are formulated. These equations have been programmed in a computer code. That computer code is used to simulate uniaxial strengths and fracture of a nanofiber laminate. The results are presented graphically and discussed with respect to their practical significance. These results show smooth distributions from low probability to high.
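The idea of substructuring down to slices whose properties scatter can be illustrated with a generic weakest-link Weibull model. This is not the paper's micromechanics code, and the shape and scale parameters below are assumed purely for illustration.

```python
import numpy as np

# Generic weakest-link sketch of probabilistic fiber strength.
# Shape/scale parameters are assumed, not from the paper.
rng = np.random.default_rng(0)
n_fibers, n_slices = 50_000, 100
shape, scale = 5.0, 1.0

# Each nanofiber is substructured into slices with scattered strengths;
# the fiber fails at its weakest slice.
slice_strength = scale * rng.weibull(shape, (n_fibers, n_slices))
fiber_strength = slice_strength.min(axis=1)

# The resulting strength distribution is smooth from low to high probability.
for q in (0.01, 0.50, 0.99):
    print(f"{q:.0%} of fibers fail below {np.quantile(fiber_strength, q):.3f} (normalized)")
```

The minimum of the slice strengths inherits a Weibull distribution with a reduced scale, which is why the simulated strength distributions come out smooth across the probability range.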

  10. Toxic Shock Syndrome

    MedlinePlus

... burn to avoid getting a staph infection. Toxic shock syndrome treatment Because toxic shock syndrome gets worse quickly, you may be seriously ... toxic shock syndrome in a wound? ...

  11. Probabilistic Cue Combination: Less Is More

    ERIC Educational Resources Information Center

    Yurovsky, Daniel; Boyer, Ty W.; Smith, Linda B.; Yu, Chen

    2013-01-01

    Learning about the structure of the world requires learning probabilistic relationships: rules in which cues do not predict outcomes with certainty. However, in some cases, the ability to track probabilistic relationships is a handicap, leading adults to perform non-normatively in prediction tasks. For example, in the "dilution effect,"…

  12. Error Discounting in Probabilistic Category Learning

    ERIC Educational Resources Information Center

    Craig, Stewart; Lewandowsky, Stephan; Little, Daniel R.

    2011-01-01

    The assumption in some current theories of probabilistic categorization is that people gradually attenuate their learning in response to unavoidable error. However, existing evidence for this error discounting is sparse and open to alternative interpretations. We report 2 probabilistic-categorization experiments in which we investigated error…

  13. Probabilistic analysis of tsunami hazards

    USGS Publications Warehouse

    Geist, E.L.; Parsons, T.

    2006-01-01

    Determining the likelihood of a disaster is a key component of any comprehensive hazard assessment. This is particularly true for tsunamis, even though most tsunami hazard assessments have in the past relied on scenario or deterministic type models. We discuss probabilistic tsunami hazard analysis (PTHA) from the standpoint of integrating computational methods with empirical analysis of past tsunami runup. PTHA is derived from probabilistic seismic hazard analysis (PSHA), with the main difference being that PTHA must account for far-field sources. The computational methods rely on numerical tsunami propagation models rather than empirical attenuation relationships as in PSHA in determining ground motions. Because a number of source parameters affect local tsunami runup height, PTHA can become complex and computationally intensive. Empirical analysis can function in one of two ways, depending on the length and completeness of the tsunami catalog. For site-specific studies where there is sufficient tsunami runup data available, hazard curves can primarily be derived from empirical analysis, with computational methods used to highlight deficiencies in the tsunami catalog. For region-wide analyses and sites where there are little to no tsunami data, a computationally based method such as Monte Carlo simulation is the primary method to establish tsunami hazards. Two case studies that describe how computational and empirical methods can be integrated are presented for Acapulco, Mexico (site-specific) and the U.S. Pacific Northwest coastline (region-wide analysis).
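The Monte Carlo route described for data-poor sites can be sketched end to end: sample source magnitudes, map each to a site runup with scatter, and form the annual exceedance-rate (hazard) curve. Every ingredient below (source rate, magnitude law, runup scaling, lognormal scatter) is an illustrative assumption, not taken from the paper.

```python
import numpy as np

# Monte Carlo sketch of a PTHA hazard curve; all model inputs are assumed.
rng = np.random.default_rng(0)
n_events = 200_000
rate_m7 = 0.02   # assumed annual rate of M >= 7 tsunamigenic earthquakes

# Sample magnitudes from a Gutenberg-Richter law truncated to [7, 9].
b = 1.0
u = rng.random(n_events)
mags = 7.0 - np.log10(1.0 - u * (1.0 - 10.0 ** (-2.0 * b))) / b

# Toy local-runup model: log-linear in magnitude with lognormal scatter.
median_runup = 0.5 * 10.0 ** (0.7 * (mags - 7.0))          # meters
runup = median_runup * rng.lognormal(0.0, 0.5, n_events)

# Hazard curve: annual rate of exceeding each runup threshold at the site.
thresholds = [0.5, 1.0, 2.0, 5.0]
rates = [rate_m7 * np.mean(runup > h) for h in thresholds]
for h, r in zip(thresholds, rates):
    print(f"runup > {h:>4.1f} m : {r:.2e} / yr")
```

In a real PTHA the toy runup model is replaced by numerical tsunami propagation from each sampled source, but the exceedance-rate bookkeeping is the same.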

  14. Software for Probabilistic Risk Reduction

    NASA Technical Reports Server (NTRS)

    Hensley, Scott; Michel, Thierry; Madsen, Soren; Chapin, Elaine; Rodriguez, Ernesto

    2004-01-01

    A computer program implements a methodology, denoted probabilistic risk reduction, that is intended to aid in planning the development of complex software and/or hardware systems. This methodology integrates two complementary prior methodologies: (1) that of probabilistic risk assessment and (2) a risk-based planning methodology, implemented in a prior computer program known as Defect Detection and Prevention (DDP), in which multiple requirements and the beneficial effects of risk-mitigation actions are taken into account. The present methodology and the software are able to accommodate both process knowledge (notably of the efficacy of development practices) and product knowledge (notably of the logical structure of a system, the development of which one seeks to plan). Estimates of the costs and benefits of a planned development can be derived. Functional and non-functional aspects of software can be taken into account, and trades made among them. It becomes possible to optimize the planning process in the sense that it becomes possible to select the best suite of process steps and design choices to maximize the expectation of success while remaining within budget.

  15. RECOLLIMATION SHOCKS IN MAGNETIZED RELATIVISTIC JETS

    SciTech Connect

    Mizuno, Yosuke; Rezzolla, Luciano; Gómez, Jose L.; Nishikawa, Ken-Ichi; Meli, Athina; Hardee, Philip E.

    2015-08-10

    We have performed two-dimensional special-relativistic magnetohydrodynamic simulations of non-equilibrium over-pressured relativistic jets in cylindrical geometry. Multiple stationary recollimation shock and rarefaction structures are produced along the jet by the nonlinear interaction of shocks and rarefaction waves excited at the interface between the jet and the surrounding ambient medium. Although initially the jet is kinematically dominated, we have considered axial, toroidal, and helical magnetic fields to investigate the effects of different magnetic-field topologies and strengths on the recollimation structures. We find that an axial field introduces a larger effective gas pressure and leads to stronger recollimation shocks and rarefactions, resulting in larger flow variations. The jet boost grows quadratically with the initial magnetic field. On the other hand, a toroidal field leads to weaker recollimation shocks and rarefactions, significantly modifying the jet structure after the first recollimation rarefaction and shock. The jet boost decreases systematically. For a helical field, instead, the behavior depends on the magnetic pitch, with a phenomenology that ranges between the one seen for axial and toroidal magnetic fields, respectively. In general, however, a helical magnetic field yields a more complex shock and rarefaction substructure close to the inlet that significantly modifies the jet structure. The differences in shock structure resulting from different field configurations and strengths may have observable consequences for disturbances propagating through a stationary recollimation shock.

  16. Recollimation Shocks in Magnetized Relativistic Jets

    NASA Astrophysics Data System (ADS)

    Mizuno, Yosuke; Gómez, Jose L.; Nishikawa, Ken-Ichi; Meli, Athina; Hardee, Philip E.; Rezzolla, Luciano

    2015-08-01

    We have performed two-dimensional special-relativistic magnetohydrodynamic simulations of non-equilibrium over-pressured relativistic jets in cylindrical geometry. Multiple stationary recollimation shock and rarefaction structures are produced along the jet by the nonlinear interaction of shocks and rarefaction waves excited at the interface between the jet and the surrounding ambient medium. Although initially the jet is kinematically dominated, we have considered axial, toroidal, and helical magnetic fields to investigate the effects of different magnetic-field topologies and strengths on the recollimation structures. We find that an axial field introduces a larger effective gas pressure and leads to stronger recollimation shocks and rarefactions, resulting in larger flow variations. The jet boost grows quadratically with the initial magnetic field. On the other hand, a toroidal field leads to weaker recollimation shocks and rarefactions, significantly modifying the jet structure after the first recollimation rarefaction and shock. The jet boost decreases systematically. For a helical field, instead, the behavior depends on the magnetic pitch, with a phenomenology that ranges between the one seen for axial and toroidal magnetic fields, respectively. In general, however, a helical magnetic field yields a more complex shock and rarefaction substructure close to the inlet that significantly modifies the jet structure. The differences in shock structure resulting from different field configurations and strengths may have observable consequences for disturbances propagating through a stationary recollimation shock.

  17. Shock experiments on maskelynite-bearing anorthosite

    NASA Technical Reports Server (NTRS)

    Lambert, P.; Grieve, R. A. F.

    1984-01-01

    A series of shock recovery experiments over 9.9-60.4 GPa has been carried out on naturally shocked anorthosite from the Mistastin impact structure in Labrador, consisting primarily of diaplectic plagioclase glass or maskelynite, An(50), and pyroxene. Petrographic observations of the experimental products indicate that the component minerals and diaplectic glasses generally retained their initial character throughout, the only exception being the increased fracturing in the 9.9 GPa shot. Reshocking at pressures higher than the initial shock tends to lower the refractive index of maskelynite. The increase in refractive index of maskelynite reshocked to pressures lower than the initial pressure is interpreted as due to shock densification of the diaplectic glass above the Hugoniot elastic limit and below the mixed-phase regime. The data suggest that the low-high-low density transition of maskelynite occurs about 8 GPa below that of the crystal of corresponding composition.

  18. Is probabilistic evidence a source of knowledge?

    PubMed

    Friedman, Ori; Turri, John

    2015-07-01

    We report a series of experiments examining whether people ascribe knowledge for true beliefs based on probabilistic evidence. Participants were less likely to ascribe knowledge for beliefs based on probabilistic evidence than for beliefs based on perceptual evidence (Experiments 1 and 2A) or testimony providing causal information (Experiment 2B). Denial of knowledge for beliefs based on probabilistic evidence did not arise because participants viewed such beliefs as unjustified, nor because such beliefs leave open the possibility of error. These findings rule out traditional philosophical accounts for why probabilistic evidence does not produce knowledge. The experiments instead suggest that people deny knowledge because they distrust drawing conclusions about an individual based on reasoning about the population to which it belongs, a tendency previously identified by "judgment and decision making" researchers. Consistent with this, participants were more willing to ascribe knowledge for beliefs based on probabilistic evidence that is specific to a particular case (Experiments 3A and 3B).

  19. Survival of carbon grains in shocks

    NASA Technical Reports Server (NTRS)

    Seab, C. Gregory

    1990-01-01

    Supernova shocks play a significant part in the life of an interstellar grain. In a typical 10⁹ year lifetime, a grain will be hit by an average of 10 shocks of 100 km s⁻¹ or greater velocity, and even more shocks of lower velocity. Evaluation of the results of this frequent shock processing is complicated by a number of uncertainties, but seems to give about 10 percent destruction of silicate grains and about half that for graphite grains. Because of the frequency of shocking, the mineralogy and sizes of the grain population are predominantly determined by shock processing effects, and not by the initial grain nucleation and growth environment. One consequence of the significant role played by interstellar shocks is that a certain fraction (up to 5 percent) of the carbon should be transformed into the diamond phase. Diamond transformation is observed in the laboratory at threshold shock pressures easily obtainable in grain-grain collisions in supernova shocks. Yields for transforming graphite, amorphous carbon, glassy carbon, and other nearly pure carbon solids into diamond are quite high. Impurities up to at least the 10 percent level (for oxygen) are tolerated in the process. The typical diamond size expected from shock transformation agrees well with the sizes observed by Lewis et al. in meteoritic material. Isotopic anomalies already contained in the grain are likely to be retained through the conversion process, while others may be implanted by the shock if the grain is close to the supernova. The meteoritic diamonds are likely to be the result of transformation of carbon grains in grain-grain collisions in supernova shock waves.

  20. Transient shocks beyond the heliopause

    DOE PAGES

    Fermo, R. L.; Pogorelov, N. V.; Burlaga, L. F.

    2015-09-30

    The heliopause is a rich, dynamic surface affected by the time-dependent solar wind. Stream interactions due to coronal mass ejections (CMEs), corotating interaction regions (CIRs), and other transient phenomena are known to merge, producing global merged interaction regions (GMIRs). Numerical simulations of the solar wind interaction with the local interstellar medium (LISM) show that GMIRs, as well as other time-dependent structures in the solar wind, may produce compression/rarefaction waves and shocks in the LISM behind the heliopause. These shocks may initiate wave activity observed by the Voyager spacecraft. The magnetometer onboard Voyager 1 indeed observed a few structures that may be interpreted as shocks. We present numerical simulations of such shocks in the year 2000, when both Voyager spacecraft were in the supersonic solar wind region, and in 2012, when Voyager 1 observed traveling shocks. In the former case, Voyager observations themselves provide time-dependent boundary conditions in the solar wind. In the latter case, we use OMNI data at 1 AU to analyze the plasma and magnetic field behavior after Voyager 1 crossed the heliospheric boundary. Numerical results are compared with spacecraft observations.

  1. Transient shocks beyond the heliopause

    SciTech Connect

    Fermo, R. L.; Pogorelov, N. V.; Burlaga, L. F.

    2015-09-30

    The heliopause is a rich, dynamic surface affected by the time-dependent solar wind. Stream interactions due to coronal mass ejections (CMEs), corotating interaction regions (CIRs), and other transient phenomena are known to merge, producing global merged interaction regions (GMIRs). Numerical simulations of the solar wind interaction with the local interstellar medium (LISM) show that GMIRs, as well as other time-dependent structures in the solar wind, may produce compression/rarefaction waves and shocks in the LISM behind the heliopause. These shocks may initiate wave activity observed by the Voyager spacecraft. The magnetometer onboard Voyager 1 indeed observed a few structures that may be interpreted as shocks. We present numerical simulations of such shocks in the year 2000, when both Voyager spacecraft were in the supersonic solar wind region, and in 2012, when Voyager 1 observed traveling shocks. In the former case, Voyager observations themselves provide time-dependent boundary conditions in the solar wind. In the latter case, we use OMNI data at 1 AU to analyze the plasma and magnetic field behavior after Voyager 1 crossed the heliospheric boundary. Numerical results are compared with spacecraft observations.

  2. Formulation of probabilistic models of protein structure in atomic detail using the reference ratio method.

    PubMed

    Valentin, Jan B; Andreetta, Christian; Boomsma, Wouter; Bottaro, Sandro; Ferkinghoff-Borg, Jesper; Frellsen, Jes; Mardia, Kanti V; Tian, Pengfei; Hamelryck, Thomas

    2014-02-01

    We propose a method to formulate probabilistic models of protein structure in atomic detail, for a given amino acid sequence, based on Bayesian principles, while retaining a close link to physics. We start from two previously developed probabilistic models of protein structure on a local length scale, which concern the dihedral angles in main chain and side chains, respectively. Conceptually, this constitutes a probabilistic and continuous alternative to the use of discrete fragment and rotamer libraries. The local model is combined with a nonlocal model that involves a small number of energy terms according to a physical force field, and some information on the overall secondary structure content. In this initial study we focus on the formulation of the joint model and the evaluation of the use of an energy vector as a descriptor of a protein's nonlocal structure; hence, we derive the parameters of the nonlocal model from the native structure without loss of generality. The local and nonlocal models are combined using the reference ratio method, which is a well-justified probabilistic construction. For evaluation, we use the resulting joint models to predict the structure of four proteins. The results indicate that the proposed method and the probabilistic models show considerable promise for probabilistic protein structure prediction and related applications.

  3. Application of probabilistic fracture mechanics to the PTS issue

    SciTech Connect

    Cheverton, R.D.; Ball, D.G.

    1985-01-01

    As a part of the NRC effort to obtain a resolution to the PWR PTS issue, a probabilistic approach has been applied that includes a probabilistic fracture-mechanics (PFM) analysis. The PFM analysis is performed with OCA-P, a computer code that performs thermal, stress and fracture-mechanics analyses and estimates the conditional probability of vessel failure, P(F/E), using Monte Carlo techniques. The stress intensity factor (K/sub I/) is calculated for two- and three-dimensional surface flaws using superposition techniques and influence coefficients. Importance-sampling techniques are used, as necessary, to limit to a reasonable value the number of vessels actually calculated. Analyses of three PWR plants indicate that (1) the critical initial flaw depth is very small (5 to 15 mm), (2) the benefit of warm prestressing and the role of crack arrest are transient dependent, (3) crack arrest does not occur for the dominant transients, and (4) the single largest uncertainty in the overall probabilistic analysis is the number of surface flaws per vessel. 30 refs., 6 figs., 4 tabs.
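The Monte Carlo estimate of the conditional failure probability P(F/E) described above can be sketched in a few lines. This is a minimal illustration, not the OCA-P model: the flaw-depth and toughness distributions and the failure criterion below are invented for the example.

```python
import random

def vessel_fails(flaw_depth_mm, k_ic):
    # Hypothetical failure criterion: the applied stress intensity K_I grows
    # with the square root of flaw depth; failure when it exceeds toughness.
    k_applied = 12.0 * flaw_depth_mm ** 0.5   # illustrative coefficient
    return k_applied > k_ic

def conditional_failure_probability(n_trials=20_000, seed=1):
    # P(F/E): fraction of sampled vessels that fail, given the transient.
    rng = random.Random(seed)
    failures = sum(
        vessel_fails(rng.expovariate(1 / 5.0),   # flaw depth, assumed mean 5 mm
                     rng.gauss(60.0, 10.0))      # toughness, assumed MPa*sqrt(m)
        for _ in range(n_trials)
    )
    return failures / n_trials
```

In the real analysis, importance sampling concentrates trials on the rare deep-flaw, low-toughness region; the plain sampling above illustrates only the basic estimator.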

  4. The properties of probabilistic simple regular sticker system

    NASA Astrophysics Data System (ADS)

    Selvarajoo, Mathuri; Fong, Wan Heng; Sarmin, Nor Haniza; Turaev, Sherzod

    2015-10-01

    A mathematical model for DNA computing using the recombination behavior of DNA molecules, known as a sticker system, was introduced in 1998. In a sticker system, the sticker operation is based on the Watson-Crick complementarity of DNA molecules. The computation of a sticker system starts from an incomplete double-stranded sequence. Then, by iterative sticking operations, a complete double-stranded sequence is obtained. It is known that sticker systems with finite sets of axioms and sticker rules (including the simple regular sticker system) generate only regular languages. Hence, different types of restrictions have been considered to increase the computational power of the languages generated by sticker systems. In this paper, we study the properties of probabilistic simple regular sticker systems. In this variant of the sticker system, probabilities are associated with the axioms, and the probability of a generated string is computed by multiplying the probabilities of all occurrences of the initial strings. The languages are selected according to probabilistic requirements. We prove that the probabilistic enhancement increases the computational power of simple regular sticker systems.
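The probability rule described above (multiply the probabilities of all axiom occurrences in a derivation, then keep only strings meeting a threshold) can be sketched as follows; the axiom names and the cutoff are hypothetical:

```python
from math import prod

def string_probability(axiom_probs, axioms_used):
    # Probability of a complete string = product of the probabilities of
    # every axiom occurrence used in its derivation.
    return prod(axiom_probs[a] for a in axioms_used)

def probabilistic_language(derivations, axiom_probs, cutoff):
    # Keep only the strings whose derivation probability meets the cutoff.
    return {s for s, used in derivations.items()
            if string_probability(axiom_probs, used) >= cutoff}
```

For example, with axiom probabilities {"A1": 0.5, "A2": 0.4}, a string derived from two uses of A1 has probability 0.25, while one derived from A1 and A2 has probability 0.2; a cutoff of 0.25 keeps only the first.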

  5. Shock-induced termination of reentrant cardiac arrhythmias: Comparing monophasic and biphasic shock protocols

    PubMed Central

    Bragard, Jean; Simic, Ana; Elorza, Jorge; Grigoriev, Roman O.; Cherry, Elizabeth M.; Gilmour, Robert F.; Otani, Niels F.; Fenton, Flavio H.

    2013-01-01

    In this article, we compare quantitatively the efficiency of three different protocols commonly used in commercial defibrillators. These are based on monophasic and both symmetric and asymmetric biphasic shocks. A numerical one-dimensional model of cardiac tissue using the bidomain formulation is used in order to test the different protocols. In particular, we performed a total of 4.8 × 10⁶ simulations by varying shock waveform, shock energy, initial conditions, and heterogeneity in internal electrical conductivity. Whenever the shock successfully removed the reentrant dynamics in the tissue, we classified the mechanism. The analysis of the numerical data shows that biphasic shocks are significantly more efficient (by about 25%) than the corresponding monophasic ones. We determine that the increase in efficiency of the biphasic shocks can be explained by the higher proportion of newly excited tissue through the mechanism of direct activation. PMID:24387558

  6. Shock-induced termination of reentrant cardiac arrhythmias: Comparing monophasic and biphasic shock protocols

    NASA Astrophysics Data System (ADS)

    Bragard, Jean; Simic, Ana; Elorza, Jorge; Grigoriev, Roman O.; Cherry, Elizabeth M.; Gilmour, Robert F.; Otani, Niels F.; Fenton, Flavio H.

    2013-12-01

    In this article, we compare quantitatively the efficiency of three different protocols commonly used in commercial defibrillators. These are based on monophasic and both symmetric and asymmetric biphasic shocks. A numerical one-dimensional model of cardiac tissue using the bidomain formulation is used in order to test the different protocols. In particular, we performed a total of 4.8 × 10⁶ simulations by varying shock waveform, shock energy, initial conditions, and heterogeneity in internal electrical conductivity. Whenever the shock successfully removed the reentrant dynamics in the tissue, we classified the mechanism. The analysis of the numerical data shows that biphasic shocks are significantly more efficient (by about 25%) than the corresponding monophasic ones. We determine that the increase in efficiency of the biphasic shocks can be explained by the higher proportion of newly excited tissue through the mechanism of direct activation.

  7. Shock-induced termination of reentrant cardiac arrhythmias: Comparing monophasic and biphasic shock protocols

    SciTech Connect

    Bragard, Jean; Simic, Ana; Elorza, Jorge; Grigoriev, Roman O.; Fenton, Flavio H.; Cherry, Elizabeth M.; Gilmour, Robert F.; Otani, Niels F.

    2013-12-15

    In this article, we compare quantitatively the efficiency of three different protocols commonly used in commercial defibrillators. These are based on monophasic and both symmetric and asymmetric biphasic shocks. A numerical one-dimensional model of cardiac tissue using the bidomain formulation is used in order to test the different protocols. In particular, we performed a total of 4.8 × 10⁶ simulations by varying shock waveform, shock energy, initial conditions, and heterogeneity in internal electrical conductivity. Whenever the shock successfully removed the reentrant dynamics in the tissue, we classified the mechanism. The analysis of the numerical data shows that biphasic shocks are significantly more efficient (by about 25%) than the corresponding monophasic ones. We determine that the increase in efficiency of the biphasic shocks can be explained by the higher proportion of newly excited tissue through the mechanism of direct activation.

  8. Endocrinology of shock.

    PubMed

    Woolf, P D

    1986-12-01

    The development of shock initiates a cascade of responses in an effort to reestablish homeostasis. Three of the most important hormonal and neurohumoral changes are the secretion of glucocorticoids, catecholamines, and vasopressin. Regulation of adrenal function is much more complex than originally thought. Hemorrhage is a potent stimulus for cortisol release, and both ACTH and ACTH-independent mechanisms have been described. The ACTH response to its releasing hormone, corticotropin releasing hormone (CRF), is itself amplified by vasopressin, which appears to have intrinsic CRF properties. Because ACTH is synthesized as part of a large precursor molecule (pro-opiomelanocortin) containing the amino acid sequences for several important proteins, stimulation of ACTH release has far-ranging effects, the specifics of which are just being clarified. Norepinephrine and epinephrine levels increase manyfold above baseline within minutes of the onset of hemorrhagic shock. Only patients experiencing cardiac arrest or the rare patient with a very active pheochromocytoma have higher concentrations. The levels reached are far in excess of those required to cause both cardiovascular and metabolic alterations. Because of the presence of the endogenous opiates leucine and methionine enkephalin in the neurosecretory granule, it is very likely that the enkephalins are coreleased with the catecholamines, modifying their cardiovascular effects and producing analgesia. Hypovolemia is also a potent stimulus for vasopressin secretion, which overrides hypotonicity, presenting a clinical picture quite compatible with the syndrome of inappropriate antidiuretic hormone secretion, from which it must be differentiated. Vasopressin also is released by pain, nausea, and hypoxia, all of which are likely to be present in the patient with shock.(ABSTRACT TRUNCATED AT 250 WORDS)

  9. Avoidance based on shock intensity reduction with no change in shock probability.

    PubMed

    Bersh, P J; Alloy, L B

    1978-11-01

    Rats were trained on a free-operant avoidance procedure in which shock intensity was controlled by interresponse time. Shocks were random at a density of about 10 shocks per minute. Shock probability was response independent. As long as interresponse times remained less than the limit in effect, any shocks received were at the lower of two intensities (0.75 mA). Whenever interresponse times exceeded the limit, any shocks received were at the higher intensity (1.6 mA). The initial limit of 15 seconds was decreased in 3-second steps to either 6 or 3 seconds. All animals lever pressed to avoid higher intensity shock. As the interresponse time limit was reduced, the response rate during the lower intensity shock and the proportion of brief interresponse times increased. Substantial warmup effects were evident, particularly at the shorter interresponse-time limits. Shock intensity reduction without change in shock probability was effective in the acquisition and maintenance of avoidance responding, as well as in differentiation of interresponse times. This research suggests limitations on the generality of a safety signal interpretation of avoidance conditioning.
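The contingency above can be expressed compactly: shocks arrive at random (about 10 per minute) independently of responding, and only their intensity depends on whether the preceding interresponse time exceeded the limit. A minimal simulation sketch, with session length and helper names as assumptions:

```python
import random

def session_shocks(response_times, session_s=600.0, irt_limit_s=6.0,
                   shock_rate_per_s=10 / 60, seed=0):
    # Shocks are Poisson-timed and response-independent; intensity is set by
    # the time elapsed since the most recent lever press at each shock.
    rng = random.Random(seed)
    t, intensities = 0.0, []
    while True:
        t += rng.expovariate(shock_rate_per_s)   # next random shock time
        if t > session_s:
            break
        last_response = max((r for r in response_times if r <= t), default=0.0)
        high = (t - last_response) > irt_limit_s
        intensities.append(1.6 if high else 0.75)   # mA, values from the study
    return intensities
```

With responses spaced more closely than the limit, every shock is delivered at the low intensity; with no responding, shocks after the limit elapses come at the high intensity.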

  10. Shock desensitizing of solid explosive

    SciTech Connect

    Davis, William C

    2010-01-01

    Solid explosive can be desensitized by a shock wave too weak to initiate it promptly, and desensitized explosive does not react although its chemical composition is almost unchanged. A strong second shock does not cause reaction until it overtakes the first shock. The first shock, if it is strong enough, accelerates very slowly at first, and then more rapidly as detonation approaches. These facts suggest that there are two competing reactions. One is the usual reaction, in which explosive goes to products with the release of energy; in the other, explosive goes to dead explosive with no chemical change and no energy release. The first reaction rate is very sensitive to the local state, and the second is only weakly so. At low pressure very little energy is released and the change to dead explosive dominates. At high pressure, quite the other way, most of the explosive goes to products. Numerous experiments in both the initiation and the full detonation regimes are discussed and compared in testing these ideas.

  11. Probabilistic Reasoning for Plan Robustness

    NASA Technical Reports Server (NTRS)

    Schaffer, Steve R.; Clement, Bradley J.; Chien, Steve A.

    2005-01-01

    A planning system must reason about the uncertainty of continuous variables in order to accurately project the possible system state over time. A method is devised for directly reasoning about the uncertainty in continuous activity duration and resource usage for planning problems. By representing random variables as parametric distributions, computing projected system state can be simplified in some cases. Common approximation and novel methods are compared for over-constrained and lightly constrained domains. The system compares a few common approximation methods for an iterative repair planner. Results show improvements in robustness over the conventional non-probabilistic representation by reducing the number of constraint violations witnessed by execution. The improvement is more significant for larger problems and problems with higher resource subscription levels but diminishes as the system is allowed to accept higher risk levels.

  12. Probabilistic cloning of equidistant states

    SciTech Connect

    Jimenez, O.; Roa, Luis; Delgado, A.

    2010-08-15

    We study the probabilistic cloning of equidistant states. These states are such that the inner product between them is a complex constant or its conjugate. Thereby, it is possible to study their cloning in a simple way. In particular, we are interested in the behavior of the cloning probability as a function of the phase of the overlap among the involved states. We show that for certain families of equidistant states Duan and Guo's cloning machine leads to cloning probabilities lower than the optimal unambiguous discrimination probability of equidistant states. We propose an alternative cloning machine whose cloning probability is higher than or equal to the optimal unambiguous discrimination probability for any family of equidistant states. Both machines achieve the same probability for equidistant states whose inner product is a positive real number.

  13. Probabilistic direct counterfactual quantum communication

    NASA Astrophysics Data System (ADS)

    Zhang, Sheng

    2017-02-01

    It is striking that the quantum Zeno effect can be used to launch direct counterfactual communication between two spatially separated parties, Alice and Bob. So far, existing protocols of this type only provide a deterministic counterfactual communication service. However, this counterfactuality comes at a price. First, the transmission time is much longer than that of a classical transmission. Second, the chained-cycle structure makes these protocols more sensitive to channel noise. Here, we extend the idea of counterfactual communication and present a probabilistic-counterfactual quantum communication protocol, which is proved to have advantages over the deterministic ones. Moreover, the presented protocol could evolve to a deterministic one solely by adjusting the parameters of the beam splitters. Project supported by the National Natural Science Foundation of China (Grant No. 61300203).

  14. Probabilistic Fatigue Damage Program (FATIG)

    NASA Technical Reports Server (NTRS)

    Michalopoulos, Constantine

    2012-01-01

    FATIG computes fatigue damage/fatigue life using the stress rms (root mean square) value, the total number of cycles, and S-N curve parameters. The damage is computed by the following methods: (a) traditional method using Miner's rule with stress cycles determined from a Rayleigh distribution up to 3*sigma; and (b) classical fatigue damage formula involving the Gamma function, which is derived from the integral version of Miner's rule. The integration is carried out over all stress amplitudes. This software solves the problem of probabilistic fatigue damage using the integral form of the Palmgren-Miner rule. The software computes fatigue life using an approach involving all stress amplitudes, up to N*sigma, as specified by the user. It can be used in the design of structural components subjected to random dynamic loading, or by any stress analyst with minimal training for fatigue life estimates of structural components.
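Method (b), the closed-form damage expression with the Gamma function, follows from integrating Miner's rule over a Rayleigh distribution of stress amplitudes. A sketch, assuming the common S-N form N(S) = A·S^(−b) and treating the rms value as the Rayleigh scale parameter (this is the textbook narrow-band result, not necessarily FATIG's exact parameterization):

```python
from math import gamma, sqrt

def fatigue_damage_rayleigh(sigma_rms, n_cycles, A, b):
    # Expected Miner damage for Rayleigh-distributed stress amplitudes and
    # an S-N curve N(S) = A * S**(-b):
    #   E[D] = (n_cycles / A) * (sqrt(2) * sigma_rms)**b * Gamma(1 + b/2)
    return (n_cycles / A) * (sqrt(2.0) * sigma_rms) ** b * gamma(1.0 + b / 2.0)
```

For b = 2 the expression reduces to (n_cycles / A)·2·sigma_rms², since Gamma(2) = 1, which gives a quick sanity check.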

  15. Development of probabilistic multimedia multipathway computer codes.

    SciTech Connect

    Yu, C.; LePoire, D.; Gnanapragasam, E.; Arnish, J.; Kamboj, S.; Biwer, B. M.; Cheng, J.-J.; Zielen, A. J.; Chen, S. Y.; Mo, T.; Abu-Eid, R.; Thaggard, M.; Sallo, A., III.; Peterson, H., Jr.; Williams, W. A.; Environmental Assessment; NRC; EM

    2002-01-01

    The deterministic multimedia dose/risk assessment codes RESRAD and RESRAD-BUILD have been widely used for many years for evaluation of sites contaminated with residual radioactive materials. The RESRAD code applies to the cleanup of sites (soils) and the RESRAD-BUILD code applies to the cleanup of buildings and structures. This work describes the procedure used to enhance the deterministic RESRAD and RESRAD-BUILD codes for probabilistic dose analysis. A six-step procedure was used in developing default parameter distributions and the probabilistic analysis modules. These six steps include (1) listing and categorizing parameters; (2) ranking parameters; (3) developing parameter distributions; (4) testing parameter distributions for probabilistic analysis; (5) developing probabilistic software modules; and (6) testing probabilistic modules and integrated codes. The procedures used can be applied to the development of other multimedia probabilistic codes. The probabilistic versions of RESRAD and RESRAD-BUILD codes provide tools for studying the uncertainty in dose assessment caused by uncertain input parameters. The parameter distribution data collected in this work can also be applied to other multimedia assessment tasks and multimedia computer codes.
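Step 5 of the procedure, the probabilistic software module, essentially wraps the deterministic code in parameter sampling and collects the spread of outputs. A minimal sketch, with a hypothetical toy dose model standing in for RESRAD:

```python
import random

def probabilistic_dose(dose_model, param_dists, n_samples=1_000, seed=0):
    # Draw each uncertain input from its distribution, run the deterministic
    # model once per sample, and return the resulting dose distribution.
    rng = random.Random(seed)
    return [dose_model(**{name: draw(rng) for name, draw in param_dists.items()})
            for _ in range(n_samples)]
```

Usage with invented inputs: `probabilistic_dose(lambda conc, intake: conc * intake * 1e-3, {"conc": lambda r: r.uniform(1.0, 3.0), "intake": lambda r: r.gauss(500.0, 50.0)})` yields a sample of doses whose percentiles summarize the uncertainty.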

  16. Probabilistic Structural Evaluation of Uncertainties in Radiator Sandwich Panel Design

    NASA Technical Reports Server (NTRS)

    Kuguoglu, Latife; Ludwiczak, Damian

    2006-01-01

    The Jupiter Icy Moons Orbiter (JIMO) Space System is part of NASA's Prometheus Program. As part of the JIMO engineering team at NASA Glenn Research Center, the structural design of the JIMO Heat Rejection Subsystem (HRS) is evaluated. An initial goal of this study was to perform sensitivity analyses to determine the relative importance of the input variables on the structural responses of the radiator panel. The desire was to let the sensitivity analysis identify the important parameters. The probabilistic analysis methods illustrated here support this objective. The probabilistic structural performance evaluation of an HRS radiator sandwich panel was performed. The radiator panel structural performance was assessed in the presence of uncertainties in the loading, fabrication process variables, and material properties. The stress and displacement contours of the deterministic structural analysis at mean probability were computed and the results presented. This is followed by a probabilistic evaluation to determine the effect of the primitive variables on the radiator panel structural performance. Based on uncertainties in material properties, structural geometry, and loading, the results of the displacement and stress analysis are used as an input file for the probabilistic analysis of the panel. The sensitivity of the structural responses, such as maximum displacement, maximum tensile and compressive stresses of the facesheet in the x and y directions, and maximum von Mises stresses of the tube, to the loading and design variables is determined under the boundary condition where all edges of the radiator panel are pinned. Based on this study, design-critical material and geometric parameters of the considered sandwich panel are identified.

  17. Development of Probabilistic Methods to Assess Meteotsunami Hazards

    NASA Astrophysics Data System (ADS)

    Geist, E. L.; Ten Brink, U. S.

    2014-12-01

    A probabilistic method to assess the hazard from meteotsunamis is developed from both probabilistic tsunami hazard analysis (PTHA) and probabilistic storm-surge forecasting. Meteotsunamis are unusual sea level events, generated when the speed of an atmospheric pressure or wind disturbance is comparable to the phase speed of long waves in the ocean. A general aggregation equation, similar to that used in PTHA, incorporates different meteotsunami sources. A historical record of 116 pressure disturbances recorded between 2000 and 2013 by the U.S. Automated Surface Observing Stations (ASOS) along the U.S. East Coast is used to establish a continuous analytic distribution of each source parameter as well as the overall Poisson rate of occurrence. Initially, atmospheric parameters are considered independently such that the joint probability distribution is given by the product of each marginal distribution. The probabilistic equations are implemented using a Monte Carlo scheme, where a synthetic catalog of pressure disturbances is compiled by sampling the parameter distributions. For each entry in the catalog, ocean wave amplitudes are computed using a finite-difference hydrodynamic model that solves for the linearized long-wave equations. Aggregation of the results from the Monte Carlo scheme results in a meteotsunami hazard curve that plots the annualized rate of exceedance with respect to maximum event amplitude for a particular location along the coast. Results from using 20 synthetic catalogs of 116 events each, resampled from the parent parameter distributions, yield mean and quantile hazard curves. An example is presented for four Mid-Atlantic sites using ASOS data in which only atmospheric pressure disturbances from squall lines and derechos are considered. Results indicate that site-to-site variations among meteotsunami hazard curves are related to the geometry and width of the adjacent continental shelf. 
The new hazard analysis of meteotsunamis is important for
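The Monte Carlo scheme described above can be sketched as follows. The marginal distributions, the amplitude model, and the annual rate (roughly 116 events over 14 years, i.e. about 8.3 per year) are illustrative stand-ins for the paper's fitted ASOS distributions and finite-difference long-wave solver:

```python
import random

def meteotsunami_hazard_curve(amp_model, thresholds, rate_per_yr=8.3,
                              n_events=100_000, seed=0):
    # Sample a synthetic catalog of pressure disturbances from assumed
    # marginal distributions, map each to a coastal wave amplitude, and
    # aggregate into annualized exceedance rates (the hazard curve).
    rng = random.Random(seed)
    amps = [amp_model(rng.gauss(22.0, 6.0),       # disturbance speed, m/s
                      rng.expovariate(1 / 1.5))   # pressure jump, hPa
            for _ in range(n_events)]
    return {thr: rate_per_yr * sum(a > thr for a in amps) / n_events
            for thr in thresholds}
```

A placeholder `amp_model` such as `lambda speed, dp: 0.05 * dp * speed / 22.0` stands in for the hydrodynamic model; by construction the resulting curve is non-increasing in the amplitude threshold.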

  18. Reclassifying the spectrum of septic patients using lactate: severe sepsis, cryptic shock, vasoplegic shock and dysoxic shock

    PubMed Central

    Ranzani, Otavio Tavares; Monteiro, Mariana Barbosa; Ferreira, Elaine Maria; Santos, Sergio Ricardo; Machado, Flavia Ribeiro; Noritomi, Danilo Teixeira

    2013-01-01

    Objective The current definition of severe sepsis and septic shock includes a heterogeneous profile of patients. Although the prognostic value of hyperlactatemia is well established, hyperlactatemia is observed in patients with and without shock. The present study aimed to compare the prognosis of septic patients by stratifying them according to two factors: hyperlactatemia and persistent hypotension. Methods The present study is a secondary analysis of an observational study conducted in ten hospitals in Brazil (Rede Amil - SP). Septic patients with initial lactate measurements in the first 6 hours of diagnosis were included and divided into 4 groups according to hyperlactatemia (lactate >4mmol/L) and persistent hypotension: (1) severe sepsis (without both criteria); (2) cryptic shock (hyperlactatemia without persistent hypotension); (3) vasoplegic shock (persistent hypotension without hyperlactatemia); and (4) dysoxic shock (both criteria). Results In total, 1,948 patients were analyzed, and the sepsis group represented 52% of the patients, followed by 28% with vasoplegic shock, 12% with dysoxic shock and 8% with cryptic shock. Survival at 28 days differed among the groups (p<0.001). Survival was highest among the severe sepsis group (69%, p<0.001 versus others), similar in the cryptic and vasoplegic shock groups (53%, p=0.39), and lowest in the dysoxic shock group (38%, p<0.001 versus others). In the adjusted analysis, the survival at 28 days remained different among the groups (p<0.001) and the dysoxic shock group exhibited the highest hazard ratio (HR=2.99, 95%CI 2.21-4.05). Conclusion The definition of sepsis includes four different profiles if we consider the presence of hyperlactatemia. Further studies are needed to better characterize septic patients, to understand the etiology and to design adequate targeted treatments. PMID:24553507
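The four-group stratification is a simple decision rule on the two criteria; a direct transcription (the 4 mmol/L lactate threshold is from the abstract):

```python
def classify_septic_patient(lactate_mmol_l, persistent_hypotension):
    # Stratify by hyperlactatemia (lactate > 4 mmol/L) and persistent
    # hypotension, yielding the study's four groups.
    hyperlactatemia = lactate_mmol_l > 4.0
    if hyperlactatemia and persistent_hypotension:
        return "dysoxic shock"
    if hyperlactatemia:
        return "cryptic shock"
    if persistent_hypotension:
        return "vasoplegic shock"
    return "severe sepsis"
```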

  19. Shock waves and shock tubes; Proceedings of the Fifteenth International Symposium, Berkeley, CA, July 28-August 2, 1985

    NASA Technical Reports Server (NTRS)

    Bershader, D. (Editor); Hanson, R. (Editor)

    1986-01-01

    A detailed survey is presented of shock tube experiments, theoretical developments, and applications being carried out worldwide. The discussions explore shock tube physics and the related chemical, physical and biological science and technology. Extensive attention is devoted to shock wave phenomena in dusty gases and other multiphase and heterogeneous systems, including chemically reactive mixtures. Consideration is given to techniques for measuring, visualizing and theoretically modeling flowfield, shock wave and rarefaction wave characteristics. Numerical modeling is explored in terms of the application of computational fluid dynamics techniques to describing flowfields in shock tubes. Shock interactions and propagation in solids, fluids, gases, and mixed media are investigated, along with the behavior of shocks in condensed matter. Finally, chemical reactions that are initiated as the result of passage of a shock wave are discussed, together with methods of controlling the evolution of laminar separated flows at concave corners on advanced reentry vehicles.

  20. A Probabilistic Asteroid Impact Risk Model

    NASA Technical Reports Server (NTRS)

    Mathias, Donovan L.; Wheeler, Lorien F.; Dotson, Jessie L.

    2016-01-01

    Asteroid threat assessment requires the quantification of both the impact likelihood and resulting consequence across the range of possible events. This paper presents a probabilistic asteroid impact risk (PAIR) assessment model developed for this purpose. The model incorporates published impact frequency rates with state-of-the-art consequence assessment tools, applied within a Monte Carlo framework that generates sets of impact scenarios from uncertain parameter distributions. Explicit treatment of atmospheric entry is included to produce energy deposition rates that account for the effects of thermal ablation and object fragmentation. These energy deposition rates are used to model the resulting ground damage, and affected populations are computed for the sampled impact locations. The results for each scenario are aggregated into a distribution of potential outcomes that reflect the range of uncertain impact parameters, population densities, and strike probabilities. As an illustration of the utility of the PAIR model, the results are used to address the question of what minimum size asteroid constitutes a threat to the population. To answer this question, complete distributions of results are combined with a hypothetical risk tolerance posture to provide the minimum size, given sets of initial assumptions. Model outputs demonstrate how such questions can be answered and provide a means for interpreting the effect that input assumptions and uncertainty can have on final risk-based decisions. Model results can be used to prioritize investments to gain knowledge in critical areas or, conversely, to identify areas where additional data has little effect on the metrics of interest.
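    The Monte Carlo structure the abstract describes — sample uncertain impactor parameters, map each scenario to a consequence, aggregate into a distribution of outcomes — can be sketched generically. The parameter distributions and the cube-root damage proxy below are illustrative stand-ins, not the PAIR model's actual inputs:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Sample uncertain impactor parameters (illustrative distributions).
diameter_m = rng.lognormal(mean=3.5, sigma=0.8, size=n)   # object size
velocity_kms = rng.uniform(11.0, 30.0, size=n)            # entry speed
density = rng.uniform(1500.0, 3500.0, size=n)             # bulk density, kg/m^3

# Map each sampled scenario to an impact energy in megatons of TNT.
mass = density * (np.pi / 6.0) * diameter_m ** 3
energy_mt = 0.5 * mass * (velocity_kms * 1e3) ** 2 / 4.184e15

# Toy consequence model: ground-damage radius scaling with energy^(1/3).
damage_radius_km = 2.0 * energy_mt ** (1.0 / 3.0)

# Aggregate scenario results into a distribution of potential outcomes.
print(np.percentile(damage_radius_km, [50, 95]))
```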

  1. [Shock in obstetrics. Institutional experience].

    PubMed

    Bonfante Ramírez, E; Ahued Ahued, R; García-Benítez, C Q; Bolaños Ancona, R; Callejos, T; Juárez García, L

    1997-04-01

    Shock is one of the most difficult problems an obstetrician can face, and hemorrhage is its main cause. A descriptive, retrospective study was conducted at the Instituto Nacional de Perinatología from January 1992 to May 1996, including all patients admitted to the intensive care unit with a diagnosis of shock. Ninety cases were found: 82 were hypovolemic and 8 were septic. The average age was 32.2 years, with gestational ages ranging from 6.2 to 41.4 weeks. Seventy-one patients were previously healthy; hypertension was associated with pregnancy in 9 cases, infertility in 2, myomatosis in 2, and diabetes in 2 more; another 5 cases presented other pathologies. The most frequent cause of hypovolemic shock was placenta accreta (40 cases), followed by uterine tone alterations in 37 patients, ectopic pregnancy in 7, uterine rupture or perforation in 4, and vaginal or cervical lacerations in 2. Estimated blood loss varied from 2,200 cc to 6,500 cc, and the minimal arterial pressure registered during shock ranged from 40/20 mmHg to 90/60 mmHg. Initial medical management consisted of volume replacement with crystalloids, packed red cells, and plasma expanders in 73 patients (81.1%); the remaining patients also received colloids, platelets, and cryoprecipitates. A total of 76 patients required surgical intervention consisting of total abdominal hysterectomy; in 5 of these cases, ligation of the hypogastric vessels was also needed. Salpingectomy was performed in 5 patients, and repair of rupture or perforation in 3. The average surgery time was 2 hours and 33 minutes. Observed complications were 7 cases of vaginal cuff abscess, consumption coagulopathy in 2, 1 surgical bladder injury, 1 intestinal occlusion, and 11 vesicovaginal fistulas. The average hospital stay was 5 days. The most frequent type of shock seen by obstetricians is the hypovolemic type.

  2. Probabilistic population projections with migration uncertainty.

    PubMed

    Azose, Jonathan J; Ševčíková, Hana; Raftery, Adrian E

    2016-06-07

    We produce probabilistic projections of population for all countries based on probabilistic projections of fertility, mortality, and migration. We compare our projections to those from the United Nations' Probabilistic Population Projections, which uses similar methods for fertility and mortality but deterministic migration projections. We find that uncertainty in migration projection is a substantial contributor to uncertainty in population projections for many countries. Prediction intervals for the populations of Northern America and Europe are over 70% wider, whereas prediction intervals for the populations of Africa, Asia, and the world as a whole are nearly unchanged. Out-of-sample validation shows that the model is reasonably well calibrated.

  3. Probabilistic machine learning and artificial intelligence.

    PubMed

    Ghahramani, Zoubin

    2015-05-28

    How can a machine learn from experience? Probabilistic modelling provides a framework for understanding what learning is, and has therefore emerged as one of the principal theoretical and practical approaches for designing machines that learn from data acquired through experience. The probabilistic framework, which describes how to represent and manipulate uncertainty about models and predictions, has a central role in scientific data analysis, machine learning, robotics, cognitive science and artificial intelligence. This Review provides an introduction to this framework, and discusses some of the state-of-the-art advances in the field, namely, probabilistic programming, Bayesian optimization, data compression and automatic model discovery.

  4. Probabilistic machine learning and artificial intelligence

    NASA Astrophysics Data System (ADS)

    Ghahramani, Zoubin

    2015-05-01

    How can a machine learn from experience? Probabilistic modelling provides a framework for understanding what learning is, and has therefore emerged as one of the principal theoretical and practical approaches for designing machines that learn from data acquired through experience. The probabilistic framework, which describes how to represent and manipulate uncertainty about models and predictions, has a central role in scientific data analysis, machine learning, robotics, cognitive science and artificial intelligence. This Review provides an introduction to this framework, and discusses some of the state-of-the-art advances in the field, namely, probabilistic programming, Bayesian optimization, data compression and automatic model discovery.

  5. Characterization of Shocked Beryllium

    SciTech Connect

    Cady, Carl M; Adams, Chris D; Hull, Lawrence M; Gray III, George T; Prime, Michael B; Addessio, Francis L; Wynn, Thomas A; Brown, Eric N

    2012-08-24

    accelerate the material. Preliminary analysis of the results appears to indicate that, if fractured by the initial shock loading, the S200F Be remains sufficiently intact to support a shear stress following partial release and subsequent shock re-loading of the material. Additional 'arrested' drive shots were designed and tested to minimize the reflected tensile pulse in the sample. These tests were done to both validate the model and to put large shock induced compressive loads into the beryllium sample.

  6. Shock Wave Technology and Application: An Update☆

    PubMed Central

    Rassweiler, Jens J.; Knoll, Thomas; Köhrmann, Kai-Uwe; McAteer, James A.; Lingeman, James E.; Cleveland, Robin O.; Bailey, Michael R.; Chaussy, Christian

    2012-01-01

    Context The introduction of new lithotripters has increased problems associated with shock wave application. Recent studies concerning mechanisms of stone disintegration, shock wave focusing, coupling, and application have appeared that may address some of these problems. Objective To present a consensus with respect to the physics and techniques used by urologists, physicists, and representatives of European lithotripter companies. Evidence acquisition We reviewed recent literature (PubMed, Embase, Medline) that focused on the physics of shock waves, theories of stone disintegration, and studies on optimising shock wave application. In addition, we used relevant information from a consensus meeting of the German Society of Shock Wave Lithotripsy. Evidence synthesis Besides established mechanisms describing initial fragmentation (tear and shear forces, spallation, cavitation, quasi-static squeezing), the model of dynamic squeezing offers new insight into stone comminution. Manufacturers have modified sources to either enlarge the focal zone or offer different focal sizes. The efficacy of extracorporeal shock wave lithotripsy (ESWL) can be increased by lowering the pulse rate to 60–80 shock waves/min and by ramping the shock wave energy. With the water cushion, the quality of coupling has become a critical factor that depends on the amount, viscosity, and temperature of the gel. Fluoroscopy time can be reduced by automated localisation or the use of optical and acoustic tracking systems. There is a trend towards larger focal zones and lower shock wave pressures. Conclusions New theories for stone disintegration favour the use of shock wave sources with larger focal zones. Use of slower pulse rates, ramping strategies, and adequate coupling of the shock wave head can significantly increase the efficacy and safety of ESWL. PMID:21354696

  7. Shock sensing dual mode warhead

    SciTech Connect

    Shamblen, M.; Walchak, M.T.; Richmond, L.

    1980-12-31

    A shock sensing dual mode warhead is provided for use against both soft and hard targets and is capable of sensing which type of target has been struck. The warhead comprises a casing made of a ductile material containing an explosive charge and a fuze assembly. The ductile warhead casing will mushroom upon striking a hard target while still confining the explosive; proper ductility and confinement are necessary for fuze shock sensing. The fuze assembly contains a pair of parallel firing trains, one initiated only by the dynamic pressure caused by high-impact deceleration and one initiated by low-impact deceleration. The firing train actuated by high-impact deceleration senses dynamic pressure transmitted, during deformation of the warhead, through the explosive filler, which is employed as a fuzing signature. The firing train actuated by low-impact deceleration contains a pyrotechnic delay to allow penetration of soft targets.

  8. Vibrational energy transfer in shocked molecular crystals.

    PubMed

    Hooper, Joe

    2010-01-07

    We consider the process of establishing thermal equilibrium behind an ideal shock front in molecular crystals and its possible role in initiating chemical reaction at high shock pressures. A new theory of equilibration via multiphonon energy transfer is developed to treat the scattering of shock-induced phonons into internal molecular vibrations. Simple analytic forms are derived for the change in this energy transfer at different Hugoniot end states following shock compression. The total time required for thermal equilibration is found to be at least an order of magnitude faster than proposed in previous work; in materials representative of explosive molecular crystals, equilibration is predicted to occur within a few picoseconds following the passage of an ideal shock wave. Recent molecular dynamics calculations are consistent with these time scales. The possibility of defect-induced temperature localization due purely to nonequilibrium phonon processes is studied by means of a simple model of the strain field around an inhomogeneity. The specific case of immobile straight dislocations is studied, and a region of enhanced energy transfer on the order of 5 nm is found. Due to the rapid establishment of thermal equilibrium, these regions are unrelated to the shock sensitivity of a material but may allow temperature localization at high shock pressures. Results also suggest that if any decomposition due to molecular collisions is occurring within the shock front itself, these collisions are not enhanced by any nonequilibrium thermal state.

  9. Shock-induced arrhythmogenesis in the myocardium

    NASA Astrophysics Data System (ADS)

    Trayanova, Natalia; Eason, James

    2002-09-01

    The focus of this article is the investigation of the electrical behavior of the normal myocardium following the delivery of high-strength defibrillation shocks. To achieve its goal, the study employs a complex three-dimensional defibrillation model of a slice of the canine heart characterized by realistic geometry and fiber architecture. Defibrillation shocks of various strengths and electrode configurations are delivered to the model preparation in which a sustained ventricular tachycardia is induced. Instead of analyzing the post-shock electrical events as progressions of transmembrane potential maps, the study examines the evolution of the post-shock phase singularities (PSs), which represent the organizing centers of reentry. The simulation results demonstrate that the shock induces numerous PSs, the majority of which vanish before the reentrant wavefronts associated with them complete half of a single rotation. Failed shocks are characterized by one or more PSs that survive the initial period of PS annihilation to establish a new post-shock arrhythmia. An increase in shock strength results in an overall decrease in the number of PSs that survive more than 200 ms after the end of the shock; however, the exact behavior of the PSs is strongly dependent on the shock electrode configuration.

  10. Non-unitary probabilistic quantum computing circuit and method

    NASA Technical Reports Server (NTRS)

    Williams, Colin P. (Inventor); Gingrich, Robert M. (Inventor)

    2009-01-01

    A quantum circuit performing quantum computation in a quantum computer. A chosen transformation of an initial n-qubit state is probabilistically obtained. The circuit comprises a unitary quantum operator obtained from a non-unitary quantum operator, operating on an n-qubit state and an ancilla state. When operation on the ancilla state provides a success condition, computation is stopped. When operation on the ancilla state provides a failure condition, computation is performed again on the ancilla state and the n-qubit state obtained in the previous computation, until a success condition is obtained.
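    The success/failure structure the record describes — embedding a non-unitary operator in a unitary acting on the state plus an ancilla, with a measurement outcome signalling success — can be illustrated numerically with a standard unitary dilation. This is a generic construction, not necessarily the patented circuit; the operator M and input state are arbitrary examples:

```python
import numpy as np

# Hypothetical 2x2 non-unitary operator, rescaled to a contraction so the
# dilation exists.
M = np.array([[1.0, 0.3],
              [0.0, 0.5]])
M = M / np.linalg.norm(M, 2)

def herm_sqrt(A):
    """Square root of a Hermitian PSD matrix via eigendecomposition."""
    w, v = np.linalg.eigh(A)
    return v @ np.diag(np.sqrt(np.clip(w, 0.0, None))) @ v.conj().T

I = np.eye(2)
# Unitary dilation: the top-left block acts as M when the ancilla stays |0>.
U = np.block([[M,                              herm_sqrt(I - M @ M.conj().T)],
              [herm_sqrt(I - M.conj().T @ M),  -M.conj().T]])
assert np.allclose(U @ U.conj().T, np.eye(4))   # U really is unitary

psi = np.array([0.6, 0.8])                       # normalized input state
full = U @ np.kron(np.array([1.0, 0.0]), psi)    # apply to |0>_anc (x) |psi>
success_branch = full[:2]                        # ancilla measured as |0>
p_success = np.linalg.norm(success_branch) ** 2
# On success the (unnormalized) output equals M|psi>; on failure one would
# repeat the computation, as the abstract describes.
assert np.allclose(success_branch, M @ psi)
print(p_success)
```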

  11. [Vasopressin use in shock].

    PubMed

    Carrillo-Esper, Raúl; González-Salazar, Jorge A; Calvo-Carrillo, Benjamín

    2004-01-01

    Arginine vasopressin (VP), also known as the antidiuretic hormone, is essential for water homeostasis. Its synthesis and release depend on the regulation of osmotic, hypovolemic, hormonal, and nonosmotic stimuli. It has been demonstrated to be key to the maintenance of cardiovascular homeostasis through vasomotor regulation, the determinant of systemic vascular resistance and mean arterial pressure, a process acting through V1 receptors. The shock state with refractory vasodilation seen in sepsis, systemic inflammatory response, hypovolemia, cardiac arrest, polytrauma, etc., is characterized by an initial phase of release and increased levels of VP, followed by a second phase of inappropriately low levels of this hormone that are associated with refractoriness to management with volume, inotropes, and vasopressors. Clinical and experimental studies have demonstrated that exogenous VP treatment under this condition increases systemic vascular resistance, perfusion pressure, and oxygen supply to peripheral tissues, making it possible to reduce or discontinue vasopressors and to improve survival.

  12. Gated IR Images of Shocked Surfaces

    SciTech Connect

    S. S. Lutz; W. D. Turley; P. M. Rightley; L. E. Primas

    2001-06-01

    Gated infrared (IR) images have been taken of a series of shocked surface geometries in tin. Metal coupons machined with steps and flats were mounted directly to the high explosive. The explosive was point-initiated and 500-ns to 1-microsecond-wide gated images of the target were taken immediately following shock breakout using a Santa Barbara Focalplane InSb camera (SBF-134). Spatial distributions of surface radiance were extracted from the images of the shocked samples and found to be non-single-valued. Several geometries were modeled using CTH, a two-dimensional Eulerian hydrocode.

  13. Gated IR images of shocked surfaces.

    SciTech Connect

    Lutz, S. S.; Turley, W. D.; Rightley, P. M.; Primas, L. E.

    2001-01-01

    Gated infrared (IR) images have been taken of a series of shocked surface geometries in tin. Metal coupons machined with steps and flats were mounted directly to the high explosive. The explosive was point-initiated and 500-ns to 1-microsecond-wide gated images of the target were taken immediately following shock breakout using a Santa Barbara Focalplane InSb camera (SBF-134). Spatial distributions of surface radiance were extracted from the images of the shocked samples and found to be non-single-valued. Several geometries were modeled using CTH, a two-dimensional Eulerian hydrocode.

  14. Second sound shock waves and critical velocities in liquid helium 2. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Turner, T. N.

    1979-01-01

    Large amplitude second-sound shock waves were generated and the experimental results compared to the theory of nonlinear second-sound. The structure and thickness of second-sound shock fronts are calculated and compared to experimental data. Theoretically it is shown that at T = 1.88 K, where the nonlinear wave steepening vanishes, the thickness of a very weak shock must diverge. In a region near this temperature, a finite-amplitude shock pulse evolves into an unusual double-shock configuration consisting of a front steepened, temperature raising shock followed by a temperature lowering shock. Double-shocks are experimentally verified. It is experimentally shown that very large second-sound shock waves initiate a breakdown in the superfluidity of helium 2, which is dramatically displayed as a limit to the maximum attainable shock strength. The value of the maximum shock-induced relative velocity represents a significant lower bound to the intrinsic critical velocity of helium 2.

  15. How Is Cardiogenic Shock Diagnosed?

    MedlinePlus

    How Is Cardiogenic Shock Diagnosed? The first step in diagnosing cardiogenic shock ... is cardiogenic shock. Tests and Procedures To Diagnose Shock and Its Underlying Causes Blood Pressure Test Medical ...

  16. Non-unitary probabilistic quantum computing

    NASA Technical Reports Server (NTRS)

    Gingrich, Robert M.; Williams, Colin P.

    2004-01-01

    We present a method for designing quantum circuits that perform non-unitary quantum computations on n-qubit states probabilistically, and give analytic expressions for the success probability and fidelity.

  17. Probabilistic micromechanics for high-temperature composites

    NASA Technical Reports Server (NTRS)

    Reddy, J. N.

    1993-01-01

    The three-year program of research had the following technical objectives: the development of probabilistic methods for micromechanics-based constitutive and failure models, application of the probabilistic methodology in the evaluation of various composite materials and simulation of expected uncertainties in unidirectional fiber composite properties, and influence of the uncertainties in composite properties on the structural response. The first year of research was devoted to the development of probabilistic methodology for micromechanics models. The second year of research focused on the evaluation of the Chamis-Hopkins constitutive model and Aboudi constitutive model using the methodology developed in the first year of research. The third year of research was devoted to the development of probabilistic finite element analysis procedures for laminated composite plate and shell structures.

  18. Toxic Shock Syndrome

    MedlinePlus

    ... toxic shock syndrome results from toxins produced by Staphylococcus aureus (staph) bacteria, but the condition may also ... a skin or wound infection. Bacteria, most commonly Staphylococcus aureus (staph), causes toxic shock syndrome. It can ...

  19. Neptune inbound bow shock

    NASA Technical Reports Server (NTRS)

    Szabo, Adam; Lepping, Ronald P.

    1995-01-01

    Voyager 2 crossed the inbound or upstream Neptunian bow shock at 1430 spacecraft event time on August 24, 1989 (Belcher et al., 1989). The plasma and magnetic field measurements allow us to study the solar wind interaction with the outermost gas giant. To fully utilize all of the spacecraft observations, an improved nonlinear least squares, 'Rankine-Hugoniot' magnetohydrodynamic shock-fitting technique has been developed (Szabo, 1994). This technique is applied to the Neptunian data set. We find that the upstream bow shock normal points nearly exactly toward the Sun, consistent with any reasonable large-scale model of the bow shock for a near subsolar crossing. The shock was moving outward with a speed of 14 +/- 12 km/s. The shock can be characterized as a low beta, high Mach number, strong quasi-perpendicular shock. Finally, the shock microstructure features are resolved and found to scale well with theoretical expectations.

  20. Shock & Anaphylactic Shock. Learning Activity Package.

    ERIC Educational Resources Information Center

    Hime, Kirsten

    This learning activity package on shock and anaphylactic shock is one of a series of 12 titles developed for use in health occupations education programs. Materials in the package include objectives, a list of materials needed, information sheets, reviews (self evaluations) of portions of the content, and answers to reviews. These topics are…

  1. Probabilistic regularization in inverse optical imaging.

    PubMed

    De Micheli, E; Viano, G A

    2000-11-01

    The problem of object restoration in the case of spatially incoherent illumination is considered. A regularized solution to the inverse problem is obtained through a probabilistic approach, and a numerical algorithm based on the statistical analysis of the noisy data is presented. Particular emphasis is placed on the question of the positivity constraint, which is incorporated into the probabilistically regularized solution by means of a quadratic programming technique. Numerical examples illustrating the main steps of the algorithm are also given.
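    The positivity-constrained regularized inversion described here can be sketched with a plain Tikhonov-plus-nonnegativity stand-in (not the authors' probabilistic algorithm); the blur width, noise level, and regularization weight below are made-up values:

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)

# Hypothetical 1-D incoherent imaging model: Gaussian blur plus noise.
n = 60
x_true = np.zeros(n)
x_true[20], x_true[35] = 1.0, 0.6                           # two point sources
i = np.arange(n)
A = np.exp(-0.5 * ((i[:, None] - i[None, :]) / 3.0) ** 2)   # blur operator
b = A @ x_true + 0.01 * rng.standard_normal(n)              # noisy data

# Tikhonov regularization with a positivity constraint, posed as a single
# nonnegative least-squares problem on the stacked system.
lam = 0.05
A_aug = np.vstack([A, np.sqrt(lam) * np.eye(n)])
b_aug = np.concatenate([b, np.zeros(n)])
x_hat, _ = nnls(A_aug, b_aug)

assert np.all(x_hat >= 0)   # constraint honored by construction
```

The stacking trick works because minimizing ||A_aug x - b_aug||^2 is identical to minimizing ||Ax - b||^2 + lam*||x||^2, so an off-the-shelf nonnegative solver enforces the positivity constraint directly.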

  2. Probabilistic Approaches for Evaluating Space Shuttle Risks

    NASA Technical Reports Server (NTRS)

    Vesely, William

    2001-01-01

    The objectives of the Space Shuttle PRA (Probabilistic Risk Assessment) are to: (1) evaluate mission risks; (2) evaluate uncertainties and sensitivities; (3) prioritize contributors; (4) evaluate upgrades; (5) track risks; and (6) provide decision tools. This report discusses the significance of a Space Shuttle PRA and its participants. The elements and type of losses to be included are discussed. The program and probabilistic approaches are then discussed.

  3. Probabilistic cloning of three symmetric states

    SciTech Connect

    Jimenez, O.; Bergou, J.; Delgado, A.

    2010-12-15

    We study the probabilistic cloning of three symmetric states. These states are defined by a single complex quantity, the inner product among them. We show that three different probabilistic cloning machines are necessary to optimally clone all possible families of three symmetric states. We also show that the optimal cloning probability of generating M copies out of one original can be cast as the quotient between the success probability of unambiguously discriminating one and M copies of symmetric states.

  4. Parallel and Distributed Systems for Probabilistic Reasoning

    DTIC Science & Technology

    2012-12-01

    High-Level Abstractions ... Scalable Online Probabilistic Reasoning ... this chapter can be obtained from our online repository at http://gonzalezlabs/thesis. 3.1 Belief Propagation: a core operation in probabilistic ... models is not strictly novel. In the setting of online inference, Russell and Norvig [1995] used the notion of Fixed Lag Smoothing to eliminate the ...
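    The "core operation" this record's fragments refer to, sum-product belief propagation, can be illustrated on a tiny chain-structured model. This is a generic textbook example with made-up potentials, not code from the cited thesis:

```python
import numpy as np

# Chain MRF x1 - x2 - x3 over binary variables; the shared pairwise
# potential favors neighboring variables agreeing.
psi = np.array([[2.0, 1.0],
                [1.0, 2.0]])
phi1 = np.array([0.9, 0.1])   # unary evidence on x1
phi2 = np.array([0.5, 0.5])   # uninformative unaries on x2 and x3
phi3 = np.array([0.5, 0.5])

# Sum-product messages into x2 from both neighbors.
m12 = psi.T @ phi1            # message x1 -> x2
m32 = psi @ phi3              # message x3 -> x2

# The marginal belief at x2 is the product of its unary and incoming
# messages, normalized.
belief2 = phi2 * m12 * m32
belief2 /= belief2.sum()
print(belief2)                # pulled toward state 0 by x1's evidence
```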

  5. Biomass shock pretreatment

    SciTech Connect

    Holtzapple, Mark T.; Madison, Maxine Jones; Ramirez, Rocio Sierra; Deimund, Mark A.; Falls, Matthew; Dunkelman, John J.

    2014-07-01

    Methods and apparatus for treating biomass that may include introducing a biomass to a chamber; exposing the biomass in the chamber to a shock event to produce a shocked biomass; and transferring the shocked biomass from the chamber. In some aspects, the method may include pretreating the biomass with a chemical before introducing the biomass to the chamber and/or after transferring shocked biomass from the chamber.

  6. Shock, Post-Shock Annealing, and Post-Annealing Shock in Ureilites

    NASA Technical Reports Server (NTRS)

    Rubin, Alan E.

    2006-01-01

    The thermal and shock histories of ureilites can be divided into four periods: 1) formation, 2) initial shock, 3) post-shock annealing, and 4) post-annealing shock. Period 1 occurred approx. 4.55 Ga ago when ureilites formed by melting chondritic material. Impact events during period 2 caused silicate darkening, undulose to mosaic extinction in olivines, and the formation of diamond, lonsdaleite, and chaoite from indigenous carbonaceous material. Alkali-rich fine-grained silicates may have been introduced by impact injection into ureilites during this period. About 57% of the ureilites were unchanged after period 2. During period 3 events, impact-induced annealing caused previously mosaicized olivine grains to become aggregates of small unstrained crystals. Some ureilites experienced reduction as FeO at the edges of olivine grains reacted with C from the matrix. Annealing may also be responsible for coarsening of graphite in a few ureilites, forming euhedral-appearing, idioblastic crystals. Orthopyroxene in Meteorite Hills (MET) 78008 may have formed from pigeonite by annealing during this period. The Rb-Sr internal isochron age of approx. 4.0 Ga for MET 78008 probably dates the annealing event. At this late date, impacts are the only viable heat source. About 36% of ureilites experienced period 3 events, but remained unchanged afterwards. During period 4, approx. 7% of the ureilites were shocked again, as is evident in the polymict breccia, Elephant Moraine (EET) 83309. This rock contains annealed mosaicized olivine aggregates composed of small individual olivine crystals that exhibit undulose extinction. Ureilites may have formed by impact-melting chondritic material on a primitive body with heterogeneous O isotopes. Plagioclase was preferentially lost from the system due to its low impedance to shock compression. Brief melting and rapid burial minimized the escape of planetary-type noble gases from the ureilitic melts. Incomplete separation of metal from silicates ...

  7. Probabilistic Prediction of Lifetimes of Ceramic Parts

    NASA Technical Reports Server (NTRS)

    Nemeth, Noel N.; Gyekenyesi, John P.; Jadaan, Osama M.; Palfi, Tamas; Powers, Lynn; Reh, Stefan; Baker, Eric H.

    2006-01-01

    ANSYS/CARES/PDS is a software system that combines the ANSYS Probabilistic Design System (PDS) software with a modified version of the Ceramics Analysis and Reliability Evaluation of Structures Life (CARES/Life) Version 6.0 software. [A prior version of CARES/Life was reported in Program for Evaluation of Reliability of Ceramic Parts (LEW-16018), NASA Tech Briefs, Vol. 20, No. 3 (March 1996), page 28.] CARES/Life models effects of stochastic strength, slow crack growth, and stress distribution on the overall reliability of a ceramic component. The essence of the enhancement in CARES/Life 6.0 is the capability to predict the probability of failure using results from transient finite-element analysis. ANSYS PDS models the effects of uncertainty in material properties, dimensions, and loading on the stress distribution and deformation. ANSYS/CARES/PDS accounts for the effects of probabilistic strength, probabilistic loads, probabilistic material properties, and probabilistic tolerances on the lifetime and reliability of the component. Even failure probability becomes a stochastic quantity that can be tracked as a response variable. ANSYS/CARES/PDS enables tracking of all stochastic quantities in the design space, thereby enabling more precise probabilistic prediction of lifetimes of ceramic components.
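    The stochastic-strength modeling underlying CARES/Life is Weibull-based, and the abstract's point that failure probability itself becomes a stochastic response variable can be sketched in a few lines. The two-parameter Weibull form is standard; the material parameters and load distribution below are illustrative, not data from the article:

```python
import numpy as np

def weibull_failure_probability(sigma, sigma0=300.0, m=10.0):
    """Two-parameter Weibull model: P_f = 1 - exp(-(sigma/sigma0)^m).

    sigma0 (characteristic strength, MPa) and m (Weibull modulus) are
    illustrative values for a hypothetical ceramic.
    """
    return 1.0 - np.exp(-(np.asarray(sigma) / sigma0) ** m)

# With an uncertain applied stress, the failure probability becomes a
# distribution that can be tracked as a response variable.
rng = np.random.default_rng(1)
stress_samples = rng.normal(250.0, 20.0, 10_000)   # uncertain load, MPa
pf_samples = weibull_failure_probability(stress_samples)
print(pf_samples.mean())
```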

  8. Probabilistic simulation of uncertainties in thermal structures

    NASA Technical Reports Server (NTRS)

    Chamis, Christos C.; Shiao, Michael

    1990-01-01

    Development of probabilistic structural analysis methods for hot structures is a major activity at Lewis Research Center. It consists of five program elements: (1) probabilistic loads; (2) probabilistic finite element analysis; (3) probabilistic material behavior; (4) assessment of reliability and risk; and (5) probabilistic structural performance evaluation. Recent progress includes: (1) quantification of the effects of uncertainties for several variables on high pressure fuel turbopump (HPFT) blade temperature, pressure, and torque of the Space Shuttle Main Engine (SSME); (2) the evaluation of the cumulative distribution function for various structural response variables based on assumed uncertainties in primitive structural variables; (3) evaluation of the failure probability; (4) reliability and risk-cost assessment, and (5) an outline of an emerging approach for eventual hot structures certification. Collectively, the results demonstrate that the structural durability/reliability of hot structural components can be effectively evaluated in a formal probabilistic framework. In addition, the approach can be readily extended to computationally simulate certification of hot structures for aerospace environments.

  9. Submarine Propulsion Shaft Life: Probabilistic Prediction and Extension through Prevention of Water Ingress

    DTIC Science & Technology

    2014-06-01

    ... be familiar with other concepts from his work, Italian economist Vilfredo Pareto is the namesake of a less commonly known distribution function ... images of titanium. Their pit growth model has a probabilistic initial current dependent on the clusters of these particles, modeled as a Pareto ...
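    The record's fragments describe pit-initiation currents modeled with a Pareto distribution. A minimal sampling sketch, with an illustrative shape parameter and minimum current not taken from the thesis:

```python
import numpy as np

rng = np.random.default_rng(2)
alpha, x_min = 2.5, 1e-9   # shape parameter and minimum current (A); made up

# numpy's pareto() draws Lomax samples starting at 0; shifting and scaling
# by x_min gives the classical Pareto distribution with support [x_min, inf).
currents = x_min * (1.0 + rng.pareto(alpha, 100_000))

# The heavy tail is the point of the model: the sample mean sits well
# above the median because rare large pits dominate.
print(currents.mean() > np.median(currents))
```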

  10. The Generation of Kappa distributions At Perpendicular Shocks And The Heliospheric Termination Shock

    NASA Astrophysics Data System (ADS)

    Zank, G. P.

    2015-12-01

Although wave-particle interactions may maintain a pre-existing kappa distribution throughout the solar wind once formed, an important question is to identify the origin of a quasi-kappa distribution. It transpires that the dissipation mechanism at quasi-perpendicular shocks and the so-called injection problem at shock waves may be of particular relevance to the formation of initial quasi-kappa distributions throughout the solar wind. In particular, the question of how particles are injected into the diffusive shock acceleration (DSA) mechanism at a perpendicular shock can be addressed on the basis of a kappa distribution. We discuss briefly the possible formation of a kappa distribution at interplanetary shocks and show how this is then accelerated at a quasi-perpendicular shock. These results are related to observed energetic particle spectra downstream of an interplanetary shock. The related question of the dissipation mechanism at the quasi-perpendicular heliospheric termination shock is discussed, focusing particularly on the important role of the pickup ion distribution upstream and downstream of the heliospheric termination shock. We show that the downstream proton distribution in the inner heliosheath closely resembles a kappa distribution.
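The kappa distribution discussed here has a standard functional form with a power-law tail. The sketch below compares its tail to the Maxwellian limit for an illustrative kappa = 3; theta and kappa are arbitrary choices, not values fitted to heliosheath data.

```python
import numpy as np

def kappa_pdf(v, theta, kappa):
    """Unnormalized 1-D kappa velocity distribution: a heavy-tailed
    generalization of the Maxwellian, which it recovers as kappa -> inf."""
    return (1.0 + v**2 / (kappa * theta**2)) ** (-(kappa + 1.0))

v = np.linspace(-10.0, 10.0, 2001)
dv = v[1] - v[0]
theta = 1.0                              # thermal speed (illustrative units)

f_kappa = kappa_pdf(v, theta, kappa=3.0)
f_maxw = np.exp(-(v / theta) ** 2)       # Maxwellian limit for comparison

# Normalize both numerically so the tails can be compared directly
f_kappa /= f_kappa.sum() * dv
f_maxw /= f_maxw.sum() * dv
```

At several thermal speeds the kappa tail exceeds the Maxwellian by orders of magnitude, which is what makes it a natural seed population for injection into DSA.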

  11. History of shock wave lithotripsy

    NASA Astrophysics Data System (ADS)

    Delius, Michael

    2000-07-01

    The first reports on the fragmentation of human calculi with ultrasound appeared in the fifties. Initial positive results with an extracorporeal approach with continuous wave ultrasound could, however, not be reproduced. A more promising result was found by generating the acoustic energy either in pulsed or continuous form directly at the stone surface. The method was applied clinically with success. Extracorporeal shock-wave generators unite the principle of using single ultrasonic pulses with the principle of generating the acoustic energy outside the body and focusing it through the skin and body wall onto the stone. Häusler and Kiefer reported the first successful contact-free kidney stone destruction by shock waves. They had put the stone in a water filled cylinder and generated a shock wave with a high speed water drop which was fired onto the water surface. To apply the new principle in medicine, both Häusler and Hoff's group at Dornier company constructed different shock wave generators for the stone destruction; the former used a torus-shaped reflector around an explosion wire, the latter the electrode-ellipsoid system. The former required open surgery to access the kidney stone, the latter did not. It was introduced into clinical practice after a series of experiments in Munich.

  12. Probabilistic risk assessment familiarization training

    SciTech Connect

    Phillabaum, J.L.

    1989-01-01

Philadelphia Electric Company (PECo) created a Nuclear Group Risk and Reliability Assessment Program Plan in order to focus the utilization of probabilistic risk assessment (PRA) in support of Limerick Generating Station and Peach Bottom Atomic Power Station. The continuation of a PRA program was committed by PECo to the U.S. Nuclear Regulatory Commission (NRC) prior to the issuance of an operating license for Limerick Unit 1. It is believed that increased use of PRA techniques to support activities at Limerick and Peach Bottom will enhance PECo's overall nuclear excellence. Training for familiarization with PRA is designed to be attended once by all nuclear group personnel so that they understand PRA and its potential effect on their jobs. The training content describes the history of PRA and how it applies to PECo's nuclear activities. Key PRA concepts serve as the foundation for the familiarization training. These key concepts are covered in all classes to facilitate an appreciation of the remaining material, which is tailored to the audience. Some of the concepts covered are comparison of regulatory philosophy to PRA techniques, fundamentals of risk/success, risk equation/risk summation, and fault trees and event trees. Building on the concepts, PRA insights and applications are then described that are tailored to the audience.

  13. Probabilistic elastography: estimating lung elasticity.

    PubMed

    Risholm, Petter; Ross, James; Washko, George R; Wells, William M

    2011-01-01

We formulate registration-based elastography in a probabilistic framework and apply it to study lung elasticity in the presence of emphysematous and fibrotic tissue. The elasticity calculations are based on a Finite Element discretization of a linear elastic biomechanical model. We marginalize over the boundary conditions (deformation) of the biomechanical model to determine the posterior distribution over elasticity parameters. Image similarity is included in the likelihood, an elastic prior is included to constrain the boundary conditions, while a Markov model is used to spatially smooth the inhomogeneous elasticity. We use a Markov Chain Monte Carlo (MCMC) technique to characterize the posterior distribution over elasticity from which we extract the most probable elasticity as well as the uncertainty of this estimate. Even though registration-based lung elastography with inhomogeneous elasticity is challenging due to the problem's highly underdetermined nature and the sparse image information available in lung CT, we show promising preliminary results on estimating lung elasticity contrast in the presence of emphysematous and fibrotic tissue.
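The MCMC characterization of the posterior can be illustrated with a minimal Metropolis-Hastings sketch over a single scalar elasticity parameter. The Gaussian "likelihood" and prior below are toy stand-ins for the image-similarity and elastic-prior terms of the paper, with arbitrary kPa-scale numbers.

```python
import numpy as np

rng = np.random.default_rng(1)

def log_posterior(E):
    """Toy log-posterior over a scalar elasticity E: a Gaussian 'likelihood'
    centered at 5 (image-similarity stand-in) plus a broad Gaussian prior
    centered at 4 (elastic-prior stand-in). Units and values are illustrative."""
    log_lik = -0.5 * ((E - 5.0) / 0.5) ** 2
    log_prior = -0.5 * ((E - 4.0) / 2.0) ** 2
    return log_lik + log_prior

E, samples = 4.0, []
for _ in range(20_000):
    prop = E + rng.normal(0.0, 0.3)     # random-walk proposal
    if np.log(rng.uniform()) < log_posterior(prop) - log_posterior(E):
        E = prop                        # Metropolis accept/reject
    samples.append(E)

samples = np.array(samples[5_000:])     # discard burn-in
E_est = samples.mean()                  # point estimate of elasticity
E_unc = samples.std()                   # uncertainty of this estimate
```

The same accept/reject loop generalizes to the inhomogeneous case by proposing updates to a whole elasticity field and letting the Markov smoothness prior couple neighboring elements.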

  14. Probabilistic modeling of children's handwriting

    NASA Astrophysics Data System (ADS)

    Puri, Mukta; Srihari, Sargur N.; Hanson, Lisa

    2013-12-01

There is little work done in the analysis of children's handwriting, which can be useful in developing automatic evaluation systems and in quantifying handwriting individuality. We consider the statistical analysis of children's handwriting in early grades. Samples of handwriting of children in Grades 2-4 who were taught the Zaner-Bloser style were considered. The commonly occurring word "and" written in cursive style as well as hand-print were extracted from extended writing. The samples were assigned feature values by human examiners using a truthing tool. The human examiners looked at how the children constructed letter formations in their writing, looking for similarities and differences from the instructions taught in the handwriting copy book. These similarities and differences were measured using a feature space distance measure. Results indicate that the handwriting develops towards more conformity with the class characteristics of the Zaner-Bloser copybook which, with practice, is the expected result. Bayesian networks were learnt from the data to enable answering various probabilistic queries, such as determining students who may continue to produce letter formations as taught during lessons in school, determining students who will develop a different form and/or a variation of those letter formations, and the number of different types of letter formations.

  15. Optimal probabilistic dense coding schemes

    NASA Astrophysics Data System (ADS)

    Kögler, Roger A.; Neves, Leonardo

    2017-04-01

Dense coding with non-maximally entangled states has been investigated in many different scenarios. We revisit this problem for protocols adopting the standard encoding scheme. In this case, the set of possible classical messages cannot be perfectly distinguished due to the non-orthogonality of the quantum states carrying them. So far, the decoding process has been approached in two ways: (i) The message is always inferred, but with an associated (minimum) error; (ii) the message is inferred without error, but only sometimes; in case of failure, nothing else is done. Here, we generalize on these approaches and propose novel optimal probabilistic decoding schemes. The first uses quantum-state separation to increase the distinguishability of the messages with an optimal success probability. This scheme is shown to include (i) and (ii) as special cases and continuously interpolate between them, which enables the decoder to trade off between the level of confidence desired to identify the received messages and the success probability for doing so. The second scheme, called multistage decoding, applies only for qudits (d-level quantum systems with d>2) and consists of further attempts in the state identification process in case of failure in the first one. We show that this scheme is advantageous over (ii) as it increases the mutual information between the sender and receiver.
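For two equiprobable pure states, the two decoding approaches (i) and (ii) correspond to standard closed forms: the Helstrom minimum-error probability for (i) and the unambiguous-discrimination success probability for (ii). A sketch, with an illustrative overlap value:

```python
import numpy as np

def helstrom_error(overlap):
    """Minimum error probability when discriminating two equiprobable pure
    states with |<psi0|psi1>| = overlap: approach (i), always infer."""
    return 0.5 * (1.0 - np.sqrt(1.0 - overlap**2))

def ua_success(overlap):
    """Optimal success probability of unambiguous discrimination for the
    same pair: approach (ii), infer without error but sometimes fail."""
    return 1.0 - overlap

overlap = 0.6            # illustrative non-orthogonality of two messages
p_err = helstrom_error(overlap)
p_ok = ua_success(overlap)
```

Orthogonal states (overlap 0) give zero error and unit success, recovering ordinary dense coding; as the overlap grows, both figures degrade, which is the trade-off the interpolating scheme in the abstract navigates.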

  16. Probabilistic description of traffic flow

    NASA Astrophysics Data System (ADS)

    Mahnke, R.; Kaupužs, J.; Lubashevsky, I.

    2005-03-01

    A stochastic description of traffic flow, called probabilistic traffic flow theory, is developed. The general master equation is applied to relatively simple models to describe the formation and dissolution of traffic congestions. Our approach is mainly based on spatially homogeneous systems like periodically closed circular rings without on- and off-ramps. We consider a stochastic one-step process of growth or shrinkage of a car cluster (jam). As generalization we discuss the coexistence of several car clusters of different sizes. The basic problem is to find a physically motivated ansatz for the transition rates of the attachment and detachment of individual cars to a car cluster consistent with the empirical observations in real traffic. The emphasis is put on the analogy with first-order phase transitions and nucleation phenomena in physical systems like supersaturated vapour. The results are summarized in the flux-density relation, the so-called fundamental diagram of traffic flow, and compared with empirical data. Different regimes of traffic flow are discussed: free flow, congested mode as stop-and-go regime, and heavy viscous traffic. The traffic breakdown is studied based on the master equation as well as the Fokker-Planck approximation to calculate mean first passage times or escape rates. Generalizations are developed to allow for on-ramp effects. The calculated flux-density relation and characteristic breakdown times coincide with empirical data measured on highways. Finally, a brief summary of the stochastic cellular automata approach is given.
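The one-step growth/shrinkage process of a car cluster can be sketched with a Gillespie-style stochastic simulation of the master equation. The constant attachment/detachment rates below are illustrative placeholders, not the physically motivated ansatz sought in the text.

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate_cluster(n0, w_plus, w_minus, t_end):
    """Gillespie-style simulation of a one-step master equation for the
    cluster (jam) size n: attachment at rate w_plus(n), detachment at
    rate w_minus(n). Rate functions are supplied by the caller."""
    n, t, path = n0, 0.0, [(0.0, n0)]
    while t < t_end:
        up = w_plus(n)
        down = w_minus(n) if n > 0 else 0.0   # an empty cluster cannot shrink
        total = up + down
        if total == 0.0:
            break
        t += rng.exponential(1.0 / total)     # waiting time to next event
        if t > t_end:
            break
        n += 1 if rng.uniform() < up / total else -1
        path.append((t, n))
    return path

# Illustrative constant rates: slight bias toward jam growth
path = simulate_cluster(n0=10,
                        w_plus=lambda n: 0.5,    # a car joins the jam
                        w_minus=lambda n: 0.4,   # a car escapes the jam
                        t_end=100.0)
```

Replacing the constant rates with density-dependent forms is exactly the modeling freedom the abstract highlights; the nucleation analogy appears when the effective growth rate changes sign at a critical cluster size.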

  17. Symbolic representation of probabilistic worlds.

    PubMed

    Feldman, Jacob

    2012-04-01

    Symbolic representation of environmental variables is a ubiquitous and often debated component of cognitive science. Yet notwithstanding centuries of philosophical discussion, the efficacy, scope, and validity of such representation has rarely been given direct consideration from a mathematical point of view. This paper introduces a quantitative measure of the effectiveness of symbolic representation, and develops formal constraints under which such representation is in fact warranted. The effectiveness of symbolic representation hinges on the probabilistic structure of the environment that is to be represented. For arbitrary probability distributions (i.e., environments), symbolic representation is generally not warranted. But in modal environments, defined here as those that consist of mixtures of component distributions that are narrow ("spiky") relative to their spreads, symbolic representation can be shown to represent the environment with a relatively negligible loss of information. Modal environments support propositional forms, logical relations, and other familiar features of symbolic representation. Hence the assumption that our environment is, in fact, modal is a key tacit assumption underlying the use of symbols in cognitive science.

  18. Dynamical systems probabilistic risk assessment

    SciTech Connect

    Denman, Matthew R.; Ames, Arlo Leroy

    2014-03-01

    Probabilistic Risk Assessment (PRA) is the primary tool used to risk-inform nuclear power regulatory and licensing activities. Risk-informed regulations are intended to reduce inherent conservatism in regulatory metrics (e.g., allowable operating conditions and technical specifications) which are built into the regulatory framework by quantifying both the total risk profile as well as the change in the risk profile caused by an event or action (e.g., in-service inspection procedures or power uprates). Dynamical Systems (DS) analysis has been used to understand unintended time-dependent feedbacks in both industrial and organizational settings. In dynamical systems analysis, feedback loops can be characterized and studied as a function of time to describe the changes to the reliability of plant Structures, Systems and Components (SSCs). While DS has been used in many subject areas, some even within the PRA community, it has not been applied toward creating long-time horizon, dynamic PRAs (with time scales ranging between days and decades depending upon the analysis). Understanding slowly developing dynamic effects, such as wear-out, on SSC reliabilities may be instrumental in ensuring a safely and reliably operating nuclear fleet. Improving the estimation of a plant's continuously changing risk profile will allow for more meaningful risk insights, greater stakeholder confidence in risk insights, and increased operational flexibility.
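A slowly developing wear-out effect on an SSC, of the kind the abstract argues static PRA misses, can be sketched with a Weibull hazard whose failure rate grows with time. The shape and characteristic-life parameters below are illustrative, not plant data.

```python
import numpy as np

def weibull_hazard(t, beta, eta):
    """Weibull hazard rate h(t) = (beta/eta) * (t/eta)^(beta-1).
    beta > 1 gives an increasing (wear-out) failure rate."""
    return (beta / eta) * (t / eta) ** (beta - 1.0)

def reliability(t, beta, eta):
    """Weibull survival function R(t) = exp(-(t/eta)^beta)."""
    return np.exp(-((t / eta) ** beta))

# Illustrative SSC: 40-year characteristic life, wear-out shape beta = 2.5
years = np.array([1.0, 10.0, 20.0, 40.0])
R = reliability(years, beta=2.5, eta=40.0)
h = weibull_hazard(years, beta=2.5, eta=40.0)
```

Feeding such time-dependent reliabilities into the fault trees of an existing PRA, instead of constant failure probabilities, is one simple route to the continuously changing risk profile described above.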

  19. MHD shocks in the ISM

    NASA Technical Reports Server (NTRS)

    Chernoff, D. F.; Hollenbach, David J.; Mckee, Christopher F.

    1990-01-01

Researchers survey shock solutions of a partially ionized gas with a magnetic field. The gas is modeled by interacting neutral, ion, electron and charged grain components. They employ a small neutral-ion chemical network to follow the dissociation and ionization of the major species. Cooling by molecular hydrogen (rotational, vibrational and dissociation), grains and dipole molecules is included. There are three basic types of solutions (C, C*, and J) and some more complicated flows involving combinations of the basic types. The initial preshock conditions cover hydrogen nuclei densities of 1 < n < 10^10 cm^-3 and shock velocities of 5 < v_s < 60 km/s. The magnetic field is varied over 5 decades and the sensitivity of the results to grain parameters, UV and cosmic ray fluxes is ascertained. The parameter space is quite complicated, but there exist some simple divisions. When the initial ionization fraction is small (chi_i < 10^-5), there is a sharp transition between fully C solutions at low velocity and strong J solutions at high velocity. When the initial ionization fraction is larger, C* and/or very weak J shocks are present at low velocities in addition to the C solutions. The flow again changes to strong J shocks at high velocities. When the ionization fraction is large and the flow is only slightly greater than the bulk Alfven velocity, there is a complicated mixture of C, C* and J solutions.

  20. Implementation of Probabilistic Design Methodology at Tennessee State University

    NASA Technical Reports Server (NTRS)

    Onwubiko, Chinyere

    1996-01-01

Engineering Design is one of the most important areas in engineering education. Deterministic Design Methodology (DDM) is the only design method that is taught in most engineering schools. This method does not give a direct account of uncertainties in design parameters. Hence, it is impossible to quantify the uncertainties in the response, and the actual safety margin remains unknown. The desire for a design methodology that can identify the primitive (random) variables that affect the structural behavior has led to a growing interest in Probabilistic Design Methodology (PDM). This method is gaining more recognition in industry than in educational institutions. Some of the reasons for the limited use of PDM at the moment are that many are unaware of its potential, and most of the software developed for PDM is very recent. The central goal of the PDM project at Tennessee State University is to introduce engineering students to the method. The students participating in the project learn about PDM and the computer codes that are available to the design engineer. The software being used for this project is NESSUS (Numerical Evaluation of Stochastic Structures Under Stress), developed under the NASA probabilistic structural analysis program. NESSUS has three different modules which make it a very comprehensive computer code for PDM. Research in technology transfer through course offerings in PDM is in effect at Tennessee State University. The aim is to familiarize students with the problem of uncertainties in engineering design. Included in the paper are some projects on PDM carried out by some students and faculty. The areas to which this method is currently being applied include Design of Gears (spur and worm); Design of Shafts; Design of Statically Indeterminate Frame Structures; Design of Helical Springs; and Design of Shock Absorbers. Some of the current results of these projects are presented.

  1. Implementation of probabilistic design methodology at Tennessee State University

    NASA Technical Reports Server (NTRS)

    Onwubiko, Chinyere

    1995-01-01

The fact that the Deterministic Design Method no longer satisfies most design needs calls for methods that can keep pace with rapidly advancing technology. The advance in computer technology has reduced the rigors that normally accompany many design analysis methods that account for uncertainties in design parameters. Probabilistic Design Methodology (PDM) is beginning to make an impact in engineering design. This method is gaining more recognition in industry than in educational institutions. Some of the reasons for the limited use of PDM at the moment are that many are unaware of its potential, and most of the software developed for PDM is very recent. The central goal of the PDM project at Tennessee State University is to introduce engineering students to this method. The students participating in the project learn about PDM and the computer codes that are available to the design engineer. The software being used for this project is NESSUS (Numerical Evaluation of Stochastic Structures Under Stress), developed under the NASA probabilistic structural analysis program. NESSUS has three different modules which make it a very comprehensive computer code for PDM. Since this method is new to the students, its introduction into the engineering curriculum is to be in stages, ranging from the introduction of PDM and its software to the applications. While this program is being developed for its eventual inclusion in the engineering curriculum, some graduate and undergraduate students are already carrying out projects using this method. As the students increase their understanding of PDM, they are at the same time applying it to some common design problems. The areas to which this method is currently being applied include Design of Gears (spur and worm); Design of Brakes; Design of Heat Exchangers; Design of Helical Springs; and Design of Shock Absorbers. Some of the current results of these projects are presented.

  2. Architecture for Integrated Medical Model Dynamic Probabilistic Risk Assessment

    NASA Technical Reports Server (NTRS)

    Jaworske, D. A.; Myers, J. G.; Goodenow, D.; Young, M.; Arellano, J. D.

    2016-01-01

    Probabilistic Risk Assessment (PRA) is a modeling tool used to predict potential outcomes of a complex system based on a statistical understanding of many initiating events. Utilizing a Monte Carlo method, thousands of instances of the model are considered and outcomes are collected. PRA is considered static, utilizing probabilities alone to calculate outcomes. Dynamic Probabilistic Risk Assessment (dPRA) is an advanced concept where modeling predicts the outcomes of a complex system based not only on the probabilities of many initiating events, but also on a progression of dependencies brought about by progressing down a time line. Events are placed in a single time line, adding each event to a queue, as managed by a planner. Progression down the time line is guided by rules, as managed by a scheduler. The recently developed Integrated Medical Model (IMM) summarizes astronaut health as governed by the probabilities of medical events and mitigation strategies. Managing the software architecture process provides a systematic means of creating, documenting, and communicating a software design early in the development process. The software architecture process begins with establishing requirements and the design is then derived from the requirements.
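The planner/scheduler time line described above resembles a discrete-event simulation: a planner places stochastic events on a single queue, and a scheduler processes them in time order. A minimal sketch follows; the event names and rates are hypothetical illustrations, not IMM model values.

```python
import heapq
import random

random.seed(3)

queue = []  # the single time line, kept in time order as a heap

def plan(event, rate, horizon):
    """Planner: place Poisson-distributed occurrences of an event on the
    time line up to the mission horizon (rates per day, illustrative)."""
    t = 0.0
    while True:
        t += random.expovariate(rate)
        if t > horizon:
            break
        heapq.heappush(queue, (t, event))

# Hypothetical medical events over a 180-day mission
plan("headache", rate=2.0, horizon=180.0)
plan("back_pain", rate=0.5, horizon=180.0)

# Scheduler: progress down the time line, processing events in order
log = []
while queue:
    t, event = heapq.heappop(queue)
    log.append((t, event))

times = [t for t, _ in log]
```

In a full dPRA, processing an event would consult rules that may push follow-on events (treatment, resource depletion) back onto the queue, creating the progression of dependencies the abstract describes.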

  3. RAVEN and Dynamic Probabilistic Risk Assessment: Software overview

    SciTech Connect

    Andrea Alfonsi; Cristian Rabiti; Diego Mandelli; Joshua Cogliati; Robert Kinoshita; Antonio Naviglio

    2014-09-01

RAVEN is a generic software framework to perform parametric and probabilistic analysis based on the response of complex system codes. The initial development was aimed at providing dynamic risk analysis capabilities to the Thermo-Hydraulic code RELAP-7, currently under development at the Idaho National Laboratory. Although the initial goal has been fully accomplished, RAVEN is now a multi-purpose probabilistic and uncertainty quantification platform, capable of communicating agnostically with any system code. This agnosticism is achieved by providing Application Programming Interfaces (APIs). These interfaces allow RAVEN to interact with any code as long as all the parameters that need to be perturbed are accessible by input files or via python interfaces. RAVEN can investigate the system response by exploring the input space using Monte Carlo, Grid, or Latin Hypercube sampling schemes, but its strength is focused toward system feature discovery, such as limit surfaces that separate regions of the input space leading to system failure, using dynamic supervised learning techniques. The paper presents an overview of the software capabilities and their implementation schemes, followed by some application examples.
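Of the sampling schemes mentioned, Latin hypercube sampling is simple to sketch: each dimension is split into equal-probability strata and every stratum is hit exactly once. The implementation below is a generic stratified sampler for illustration, not RAVEN's own code.

```python
import numpy as np

def latin_hypercube(n_samples, n_dims, rng):
    """Latin hypercube sample on [0, 1)^d: each dimension is divided into
    n_samples equal strata; each stratum contains exactly one sample, and
    per-dimension shuffling decorrelates the strata across dimensions."""
    u = (rng.random((n_samples, n_dims)) + np.arange(n_samples)[:, None]) / n_samples
    for j in range(n_dims):
        rng.shuffle(u[:, j])
    return u

rng = np.random.default_rng(4)
X = latin_hypercube(10, 3, rng)   # 10 stratified points in 3 dimensions
```

Mapping each unit-interval coordinate through the inverse CDF of the corresponding input distribution turns these points into perturbed input-file parameters for a system code.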

  4. Probabilistic Modeling of Settlement Risk at Land Disposal Facilities - 12304

    SciTech Connect

    Foye, Kevin C.; Soong, Te-Yang

    2012-07-01

    The long-term reliability of land disposal facility final cover systems - and therefore the overall waste containment - depends on the distortions imposed on these systems by differential settlement/subsidence. The evaluation of differential settlement is challenging because of the heterogeneity of the waste mass (caused by inconsistent compaction, void space distribution, debris-soil mix ratio, waste material stiffness, time-dependent primary compression of the fine-grained soil matrix, long-term creep settlement of the soil matrix and the debris, etc.) at most land disposal facilities. Deterministic approaches to long-term final cover settlement prediction are not able to capture the spatial variability in the waste mass and sub-grade properties which control differential settlement. An alternative, probabilistic solution is to use random fields to model the waste and sub-grade properties. The modeling effort informs the design, construction, operation, and maintenance of land disposal facilities. A probabilistic method to establish design criteria for waste placement and compaction is introduced using the model. Random fields are ideally suited to problems of differential settlement modeling of highly heterogeneous foundations, such as waste. Random fields model the seemingly random spatial distribution of a design parameter, such as compressibility. When used for design, the use of these models prompts the need for probabilistic design criteria. It also allows for a statistical approach to waste placement acceptance criteria. An example design evaluation was performed, illustrating the use of the probabilistic differential settlement simulation methodology to assemble a design guidance chart. The purpose of this design evaluation is to enable the designer to select optimal initial combinations of design slopes and quality control acceptance criteria that yield an acceptable proportion of post-settlement slopes meeting some design minimum. For this specific
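A random field for a spatially varying waste property can be sketched as a 1-D Gaussian field built from a squared-exponential covariance. The mean, spread, and correlation length below are illustrative assumptions, not values calibrated to any facility.

```python
import numpy as np

rng = np.random.default_rng(5)

def gaussian_random_field(x, mean, std, corr_len, rng):
    """1-D stationary Gaussian random field via Cholesky factorization of a
    squared-exponential covariance matrix; here it models a spatially
    varying compressibility parameter along a final-cover profile."""
    d = np.abs(x[:, None] - x[None, :])
    cov = std**2 * np.exp(-((d / corr_len) ** 2))
    L = np.linalg.cholesky(cov + 1e-10 * np.eye(len(x)))  # jitter for stability
    return mean + L @ rng.standard_normal(len(x))

x = np.linspace(0.0, 100.0, 200)   # position along the cover [m]
c = gaussian_random_field(x, mean=0.25, std=0.05, corr_len=10.0, rng=rng)
```

Settling each realization of the field and collecting the resulting post-settlement slopes is what builds the statistical picture behind a probabilistic design-guidance chart.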

  5. Self-sustained volume discharge in SF{sub 6}-based gas mixtures upon the development of shock-wave perturbations of the medium initiated by a pulsed CO{sub 2} laser

    SciTech Connect

    Belevtsev, A A; Kazantsev, S Yu; Kononov, I G; Firsov, K N E-mail: kazan@kapella.gpi.r

    2006-07-31

A self-sustained volume discharge in SF{sub 6} mixtures with C{sub 2}H{sub 6}, He, and Ne preliminarily irradiated by CO{sub 2} laser pulses was investigated. The radiation energy density absorbed by SF{sub 6} in the discharge ignition region amounted to 6.5 J atm{sup -1} cm{sup -3}. The discharge structure and the current distribution in the discharge gap were found to change radically with increasing time delay between the laser and discharge pulses. In particular, brightly glowing narrow channels are formed at the boundary of the irradiation region. The observed effect is shown to arise from the development of a shock-wave process due to a temperature jump at the boundary between the irradiated and unirradiated gas. The velocities of shock wave propagation and the main thermodynamic gas parameters in the perturbation region were calculated. A comparison was made between the calculated and measured velocities of the shock waves. (Special issue devoted to the 90th anniversary of A.M. Prokhorov)

  6. Synaptic and nonsynaptic plasticity approximating probabilistic inference

    PubMed Central

    Tully, Philip J.; Hennig, Matthias H.; Lansner, Anders

    2014-01-01

    Learning and memory operations in neural circuits are believed to involve molecular cascades of synaptic and nonsynaptic changes that lead to a diverse repertoire of dynamical phenomena at higher levels of processing. Hebbian and homeostatic plasticity, neuromodulation, and intrinsic excitability all conspire to form and maintain memories. But it is still unclear how these seemingly redundant mechanisms could jointly orchestrate learning in a more unified system. To this end, a Hebbian learning rule for spiking neurons inspired by Bayesian statistics is proposed. In this model, synaptic weights and intrinsic currents are adapted on-line upon arrival of single spikes, which initiate a cascade of temporally interacting memory traces that locally estimate probabilities associated with relative neuronal activation levels. Trace dynamics enable synaptic learning to readily demonstrate a spike-timing dependence, stably return to a set-point over long time scales, and remain competitive despite this stability. Beyond unsupervised learning, linking the traces with an external plasticity-modulating signal enables spike-based reinforcement learning. At the postsynaptic neuron, the traces are represented by an activity-dependent ion channel that is shown to regulate the input received by a postsynaptic cell and generate intrinsic graded persistent firing levels. We show how spike-based Hebbian-Bayesian learning can be performed in a simulated inference task using integrate-and-fire (IAF) neurons that are Poisson-firing and background-driven, similar to the preferred regime of cortical neurons. Our results support the view that neurons can represent information in the form of probability distributions, and that probabilistic inference could be a functional by-product of coupled synaptic and nonsynaptic mechanisms operating over several timescales. The model provides a biophysical realization of Bayesian computation by reconciling several observed neural phenomena whose

  7. Cold Osmotic Shock in Saccharomyces cerevisiae

    PubMed Central

    Patching, J. W.; Rose, A. H.

    1971-01-01

Saccharomyces cerevisiae NCYC 366 is susceptible to cold osmotic shock. Exponentially growing cells from batch cultures grown in defined medium at 30 C, after being suspended in 0.8 M mannitol containing 10 mM ethylenediaminetetraacetic acid and then resuspended in ice-cold 0.5 mM MgCl2, accumulated the nonmetabolizable solutes D-glucosamine-hydrochloride and 2-aminoisobutyrate at slower rates than unshocked cells; shocked cells retained their viability. Storage of unshocked batch-grown cells in buffer at 10 C led to an increase in ability to accumulate glucosamine, and further experiments were confined to cells grown in a chemostat under conditions of glucose limitation, thereby obviating the need for storing cells before use. A study was made of the effect of the different stages in the cold osmotic shock procedure, including the osmotic stress, the chelating agent, and the cold Mg2+-containing diluent, on viability and solute-accumulating ability. Growth of shocked cells in defined medium resembled that of unshocked cells; however, in malt extract-yeast extract-glucose-peptone medium, the shocked cells had a longer lag phase of growth and initially grew at a slower rate. Cold osmotic shock caused the release of low-molecular-weight compounds and about 6 to 8% of the cell protein. Neither the cell envelope enzymes, invertase, acid phosphatase and L-leucine-β-naphthylamidase, nor the cytoplasmic enzyme, alkaline phosphatase, were released when yeast cells were subjected to cold osmotic shock. PMID:5001201

  8. A hydrocode study of explosive shock ignition

    NASA Astrophysics Data System (ADS)

    Butler, George C.; Horie, Yasuyuki

    2012-03-01

This paper discusses the results of hydrocode simulations of shock-induced ignition of PBXN-109, Octol, PETN, and HNS explosives using the History Variable Reactive Burn model in the CTH hydrocode. Normalized values of pressure and time were derived from the equations defining the HVRB model, and used to define an upper bound for ignition. This upper bound corresponds to the well-established Pop-plot data for supported detonation, i.e. detonations in which a constant shock pressure is applied to an explosive until full detonation is achieved. Subsequently, one-dimensional flyer-plate simulations were conducted in which the responses to varied constant-amplitude, limited-duration shock pulses into semi-infinite explosive samples were examined. These simulations confirmed not only the existence of an upper bound for ignition as expected, but also showed ignition by "lower level" shocks, in which full detonation is reached at a time longer than the input shock duration. These lower-level shocks can be used to define a distinct minimal ignition threshold, below which shock pulses do not result in detonation. Numerical experiments using these bounds offer a new framework for interpreting explosive initiation data.
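The Pop-plot relationship mentioned above is a power law: run distance (or time) to detonation falls linearly with input shock pressure on log-log axes. The coefficients in this sketch are hypothetical, not measured values for PBXN-109, Octol, PETN, or HNS.

```python
import numpy as np

# Pop-plot: log10(x*) = a - b * log10(P), with x* the run distance to
# detonation and P the sustained input shock pressure. The coefficients
# a and b below are illustrative placeholders, not calibrated data.
a, b = 1.8, 1.4

def run_distance(pressure_gpa):
    """Run distance to detonation [mm] for a sustained input shock [GPa]."""
    return 10.0 ** (a - b * np.log10(pressure_gpa))

pressures = np.array([2.0, 4.0, 8.0])
runs = run_distance(pressures)
```

A limited-duration pulse that decays before the predicted run distance is reached is exactly the "lower level" regime probed by the flyer-plate simulations, which is why the Pop-plot alone defines only the upper ignition bound.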

  9. Shock desensitizing of solid explosives

    SciTech Connect

    Davis, William C

    2010-01-01

    Solid explosive can be desensitized by a shockwave too weak to initiate it promptly, and desensitized explosive does not react although its chemical composition is almost unchanged. A strong second shock does not cause reaction until it overtakes the first shock. The first shock, if it is strong enough, accelerates very slowly at first, and then more rapidly as detonation approaches. These facts suggest that there are two competing reactions. One is the usual explosive goes to products with the release of energy, and the other is explosive goes to dead explosive with no chemical change and no energy release. The first reaction rate is very sensitive to the local state, and the second is only weakly so. At low pressure very little energy is released and the change to dead explosive dominates. At high pressure, quite the other way, most of the explosive goes to products. Numerous experiments in both the initiation and the full detonation regimes are discussed and compared in support of these ideas.

  10. Probabilistic Survivability Versus Time Modeling

    NASA Technical Reports Server (NTRS)

    Joyner, James J., Sr.

    2015-01-01

    This technical paper documents work completed by Kennedy Space Center's Independent Assessment team on three assessments for the Ground Systems Development and Operations (GSDO) Program to assist the Chief Safety and Mission Assurance Officer (CSO) and GSDO management during key programmatic reviews. The assessments provided the GSDO Program with an analysis of how egress time affects the likelihood of astronaut and worker survival during an emergency. For each assessment, the team developed probability distributions for hazard scenarios to address statistical uncertainty, resulting in survivability plots over time. The first assessment developed a mathematical model of probabilistic survivability versus time to reach a safe location using an ideal Emergency Egress System at Launch Complex 39B (LC-39B); the second used the first model to evaluate and compare various egress systems under consideration at LC-39B. The third used a modified LC-39B model to determine if a specific hazard decreased survivability more rapidly than other events during flight hardware processing in Kennedy's Vehicle Assembly Building (VAB). Based on the composite survivability-versus-time graphs from the first two assessments, there was a soft knee in the Figure of Merit graphs at eight minutes (ten minutes after egress was ordered). Thus, the graphs illustrated to the decision makers that the final emergency egress design selected should have the capability of transporting the flight crew from the top of LC-39B to a safe location in eight minutes or less. Results for the third assessment were dominated by hazards that were classified as instantaneous in nature (e.g. stacking mishaps) and therefore had no effect on survivability versus time to egress the VAB. VAB emergency scenarios that degraded over time (e.g. fire) produced survivability-versus-time graphs that were in line with aerospace industry norms.
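
    The survivability-versus-time idea described above can be sketched with a small Monte Carlo calculation. Everything below is hypothetical: the hazard-onset distribution and the candidate egress times are invented for illustration, since the actual GSDO models are not reproduced in the abstract.

```python
import random

def survivability_vs_time(sample_onset, egress_times, n=100_000):
    """Estimate P(crew reaches safety) for each candidate egress duration.

    sample_onset() draws one random time (minutes) at which a hazard
    becomes lethal; survival requires egress to finish before onset.
    """
    onsets = [sample_onset() for _ in range(n)]
    return [sum(onset > t for onset in onsets) / n for t in egress_times]

# Hypothetical hazard: lethal conditions develop ~10 min after egress is ordered.
rng = random.Random(0)
surv = survivability_vs_time(lambda: rng.gauss(10.0, 2.0), [4.0, 8.0, 12.0])
```

    A "knee" appears where the survivability curve begins dropping steeply; a design whose egress duration sits to the left of the knee preserves most of the survivability.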

  11. Probabilistic Modeling of Rosette Formation

    PubMed Central

    Long, Mian; Chen, Juan; Jiang, Ning; Selvaraj, Periasamy; McEver, Rodger P.; Zhu, Cheng

    2006-01-01

    Rosetting, or forming a cell aggregate between a single target nucleated cell and a number of red blood cells (RBCs), is a simple assay for cell adhesion mediated by specific receptor-ligand interaction. For example, rosette formation between sheep RBC and human lymphocytes has been used to differentiate T cells from B cells. Rosetting assay is commonly used to determine the interaction of Fc γ-receptors (FcγR) expressed on inflammatory cells and IgG coated on RBCs. Despite its wide use in measuring cell adhesion, the biophysical parameters of rosette formation have not been well characterized. Here we developed a probabilistic model to describe the distribution of rosette sizes, which is Poissonian. The average rosette size is predicted to be proportional to the apparent two-dimensional binding affinity of the interacting receptor-ligand pair and their site densities. The model has been supported by experiments of rosettes mediated by four molecular interactions: FcγRIII interacting with IgG, T cell receptor and coreceptor CD8 interacting with antigen peptide presented by major histocompatibility molecule, P-selectin interacting with P-selectin glycoprotein ligand 1 (PSGL-1), and L-selectin interacting with PSGL-1. The latter two are structurally similar and are different from the former two. Fitting the model to data enabled us to evaluate the apparent effective two-dimensional binding affinity of the interacting molecular pairs: 7.19 × 10−5 μm4 for FcγRIII-IgG interaction, 4.66 × 10−3 μm4 for P-selectin-PSGL-1 interaction, and 0.94 × 10−3 μm4 for L-selectin-PSGL-1 interaction. These results elucidate the biophysical mechanism of rosette formation and enable it to become a semiquantitative assay that relates the rosette size to the effective affinity for receptor-ligand binding. PMID:16603493
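
    As a rough numerical illustration of the Poissonian model described above, the sketch below computes a rosette-size distribution whose mean is proportional to the apparent two-dimensional affinity times the two site densities. Only the FcγRIII-IgG affinity value is taken from the abstract; the site densities are invented for illustration.

```python
import math

def rosette_size_pmf(affinity_um4, receptor_density, ligand_density, k_max=12):
    """Poisson rosette-size model: mean size m is proportional to the
    apparent 2D binding affinity (um^4) times the receptor and ligand
    site densities (sites/um^2). Returns (m, [P(size = k) for k = 0..k_max])."""
    m = affinity_um4 * receptor_density * ligand_density
    pmf = [math.exp(-m) * m**k / math.factorial(k) for k in range(k_max + 1)]
    return m, pmf

# Fitted FcgammaRIII-IgG affinity from the abstract; densities are hypothetical.
mean_size, pmf = rosette_size_pmf(7.19e-5, 30.0, 50.0)
```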

  12. Permeability enhancement by shock cooling

    NASA Astrophysics Data System (ADS)

    Griffiths, Luke; Heap, Michael; Reuschlé, Thierry; Baud, Patrick; Schmittbuhl, Jean

    2015-04-01

    The permeability of an efficient reservoir, e.g. a geothermal reservoir, should be sufficient to permit the circulation of fluids. Generally speaking, permeability decreases over the life cycle of the geothermal system. As a result, it is usually necessary to artificially maintain and enhance the natural permeability of these systems. One of the methods of enhancement -- studied here -- is thermal stimulation (injecting cold water at low pressure). The goal of this method is to encourage new thermal cracks within the reservoir host rocks, thereby increasing reservoir permeability. To investigate the development of thermal microcracking in the laboratory we selected two granites: a fine-grained granite (Garibaldi Grey granite, grain size = 0.5 mm) and a coarse-grained granite (Lanhelin granite, grain size = 2 mm). Both granites have an initial porosity of about 1%. Our samples were heated to a range of temperatures (100-1000 °C) and were either cooled slowly (1 °C/min) or shock cooled (100 °C/s). A systematic microstructural (2D crack area density, using standard stereological techniques, and 3D BET specific surface area measurements) and rock physical property (porosity, P-wave velocity, uniaxial compressive strength, and permeability) analysis was undertaken to understand the influence of slow and shock cooling on our reservoir granites. Microstructurally, we observe that the 2D crack surface area per unit volume and the specific surface area increase as a result of thermal stressing, and, for the same maximum temperature, crack surface area is higher in the shock cooled samples. This observation is echoed by our rock physical property measurements: we see greater changes for the shock cooled samples. We can conclude that shock cooling is an extremely efficient method of generating thermal microcracks and modifying rock physical properties. Our study highlights that thermal treatments are likely to be an efficient method for the "matrix" permeability enhancement of

  13. Radiant properties of strong shock waves in argon.

    PubMed

    Taylor, W H; Kane, J W

    1967-09-01

    Measurements of the visible radiation emitted by one-dimensional, explosively generated shock waves in argon initially at 1 atm are reported. A time-resolved spectrograph and calibrated photodetectors were used to measure the intensity of the source at 5450 Å and 4050 Å. The results show that explosive-induced shock waves in argon having shock velocities in the range 8-9 mm/μs radiate at these wavelengths like a blackbody having a temperature of approximately 23,000 K.
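
    The blackbody claim can be checked against the Planck law. This sketch is standard physics, not code from the paper; it evaluates the spectral radiance at the two measured wavelengths for T ≈ 23,000 K.

```python
import math

H = 6.62607015e-34   # Planck constant, J s
C = 2.99792458e8     # speed of light, m/s
KB = 1.380649e-23    # Boltzmann constant, J/K

def planck_radiance(wavelength_m, temp_k):
    """Blackbody spectral radiance B_lambda(T) in W / (m^2 sr m)."""
    x = H * C / (wavelength_m * KB * temp_k)
    return (2.0 * H * C**2 / wavelength_m**5) / math.expm1(x)

# Spectral radiance at the two measured wavelengths for T ~ 23,000 K.
b_5450 = planck_radiance(545.0e-9, 23000.0)
b_4050 = planck_radiance(405.0e-9, 23000.0)
```

    At 23,000 K the Planck peak lies in the ultraviolet, so the radiance at 4050 Å exceeds that at 5450 Å, consistent with a very hot, optically thick shock front.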

  14. Shock front nonstationarity of supercritical perpendicular shocks

    NASA Astrophysics Data System (ADS)

    Hada, Tohru; Oonishi, Makiko; Lembège, Bertrand; Savoini, Philippe

    2003-06-01

    The shock front nonstationarity of perpendicular shocks in the supercritical regime is analyzed by examining the coupling between "incoming" and "reflected" ion populations. For a given set of parameters including the upstream Mach number (MA) and the fraction α of reflected to incoming ions, a self-consistent, time-stationary solution of the coupling between ion streams and the electromagnetic field is sought. If such a solution is found, the shock is stationary; otherwise, the shock is nonstationary, leading to a self-reforming shock front often observed in full particle simulations of quasi-perpendicular shocks. A parametric study of this numerical model allows us to define a critical αcrit between stationary and nonstationary regimes. The shock can be nonstationary even for relatively low MA (2-5). For moderate MA (5-10), the critical value αcrit is about 15 to 20%. For very high MA (>10), αcrit saturates around 20%. Moreover, present full simulations show that self-reformation of the shock front occurs for relatively low βi and disappears for high βi, where βi is the ratio of upstream ion plasma to magnetic field pressures. Results issued from the present theoretical model are found to be in good agreement with full particle simulations for the low βi case; this agreement holds as long as the motion of reflected ions is coherent enough (narrow ion ring) to be described by a single population in the model. The present model turns out to be at variance with full particle simulation results for the high βi case. Present results are also compared with previous hybrid simulations.

  15. Shock Propagation and Instability Structures in Compressed Silica Aerogels

    SciTech Connect

    Howard, W M; Molitoris, J D; DeHaven, M R; Gash, A E; Satcher, J H

    2002-05-30

    We have performed a series of experiments examining shock propagation in low density aerogels. High-pressure (~100 kbar) shock waves are produced by detonating high explosives. Radiography is used to obtain time-sequence imaging of the shocks as they enter and traverse the aerogel. We compress the aerogel by impinging shock waves on either one or both sides of an aerogel slab. The shock wave initially transmitted to the aerogel is very narrow and flat, but disperses and curves as it propagates. Optical images of the shock front reveal the initial formation of a hot dense region that cools and evolves into a well-defined microstructure. Structures observed in the shock front are examined in the framework of hydrodynamic instabilities generated as the shock traverses the low-density aerogel. The primary features of shock propagation are compared to simulations, which also include modeling the detonation of the high explosive, with a 2-D Arbitrary Lagrangian-Eulerian hydrodynamics code. The code includes a detailed thermochemical equation of state and rate-law kinetics. We will present an analysis of the data from the time-resolved imaging diagnostics and form a consistent picture of the shock transmission, propagation, and instability structure.

  16. Probabilistic numerics and uncertainty in computations

    PubMed Central

    Hennig, Philipp; Osborne, Michael A.; Girolami, Mark

    2015-01-01

    We deliver a call to arms for probabilistic numerical methods: algorithms for numerical tasks, including linear algebra, integration, optimization and solving differential equations, that return uncertainties in their calculations. Such uncertainties, arising from the loss of precision induced by numerical calculation with limited time or hardware, are important for much contemporary science and industry. Within applications such as climate science and astrophysics, the need to make decisions on the basis of computations with large and complex data have led to a renewed focus on the management of numerical uncertainty. We describe how several seminal classic numerical methods can be interpreted naturally as probabilistic inference. We then show that the probabilistic view suggests new algorithms that can flexibly be adapted to suit application specifics, while delivering improved empirical performance. We provide concrete illustrations of the benefits of probabilistic numeric algorithms on real scientific problems from astrometry and astronomical imaging, while highlighting open problems with these new algorithms. Finally, we describe how probabilistic numerical methods provide a coherent framework for identifying the uncertainty in calculations performed with a combination of numerical algorithms (e.g. both numerical optimizers and differential equation solvers), potentially allowing the diagnosis (and control) of error sources in computations. PMID:26346321

  17. Probabilistic numerics and uncertainty in computations.

    PubMed

    Hennig, Philipp; Osborne, Michael A; Girolami, Mark

    2015-07-08

    We deliver a call to arms for probabilistic numerical methods: algorithms for numerical tasks, including linear algebra, integration, optimization and solving differential equations, that return uncertainties in their calculations. Such uncertainties, arising from the loss of precision induced by numerical calculation with limited time or hardware, are important for much contemporary science and industry. Within applications such as climate science and astrophysics, the need to make decisions on the basis of computations with large and complex data have led to a renewed focus on the management of numerical uncertainty. We describe how several seminal classic numerical methods can be interpreted naturally as probabilistic inference. We then show that the probabilistic view suggests new algorithms that can flexibly be adapted to suit application specifics, while delivering improved empirical performance. We provide concrete illustrations of the benefits of probabilistic numeric algorithms on real scientific problems from astrometry and astronomical imaging, while highlighting open problems with these new algorithms. Finally, we describe how probabilistic numerical methods provide a coherent framework for identifying the uncertainty in calculations performed with a combination of numerical algorithms (e.g. both numerical optimizers and differential equation solvers), potentially allowing the diagnosis (and control) of error sources in computations.
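
    A minimal example of a numerical method that "returns uncertainties in its calculations", in the spirit of the abstract: this is plain Monte Carlo integration with a standard-error estimate, not one of the Bayesian quadrature methods the authors describe.

```python
import math
import random

def mc_integrate(f, a, b, n=20_000, seed=0):
    """Monte Carlo estimate of the integral of f over [a, b] that also
    reports its own uncertainty as a standard error."""
    rng = random.Random(seed)
    ys = [f(a + (b - a) * rng.random()) for _ in range(n)]
    mean = sum(ys) / n
    var = sum((y - mean) ** 2 for y in ys) / (n - 1)
    return (b - a) * mean, (b - a) * math.sqrt(var / n)

# Integral of sin over [0, pi] is exactly 2; the estimate carries an error bar.
estimate, std_err = mc_integrate(math.sin, 0.0, math.pi)
```

    Downstream code can propagate std_err instead of treating the estimate as exact, which is the basic posture the abstract advocates.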

  18. Detonation in shocked homogeneous high explosives

    SciTech Connect

    Yoo, C.S.; Holmes, N.C.; Souers, P.C.

    1995-11-01

    We have studied shock-induced changes in homogeneous high explosives including nitromethane, tetranitromethane, and single crystals of pentaerythritol tetranitrate (PETN) by using fast time-resolved emission and Raman spectroscopy at a two-stage light-gas gun. The results reveal three distinct steps during which the homogeneous explosives chemically evolve to final detonation products. These are (1) the initiation of shock compressed high explosives after an induction period, (2) thermal explosion of shock-compressed and/or reacting materials, and (3) a decay to a steady-state representing a transition to the detonation of uncompressed high explosives. Based on a gray-body approximation, we have obtained the CJ temperatures: 3800 K for nitromethane, 2950 K for tetranitromethane, and 4100 K for PETN. We compare the data with various thermochemical equilibrium calculations. In this paper we will also show a preliminary result of single-shot time-resolved Raman spectroscopy applied to shock-compressed nitromethane.

  19. Anti-shock garment in postpartum haemorrhage.

    PubMed

    Miller, Suellen; Martin, Hilarie B; Morris, Jessica L

    2008-12-01

    The non-pneumatic anti-shock garment (NASG) is a first-aid device that reverses hypovolaemic shock and decreases obstetric haemorrhage. It consists of articulated neoprene segments that close tightly with Velcro, shunting blood from the lower body to the core organs, elevating blood pressure and increasing preload and cardiac output. This chapter describes the controversial history of the predecessors of NASG, pneumatic anti-shock garments (PASGs), relates case studies of PASG for obstetric haemorrhage, compares pneumatic and non-pneumatic devices and posits why the NASG is more appropriate for low-resource settings. This chapter discusses the only evidence available about NASGs for obstetric haemorrhage - two pre-post pilot trials and three case series - and describes recently initiated randomized cluster trials in Africa. Instructions and an algorithm for ASGs in haemorrhage and shock management are included. Much remains unknown about the NASG, a promising intervention for obstetric haemorrhage management.

  20. Cluster Observations and Kinetic Simulations of Slow Mode Shocks at the Earth's Bow Shock

    NASA Astrophysics Data System (ADS)

    Kucharek, H.; Mouikis, C.; Scholer, M.; Eriksson, S.

    2007-12-01

    Slow shocks in association with reconnection are thought to be the main engine of plasma heating, acceleration, and dynamical changes of the magnetosphere. It is therefore very important to study their structure and the related physical processes. Measurements from the Cluster spacecraft show clear evidence for slow-mode shocks associated with magnetic reconnection in the near-Earth magnetotail in connection with a substorm onset [S. Eriksson et al., 2004]. Most of the knowledge of slow mode shocks was derived from two-fluid theory together with extensive small-scale hybrid simulations of the shock transition. However, it is difficult to perform numerical simulations under realistic plasma conditions. First, it is important to use realistic proton-electron mass ratios to study downstream electron heating. Second, the method used to initiate a slow mode shock might have an impact on the obtained results. Using the piston method, the slow mode shock will run into the downstream region of a preceding fast shock wave. Using switch-off shock conditions, the slow mode shock will run into a quiet plasma environment. However, at the Earth's bow shock turbulence will always be present. Finally, dimensional effects in numerical simulations influence the results. For instance, in 1D simulations all wave vectors are forced into the simulation direction. Furthermore, full particle simulations predict lower downstream ion temperature than hybrid simulations. This is attributed to electron kinetic processes. We performed a number of 1D full particle simulations with the real proton-to-electron mass ratio and multi-dimensional hybrid simulations to address the above-mentioned topics. As input for the numerical simulations we used plasma parameters from the established Cluster database. We investigated the dynamics, the structure, and the evolution of the simulated slow mode shocks using the piston and the "switch-off" methods for the hybrid and the 1D full particle simulations.

  1. Visualizing Uncertainty for Probabilistic Weather Forecasting based on Reforecast Analogs

    NASA Astrophysics Data System (ADS)

    Pelorosso, Leandro; Diehl, Alexandra; Matković, Krešimir; Delrieux, Claudio; Ruiz, Juan; Gröeller, M. Eduard; Bruckner, Stefan

    2016-04-01

    Numerical weather forecasts are prone to uncertainty coming from inaccuracies in the initial and boundary conditions and lack of precision in numerical models. Ensembles of forecasts partially address these problems by considering several runs of the numerical model. Each forecast is generated with different initial and boundary conditions and different model configurations [GR05]. The ensembles can be expressed as probabilistic forecasts, which have proven to be very effective in decision-making processes [DE06]. The ensemble of forecasts represents only some of the possible future atmospheric states, usually underestimating the degree of uncertainty in the predictions [KAL03, PH06]. Hamill and Whitaker [HW06] introduced the "Reforecast Analog Regression" (RAR) technique to overcome the limitations of ensemble forecasting. This technique produces probabilistic predictions based on the analysis of historical forecasts and observations. Visual analytics provides tools for processing, visualizing, and exploring data to get new insights and discover hidden information patterns in an interactive exchange between the user and the application [KMS08]. In this work, we introduce Albero, a visual analytics solution for probabilistic weather forecasting based on the RAR technique. Albero targets at least two different types of users: "forecasters", who are meteorologists working in operational weather forecasting, and "researchers", who work on the construction of numerical prediction models. Albero is an efficient tool for analyzing precipitation forecasts, allowing forecasters to make and communicate quick decisions. Our solution facilitates the analysis of a set of probabilistic forecasts, associated statistical data, observations and uncertainty. A dashboard with small multiples of probabilistic forecasts allows the forecasters to analyze at a glance the distribution of probabilities as a function of time, space, and magnitude. It provides the user with a more
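
    The reforecast-analog idea of building a probability from historical cases can be sketched as follows. The nearest-neighbor matching, the toy forecast archive, and its +2 mm bias are all invented for illustration; the operational technique of Hamill and Whitaker is considerably more elaborate.

```python
def analog_probability(current_forecast, past_forecasts, past_observations,
                       threshold, n_analogs=10):
    """Reforecast-analog sketch: find the past forecasts most similar to
    the current one and use their verifying observations as an ensemble.
    Returns the estimated P(observation >= threshold)."""
    ranked = sorted(range(len(past_forecasts)),
                    key=lambda i: abs(past_forecasts[i] - current_forecast))
    analogs = ranked[:n_analogs]
    hits = sum(past_observations[i] >= threshold for i in analogs)
    return hits / len(analogs)

# Toy archive of 100 precipitation forecasts (mm) biased +2 mm vs. observations.
archive_fc = [float(x) for x in range(100)]
archive_ob = [f - 2.0 for f in archive_fc]
p_exceed = analog_probability(30.0, archive_fc, archive_ob, threshold=25.0)
```

    Because the probability comes from verifying observations rather than the raw model output, systematic model biases (like the +2 mm here) are corrected automatically.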

  2. Probabilistic Modeling of the Renal Stone Formation Module

    NASA Technical Reports Server (NTRS)

    Best, Lauren M.; Myers, Jerry G.; Goodenow, Debra A.; McRae, Michael P.; Jackson, Travis C.

    2013-01-01

    The Integrated Medical Model (IMM) is a probabilistic tool used in mission planning decision making and medical systems risk assessments. The IMM project maintains a database of over 80 medical conditions that could occur during a spaceflight, documenting an incidence rate and end case scenarios for each. In some cases, where observational data are insufficient to adequately define the in-flight medical risk, the IMM utilizes external probabilistic modules to model and estimate the event likelihoods. One such medical event of interest is an unpassed renal stone. Due to a high salt diet and high concentrations of calcium in the blood (due to bone depletion caused by unloading in the microgravity environment), astronauts are at a considerably elevated risk for developing renal calculi (nephrolithiasis) while in space. The lack of observed incidences of nephrolithiasis has led HRP to initiate the development of the Renal Stone Formation Module (RSFM) to create a probabilistic simulator capable of estimating the likelihood of symptomatic renal stone presentation in astronauts on exploration missions. The model consists of two major parts. The first is the probabilistic component, which utilizes probability distributions to assess the range of urine electrolyte parameters and a multivariate regression to transform estimated crystal density and size distributions to the likelihood of the presentation of nephrolithiasis symptoms. The second is a deterministic physical and chemical model of renal stone growth in the kidney developed by Kassemi et al. The probabilistic component of the renal stone model couples the input probability distributions describing the urine chemistry, astronaut physiology, and system parameters with the physical and chemical outputs and inputs of the deterministic stone growth model. These two parts of the model are necessary to capture the uncertainty in the likelihood estimate. The model will be driven by Monte Carlo simulations, continuously

  3. Compaction shock dissipation in low density granular explosive

    NASA Astrophysics Data System (ADS)

    Rao, Pratap T.; Gonthier, Keith A.; Chakravarthy, Sunada

    2016-06-01

    The microstructure of granular explosives can affect dissipative heating within compaction shocks that can trigger combustion and initiate detonation. Because initiation occurs over distances that are much larger than the mean particle size, homogenized (macroscale) theories are often used to describe local thermodynamic states within and behind shocks that are regarded as the average manifestation of thermodynamic fields at the particle scale. In this paper, mesoscale modeling and simulation are used to examine how the initial packing density of granular HMX (C4H8N8O8) having a narrow particle size distribution influences dissipation within resolved, planar compaction shocks. The model tracks the evolution of thermomechanical fields within large ensembles of particles due to pore collapse. Effective shock profiles, obtained by averaging mesoscale fields over space and time, are compared with those given by an independent macroscale compaction theory that predicts the variation in effective thermomechanical fields within shocks due to an imbalance between the solid pressure and a configurational stress. Reducing packing density is shown to reduce the dissipation rate within shocks but increase the integrated dissipated work over shock rise times, which is indicative of enhanced sensitivity. In all cases, dissipated work is related to shock pressure by a density-dependent power law, and shock rise time is related to pressure by a power law having an exponent of negative one.
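
    The two power laws reported in the final sentence can be written down directly. The coefficients and the work exponent below are placeholders (the fitted, density-dependent values are not given in the abstract); only the functional forms are taken from the text.

```python
def dissipated_work(pressure, rho0, coeff=1.0, exponent=1.8):
    """Dissipated work vs. shock pressure: a density-dependent power law
    W = A(rho0) * P**n. coeff and exponent are illustrative placeholders."""
    return coeff * rho0 * pressure ** exponent

def shock_rise_time(pressure, coeff=1.0):
    """Rise time vs. pressure: a power law with an exponent of negative one."""
    return coeff / pressure

# Doubling the shock pressure halves the rise time under the P**-1 law.
t_low, t_high = shock_rise_time(1.0), shock_rise_time(2.0)
w_low, w_high = dissipated_work(1.0, 1.2), dissipated_work(2.0, 1.2)
```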

  4. EXPERIMENTAL STUDY OF SHOCK WAVE DYNAMICS IN MAGNETIZED PLASMAS

    SciTech Connect

    Nirmol K. Podder

    2009-03-17

    In this four-year project (including a one-year extension), the project director and his research team built a shock-wave-plasma apparatus to study shock wave dynamics in glow discharge plasmas in nitrogen and argon at medium pressure (1–20 Torr), and carried out various plasma and shock diagnostics and measurements that led to increased understanding of shock wave acceleration phenomena in plasmas. The measurements clearly show that in the steady-state dc glow discharge plasma, at fixed gas pressure the shock wave velocity increases, its amplitude decreases, and the shock wave disperses non-linearly as a function of the plasma current. In the pulsed discharge plasma, at fixed gas pressure the shock wave dispersion width and velocity increase as a function of the delay between the switch-on of the plasma and shock launch. In the afterglow plasma, at fixed gas pressure the shock wave dispersion width and velocity decrease as a function of the delay between the plasma switch-off and shock launch. These changes are found to be opposite in sense, reverting toward the room-temperature value, which is the initial condition for the plasma-ignition case. The observed shock wave properties in both igniting and afterglow plasmas correlate well with the inferred temperature changes in the two plasmas.

  5. Probabilistic analysis of structures involving random stress-strain behavior

    NASA Technical Reports Server (NTRS)

    Millwater, H. R.; Thacker, B. H.; Harren, S. V.

    1991-01-01

    The present methodology for analysis of structures with random stress-strain behavior characterizes the uniaxial stress-strain curve in terms of (1) elastic modulus, (2) engineering stress at initial yield, (3) initial plastic-hardening slope, (4) engineering stress at the point of ultimate load, and (5) engineering strain at the point of ultimate load. The methodology is incorporated into the Numerical Evaluation of Stochastic Structures Under Stress code for probabilistic structural analysis. The illustrative problem of a thick cylinder under internal pressure, where both the internal pressure and the stress-strain curve are random, is addressed by means of the code. The response value is the cumulative distribution function of the equivalent plastic strain at the inner radius.
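
    The abstract's response quantity, a cumulative distribution function obtained by propagating random inputs, can be illustrated with a toy Monte Carlo version. The response function below is a crude stand-in (not the paper's thick-cylinder elastic-plastic solution), and all distributions are invented for illustration.

```python
import random
from bisect import bisect_right

def empirical_cdf(samples):
    """Return a function x -> fraction of samples <= x (the empirical CDF)."""
    xs = sorted(samples)
    return lambda x: bisect_right(xs, x) / len(xs)

rng = random.Random(1)
samples = []
for _ in range(50_000):
    pressure = rng.gauss(100.0, 10.0)  # internal pressure, MPa (illustrative)
    yield_pt = rng.gauss(80.0, 5.0)    # stress at initial yield, MPa (illustrative)
    # Stand-in response: plastic strain grows with the margin by which the
    # random pressure exceeds the random yield point (not the paper's model).
    samples.append(max(0.0, (pressure - yield_pt) / 1000.0))

cdf_plastic_strain = empirical_cdf(samples)
```

    Evaluating the returned function at a strain level gives the probability that the response stays below that level, which is the form of the answer the paper reports.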

  6. Structure of intermediate shocks and slow shocks in a magnetized plasma with heat conduction

    SciTech Connect

    Tsai, C.L.; Wu, B.H.; Lee, L.C.

    2005-08-15

    The structure of slow shocks and intermediate shocks in the presence of a heat conduction parallel to the local magnetic field is simulated from the set of magnetohydrodynamic equations. This study is an extension of an earlier work [C. L. Tsai, R. H. Tsai, B. H. Wu, and L. C. Lee, Phys. Plasmas 9, 1185 (2002)], in which the effects of heat conduction are examined for the case that the tangential magnetic fields on the two sides of the initial current sheet are exactly antiparallel (By=0). For the By=0 case, a pair of slow shocks is formed as the result of evolution of the initial current sheet, and each slow shock consists of two parts: the isothermal main shock and the foreshock. In the present paper, cases with By≠0 are also considered, in which the evolution process leads to the presence of an additional pair of time-dependent intermediate shocks (TDISs). Across the main shock of the slow shock, jumps in plasma density, velocity, and magnetic field are significant, but the temperature is continuous. The plasma density downstream of the main shock decreases with time, while the downstream temperature increases with time, keeping the downstream pressure constant. The foreshock is featured by a smooth temperature variation and is formed due to the heat flow from the downstream to the upstream region. In contrast to the earlier study, the foreshock is found to reach a steady state with a constant width in the slow shock frame. In cases with By≠0, the plasma density and pressure increase and the magnetic field decreases across the TDIS. The TDIS initially can be embedded in the slow shock's foreshock structure, and then moves out of the foreshock region. With increasing By, the propagation speed of the foreshock leading edge tends to decrease and the foreshock reaches its steady state at an earlier time. Both the pressure and temperature downstream of the main shock decrease with increasing By. The results can be applied to the shock heating in the solar corona and solar wind.

  7. Structure of intermediate shocks and slow shocks in a magnetized plasma with heat conduction

    NASA Astrophysics Data System (ADS)

    Tsai, C. L.; Wu, B. H.; Lee, L. C.

    2005-08-01

    The structure of slow shocks and intermediate shocks in the presence of a heat conduction parallel to the local magnetic field is simulated from the set of magnetohydrodynamic equations. This study is an extension of an earlier work [C. L. Tsai, R. H. Tsai, B. H. Wu, and L. C. Lee, Phys. Plasmas 9, 1185 (2002)], in which the effects of heat conduction are examined for the case that the tangential magnetic fields on the two sides of the initial current sheet are exactly antiparallel (By=0). For the By=0 case, a pair of slow shocks is formed as the result of evolution of the initial current sheet, and each slow shock consists of two parts: the isothermal main shock and the foreshock. In the present paper, cases with By≠0 are also considered, in which the evolution process leads to the presence of an additional pair of time-dependent intermediate shocks (TDISs). Across the main shock of the slow shock, jumps in plasma density, velocity, and magnetic field are significant, but the temperature is continuous. The plasma density downstream of the main shock decreases with time, while the downstream temperature increases with time, keeping the downstream pressure constant. The foreshock is featured by a smooth temperature variation and is formed due to the heat flow from the downstream to the upstream region. In contrast to the earlier study, the foreshock is found to reach a steady state with a constant width in the slow shock frame. In cases with By≠0, the plasma density and pressure increase and the magnetic field decreases across the TDIS. The TDIS initially can be embedded in the slow shock's foreshock structure, and then moves out of the foreshock region. With increasing By, the propagation speed of the foreshock leading edge tends to decrease and the foreshock reaches its steady state at an earlier time. Both the pressure and temperature downstream of the main shock decrease with increasing By. The results can be applied to the shock heating in the solar corona and solar wind.

  8. Probabilistic fatigue life prediction of metallic and composite materials

    NASA Astrophysics Data System (ADS)

    Xiang, Yibing

    Fatigue is one of the most common failure modes for engineering structures, such as aircraft, rotorcraft and aviation transports. Both metallic materials and composite materials are widely used and affected by fatigue damage. Huge uncertainties arise from material properties, measurement noise, imperfect models, future anticipated loads and environmental conditions. These uncertainties are critical issues for accurate remaining useful life (RUL) prediction for engineering structures in service. Probabilistic fatigue prognosis considering various uncertainties is of great importance for structural safety. The objective of this study is to develop probabilistic fatigue life prediction models for metallic materials and composite materials. A fatigue model based on crack growth analysis and the equivalent initial flaw size concept is proposed for metallic materials. Following this, the developed model is extended to include structural geometry effects (notch effect), environmental effects (corroded specimens) and manufacturing effects (shot peening effects). Due to their inhomogeneity and anisotropy, the fatigue model suitable for metallic materials cannot be directly applied to composite materials. A composite fatigue life prediction model is proposed based on a mixed-mode delamination growth model and a stiffness degradation law. After the development of deterministic fatigue models of metallic and composite materials, a general probabilistic life prediction methodology is developed. The proposed methodology employs an efficient Inverse First-Order Reliability Method (IFORM) for the uncertainty propagation in fatigue life prediction. An equivalent stress transformation has been developed to enhance the computational efficiency under realistic random amplitude loading. A systematic reliability-based maintenance optimization framework is proposed for fatigue risk management and mitigation of engineering structures.

  9. Exploration of Advanced Probabilistic and Stochastic Design Methods

    NASA Technical Reports Server (NTRS)

    Mavris, Dimitri N.

    2003-01-01

    The primary objective of the three year research effort was to explore advanced, non-deterministic aerospace system design methods that may have relevance to designers and analysts. The research pursued emerging areas in design methodology and leveraged current fundamental research in the area of design decision-making, probabilistic modeling, and optimization. The specific focus of the three year investigation was oriented toward methods to identify and analyze emerging aircraft technologies in a consistent and complete manner, and to explore means to make optimal decisions based on this knowledge in a probabilistic environment. The research efforts were classified into two main areas. First, Task A of the grant has had the objective of conducting research into the relative merits of possible approaches that account for both multiple criteria and uncertainty in design decision-making. In particular, in the final year of research, the focus was on the comparison and contrasting between three methods researched. Specifically, these three are the Joint Probabilistic Decision-Making (JPDM) technique, Physical Programming, and Dempster-Shafer (D-S) theory. The next element of the research, as contained in Task B, was focused upon exploration of the Technology Identification, Evaluation, and Selection (TIES) methodology developed at ASDL, especially with regards to identification of research needs in the baseline method through implementation exercises. The end result of Task B was the documentation of the evolution of the method with time and a technology transfer to the sponsor regarding the method, such that an initial capability for execution could be obtained by the sponsor. Specifically, the results of year 3 efforts were the creation of a detailed tutorial for implementing the TIES method. Within the tutorial package, templates and detailed examples were created for learning and understanding the details of each step. For both research tasks, sample files and

  10. Probabilistic Exposure Analysis for Chemical Risk Characterization

    PubMed Central

    Bogen, Kenneth T.; Cullen, Alison C.; Frey, H. Christopher; Price, Paul S.

    2009-01-01

    This paper summarizes the state of the science of probabilistic exposure assessment (PEA) as applied to chemical risk characterization. Current probabilistic risk analysis methods applied to PEA are reviewed. PEA within the context of risk-based decision making is discussed, including probabilistic treatment of related uncertainty, interindividual heterogeneity, and other sources of variability. Key examples of recent experience gained in assessing human exposures to chemicals in the environment, and other applications to chemical risk characterization and assessment, are presented. It is concluded that, although improvements continue to be made, existing methods suffice for effective application of PEA to support quantitative analyses of the risk of chemically induced toxicity that play an increasing role in key decision-making objectives involving health protection, triage, civil justice, and criminal justice. Different types of information required to apply PEA to these different decision contexts are identified, and specific PEA methods are highlighted that are best suited to exposure assessment in these separate contexts. PMID:19223660

  11. bayesPop: Probabilistic Population Projections

    PubMed Central

    Ševčíková, Hana; Raftery, Adrian E.

    2016-01-01

    We describe bayesPop, an R package for producing probabilistic population projections for all countries. This uses probabilistic projections of total fertility and life expectancy generated by Bayesian hierarchical models. It produces a sample from the joint posterior predictive distribution of future age- and sex-specific population counts, fertility rates and mortality rates, as well as future numbers of births and deaths. It provides graphical ways of summarizing this information, including trajectory plots and various kinds of probabilistic population pyramids. An expression language is introduced which allows the user to produce the predictive distribution of a wide variety of derived population quantities, such as the median age or the old age dependency ratio. The package produces aggregated projections for sets of countries, such as UN regions or trading blocs. The methodology has been used by the United Nations to produce their most recent official population projections for all countries, published in the World Population Prospects. PMID:28077933

  12. bayesPop: Probabilistic Population Projections.

    PubMed

    Ševčíková, Hana; Raftery, Adrian E

    2016-12-01

    We describe bayesPop, an R package for producing probabilistic population projections for all countries. This uses probabilistic projections of total fertility and life expectancy generated by Bayesian hierarchical models. It produces a sample from the joint posterior predictive distribution of future age- and sex-specific population counts, fertility rates and mortality rates, as well as future numbers of births and deaths. It provides graphical ways of summarizing this information, including trajectory plots and various kinds of probabilistic population pyramids. An expression language is introduced which allows the user to produce the predictive distribution of a wide variety of derived population quantities, such as the median age or the old age dependency ratio. The package produces aggregated projections for sets of countries, such as UN regions or trading blocs. The methodology has been used by the United Nations to produce their most recent official population projections for all countries, published in the World Population Prospects.

  13. A probabilistic approach to spectral graph matching.

    PubMed

    Egozi, Amir; Keller, Yosi; Guterman, Hugo

    2013-01-01

    Spectral Matching (SM) is a computationally efficient approach to approximate the solution of pairwise matching problems that are NP-hard. In this paper, we present a probabilistic interpretation of spectral matching schemes and derive a novel Probabilistic Matching (PM) scheme that is shown to outperform previous approaches. We show that spectral matching can be interpreted as a Maximum Likelihood (ML) estimate of the assignment probabilities and that the Graduated Assignment (GA) algorithm can be cast as a Maximum a Posteriori (MAP) estimator. Based on this analysis, we derive a ranking scheme for spectral matchings based on their reliability, and propose a novel iterative probabilistic matching algorithm that relaxes some of the implicit assumptions used in prior works. We experimentally show our approaches to outperform previous schemes when applied to exhaustive synthetic tests as well as the analysis of real image sequences.
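
    The spectral matching idea described above can be sketched as follows: build a pairwise affinity matrix over candidate assignments, take its leading eigenvector by power iteration as assignment confidences (the abstract's ML reading), and discretize greedily. The point sets, Gaussian affinity, and greedy step below are illustrative choices, not the paper's exact scheme.

```python
import math
import itertools

# Two point sets related by a pure translation; the correct match is i -> i.
P = [(0.0, 0.0), (3.0, 0.0), (0.0, 4.0)]
Q = [(10.0, 1.0), (13.0, 1.0), (10.0, 5.0)]

cands = list(itertools.product(range(3), range(3)))  # assignment hypotheses (i, a)

def dist(u, v):
    return math.hypot(u[0] - v[0], u[1] - v[1])

# Pairwise affinity: two assignments agree if they preserve inter-point distances.
n = len(cands)
W = [[0.0] * n for _ in range(n)]
for r, (i, a) in enumerate(cands):
    for c, (j, b) in enumerate(cands):
        if i != j and a != b:
            W[r][c] = math.exp(-(dist(P[i], P[j]) - dist(Q[a], Q[b])) ** 2)

# Power iteration for the leading eigenvector (assignment confidences).
x = [1.0] * n
for _ in range(100):
    y = [sum(W[r][c] * x[c] for c in range(n)) for r in range(n)]
    norm = math.sqrt(sum(v * v for v in y))
    x = [v / norm for v in y]

# Greedy discretization: repeatedly take the highest-confidence assignment.
match, used_a = {}, set()
for r in sorted(range(n), key=lambda r: -x[r]):
    i, a = cands[r]
    if i not in match and a not in used_a:
        match[i] = a
        used_a.add(a)
```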

  14. Probabilistic Cue Combination: Less is More

    PubMed Central

    Yurovsky, Daniel; Boyer, Ty W.; Smith, Linda B.; Yu, Chen

    2012-01-01

    Learning about the structure of the world requires learning probabilistic relationships: rules in which cues do not predict outcomes with certainty. However, in some cases, the ability to track probabilistic relationships is a handicap, leading adults to perform non-normatively in prediction tasks. For example, in the dilution effect, predictions made from the combination of two cues of different strengths are less accurate than those made from the stronger cue alone. Here we show that dilution is an adult problem; 11-month-old infants combine strong and weak predictors normatively. These results extend and add support for the less is more hypothesis: limited cognitive resources can lead children to represent probabilistic information differently from adults, and this difference in representation can have important downstream consequences for prediction. PMID:23432826

  15. Shock absorber control system

    SciTech Connect

    Nakano, Y.; Ohira, M.; Ushida, M.; Miyagawa, T.; Shimodaira, T.

    1987-01-13

    A shock absorber control system is described for controlling a dampening force of a shock absorber of a vehicle comprising: setting means for setting a desired dampening force changeable within a predetermined range; drive means for driving the shock absorber to change the dampening force of the shock absorber linearly; control means for controlling the drive means in accordance with the desired dampening force when the setting of the desired dampening force has been changed; detecting means for detecting an actual dampening force of the shock absorber; and correcting means for correcting the dampening force of the shock absorber by controlling the drive means in accordance with a difference between the desired dampening force and the detected actual dampening force.
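
    The "correcting means" in the claim amounts to closed-loop force control: drive the absorber in proportion to the difference between desired and detected damping force. A minimal sketch, assuming a linear toy plant and a simple proportional correction (the patent does not specify a particular control law):

```python
def correct_damping(desired, actual, drive_position, gain, lo=0.0, hi=1.0):
    """One correction step: nudge the drive in proportion to the error
    between desired and detected damping force, clamped to the
    actuator's travel range."""
    error = desired - actual
    return min(hi, max(lo, drive_position + gain * error))

def plant(drive_position):
    """Toy plant: damping force (N) responds linearly to drive position."""
    return 200.0 * drive_position

pos = 0.2
for _ in range(50):
    pos = correct_damping(desired=120.0, actual=plant(pos),
                          drive_position=pos, gain=0.002)
final_force = plant(pos)  # converges toward the 120 N setpoint
```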

  16. Degradation monitoring using probabilistic inference

    NASA Astrophysics Data System (ADS)

    Alpay, Bulent

    In order to increase safety and improve economy and performance in a nuclear power plant (NPP), the source and extent of component degradations should be identified before failures and breakdowns occur. A degradation monitoring system is also crucial for the next generation of NPPs, which are designed to have a long core life and high fuel burnup, in order to keep the reactor in a safe state, to meet the designed reactor core lifetime and to optimize the scheduled maintenance. Model-based methods are based on determining the inconsistencies between the actual and expected behavior of the plant, and use these inconsistencies for detection and diagnostics of degradations. By defining degradation as a random abrupt change from the nominal to a constant degraded state of a component, we employed nonlinear filtering techniques based on state/parameter estimation. We utilized a Bayesian recursive estimation formulation in the sequential probabilistic inference framework and constructed a hidden Markov model to represent a general physical system. By addressing a filter's inability to estimate an abrupt change, which is called the oblivious filter problem in nonlinear extensions of Kalman filtering, and the sample impoverishment problem in particle filtering, we developed techniques to modify filtering algorithms by utilizing additional data sources to improve the filter's response to this problem. We utilized a reliability degradation database that can be constructed from plant-specific operational experience and test and maintenance reports to generate proposal densities for probable degradation modes. These are used in a multiple hypothesis testing algorithm. We then test samples drawn from these proposal densities against the particle filtering estimates based on the Bayesian recursive estimation formulation with the Metropolis-Hastings algorithm, a well-known Markov chain Monte Carlo (MCMC) method. This multiple hypothesis testing
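
    A minimal bootstrap particle filter in the spirit described above, with an occasional jump proposal standing in for the degradation-database proposals. The dynamics, noise levels, jump rate, and degradation values are all invented for illustration; the thesis's actual formulation is far richer.

```python
import math
import random

random.seed(1)

# Toy system: observation y = theta + noise; the parameter theta jumps
# abruptly from its nominal value 1.0 to a degraded value 0.5 at t = 25.
true_theta = [1.0] * 25 + [0.5] * 25
obs = [t + random.gauss(0, 0.05) for t in true_theta]

N = 500
particles = [1.0 + random.gauss(0, 0.02) for _ in range(N)]

def likelihood(y, theta, sigma=0.05):
    return math.exp(-0.5 * ((y - theta) / sigma) ** 2)

estimates = []
for y in obs:
    moved = []
    for th in particles:
        # Occasionally propose an abrupt jump to a known degradation mode
        # (a stand-in for proposals drawn from a degradation database);
        # this counters the oblivious-filter / impoverishment problem.
        if random.random() < 0.05:
            th = random.choice([1.0, 0.5])
        moved.append(th + random.gauss(0, 0.01))
    w = [likelihood(y, th) for th in moved]
    total = sum(w)
    particles = random.choices(moved, weights=[wi / total for wi in w], k=N)
    estimates.append(sum(particles) / N)  # posterior-mean state estimate
```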

  17. A Probabilistic Cell Tracking Algorithm

    NASA Astrophysics Data System (ADS)

    Steinacker, Reinhold; Mayer, Dieter; Leiding, Tina; Lexer, Annemarie; Umdasch, Sarah

    2013-04-01

    The research described below was carried out during the EU project Lolight - development of a low-cost, novel and accurate lightning mapping and thunderstorm (supercell) tracking system. The project aims to develop a small-scale tracking method to determine and nowcast characteristic trajectories and velocities of convective cells and cell complexes. The results of the algorithm will provide a higher accuracy than current locating systems distributed on a coarse scale. Input data for the developed algorithm are two temporally separated lightning density fields. Additionally, a Monte Carlo method minimizing a cost function is utilized, which leads to a probabilistic forecast for the movement of thunderstorm cells. In the first step the correlation coefficients between the first and the second density field are computed. Hence, the first field is shifted by all shifting vectors which are physically allowed. The maximum length of each vector is determined by the maximum possible speed of thunderstorm cells and the difference in time between both density fields. To eliminate ambiguities in the determination of directions and velocities, the so-called random walker of the Monte Carlo process is used. Using this method a grid point is selected at random. Moreover, one vector out of all predefined shifting vectors is suggested - also at random, but with a probability that is related to the correlation coefficient. If this exchange of shifting vectors reduces the cost function, the new direction and velocity are accepted. Otherwise it is discarded. This process is repeated until the change of the cost function falls below a defined threshold. The Monte Carlo run gives information about the percentage of accepted shifting vectors for all grid points. In the course of the forecast, amplifications of cell density are permitted. For this purpose, intensity changes between the investigated areas of both density fields are taken into account. Knowing the direction and speed of thunderstorm
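
    The first step described above, scoring all physically allowed shifting vectors by the correlation between the shifted first field and the second field, can be sketched as follows. The synthetic Gaussian-blob density fields and periodic shifts are simplifying assumptions, not part of the Lolight algorithm.

```python
import math

def make_field(cx, cy, size=12):
    """Toy lightning-density field: a Gaussian blob centred at (cx, cy)."""
    return [[math.exp(-((x - cx) ** 2 + (y - cy) ** 2) / 4.0)
             for x in range(size)] for y in range(size)]

# A cell that moves 2 grid cells east and 1 cell south between observations.
f1 = make_field(4, 5)
f2 = make_field(6, 6)

def correlation(a, b):
    """Pearson correlation coefficient between two fields."""
    xs = [v for row in a for v in row]
    ys = [v for row in b for v in row]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def shift(field, dx, dy):
    """Shift a field by (dx, dy) with periodic wrap-around."""
    size = len(field)
    return [[field[(y - dy) % size][(x - dx) % size] for x in range(size)]
            for y in range(size)]

# Score every physically allowed shift (assumed max cell speed: 3 cells).
best = max(((dx, dy) for dx in range(-3, 4) for dy in range(-3, 4)),
           key=lambda s: correlation(shift(f1, *s), f2))
```

    In the full algorithm these correlation scores feed the random-walker acceptance probabilities rather than a single argmax.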

  18. Why are probabilistic laws governing quantum mechanics and neurobiology?

    NASA Astrophysics Data System (ADS)

    Kröger, Helmut

    2005-08-01

    We address the question: why are the dynamical laws governing quantum mechanics and neuroscience probabilistic in nature rather than deterministic? We discuss some ideas showing that the probabilistic option offers advantages over the deterministic one.

  19. Patient with Takayasu arteritis presented as cardiogenic shock.

    PubMed

    Tacoy, Gulten; Akyel, Ahmet; Tavil, Yusuf; Cengel, Atiye

    2010-11-01

    Takayasu arteritis is a chronic inflammatory disease involving the aorta, its main branches and affects particularly young women. Symptomatic coronary artery disease and cardiogenic shock are rare signs of Takayasu arteritis. We describe a 47-year-old male patient in whom cardiogenic shock was the initial presentation of Takayasu arteritis with coronary, subclavian, celiac and total abdominal aortic occlusion.

  20. Induced thermoluminescence study of experimentally shock-loaded oligoclase

    NASA Technical Reports Server (NTRS)

    Ivliev, A. I.; Kashkarov, L. L.; Badjukov, D. D.

    1993-01-01

    Artificially induced thermoluminescence (TL) in oligoclase samples shock-loaded up to 27 GPa was measured. A substantial increase in TL sensitivity relative to the total gamma-ray irradiation dose was observed only in the samples shocked to 27 GPa. This result can be explained by the initiation of additional radiation damage in the shocked oligoclase crystal lattice.

  1. Towards Probabilistic Modelling in Event-B

    NASA Astrophysics Data System (ADS)

    Tarasyuk, Anton; Troubitsyna, Elena; Laibinis, Linas

    Event-B provides us with a powerful framework for correct-by-construction system development. However, while developing dependable systems we should not only guarantee their functional correctness but also quantitatively assess their dependability attributes. In this paper we investigate how to conduct probabilistic assessment of reliability of control systems modeled in Event-B. We show how to transform an Event-B model into a Markov model amenable to probabilistic reliability analysis. Our approach enables integration of reasoning about correctness with quantitative analysis of reliability.
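
    One way to see the Event-B-to-Markov idea: once a model has been reduced to a discrete-time Markov chain, reliability is the probability mass remaining outside the failed state after n steps. A sketch with an invented three-state chain (the transition probabilities are illustrative, not derived from any Event-B model):

```python
# States: 0 = operational, 1 = degraded, 2 = failed (absorbing).
P = [
    [0.95, 0.04, 0.01],
    [0.00, 0.90, 0.10],
    [0.00, 0.00, 1.00],
]

def step(dist, P):
    """One discrete time step: multiply the state distribution by P."""
    n = len(P)
    return [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]

dist = [1.0, 0.0, 0.0]   # start fully operational
for _ in range(50):       # 50 discrete time steps
    dist = step(dist, P)

reliability = dist[0] + dist[1]   # probability of not having failed yet
```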

  2. Probabilistic assessment of uncertain adaptive hybrid composites

    NASA Technical Reports Server (NTRS)

    Shiao, Michael C.; Singhal, Surendra N.; Chamis, Christos C.

    1994-01-01

    Adaptive composite structures using actuation materials, such as piezoelectric fibers, were assessed probabilistically utilizing intraply hybrid composite mechanics in conjunction with probabilistic composite structural analysis. Uncertainties associated with the actuation material as well as the uncertainties in the regular (traditional) composite material properties were quantified and considered in the assessment. Static and buckling analyses were performed for rectangular panels with various boundary conditions and different control arrangements. The probability density functions of the structural behavior, such as maximum displacement and critical buckling load, were computationally simulated. The results of the assessment indicate that improved design and reliability can be achieved with actuation material.

  3. A Probabilistic Approach to Aeropropulsion System Assessment

    NASA Technical Reports Server (NTRS)

    Tong, Michael T.

    2000-01-01

    A probabilistic approach is described for aeropropulsion system assessment. To demonstrate this approach, the technical performance of a wave rotor-enhanced gas turbine engine (i.e. engine net thrust, specific fuel consumption, and engine weight) is assessed. The assessment accounts for the uncertainties in component efficiencies/flows and mechanical design variables, using probability distributions. The results are presented in the form of cumulative distribution functions (CDFs) and sensitivity analyses, and are compared with those from the traditional deterministic approach. The comparison shows that the probabilistic approach provides a more realistic and systematic way to assess an aeropropulsion system.
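
    In its simplest form, the probabilistic assessment described above reduces to propagating input probability distributions through a response model and reading off cumulative distribution functions. A sketch with a hypothetical thrust response and assumed input distributions; none of the numbers come from the paper:

```python
import bisect
import random

random.seed(2)

def net_thrust(efficiency, mass_flow):
    """Hypothetical response surface (kN), not the paper's engine model."""
    return 50.0 * efficiency * mass_flow

# Uncertain inputs modelled as probability distributions (assumed values).
samples = sorted(
    net_thrust(random.gauss(0.88, 0.02), random.gauss(1.00, 0.05))
    for _ in range(10000)
)

def cdf(x):
    """Empirical cumulative probability that net thrust <= x."""
    return bisect.bisect_right(samples, x) / len(samples)

median_thrust = samples[len(samples) // 2]
```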

  4. The probabilistic approach to human reasoning.

    PubMed

    Oaksford, M; Chater, N

    2001-08-01

    A recent development in the cognitive science of reasoning has been the emergence of a probabilistic approach to the behaviour observed on ostensibly logical tasks. According to this approach the errors and biases documented on these tasks occur because people import their everyday uncertain reasoning strategies into the laboratory. Consequently participants' apparently irrational behaviour is the result of comparing it with an inappropriate logical standard. In this article, we contrast the probabilistic approach with other approaches to explaining rationality, and then show how it has been applied to three main areas of logical reasoning: conditional inference, Wason's selection task and syllogistic reasoning.

  5. When shock waves collide

    SciTech Connect

    Martinez, D.; Hartigan, P.; Frank, A.; Hansen, E.; Yirak, K.; Liao, A. S.; Graham, P.; Foster, J.; Wilde, B.; Blue, B.; Rosen, P.; Farley, D.; Paguio, R.

    2016-06-01

    Supersonic outflows from objects as varied as stellar jets, massive stars, and novae often exhibit multiple shock waves that overlap one another. When the intersection angle between two shock waves exceeds a critical value, the system reconfigures its geometry to create a normal shock known as a Mach stem where the shocks meet. Mach stems are important for interpreting emission-line images of shocked gas because a normal shock produces higher postshock temperatures, and therefore a higher-excitation spectrum than does an oblique shock. In this paper, we summarize the results of a series of numerical simulations and laboratory experiments designed to quantify how Mach stems behave in supersonic plasmas that are the norm in astrophysical flows. The experiments test analytical predictions for critical angles where Mach stems should form, and quantify how Mach stems grow and decay as intersection angles between the incident shock and a surface change. While small Mach stems are destroyed by surface irregularities and subcritical angles, larger ones persist in these situations and can regrow if the intersection angle changes to become more favorable. Furthermore, the experimental and numerical results show that although Mach stems occur only over a limited range of intersection angles and size scales, within these ranges they are relatively robust, and hence are a viable explanation for variable bright knots observed in Hubble Space Telescope images at the intersections of some bow shocks in stellar jets.

  6. When shock waves collide

    DOE PAGES

    Martinez, D.; Hartigan, P.; Frank, A.; ...

    2016-06-01

    Supersonic outflows from objects as varied as stellar jets, massive stars, and novae often exhibit multiple shock waves that overlap one another. When the intersection angle between two shock waves exceeds a critical value, the system reconfigures its geometry to create a normal shock known as a Mach stem where the shocks meet. Mach stems are important for interpreting emission-line images of shocked gas because a normal shock produces higher postshock temperatures, and therefore a higher-excitation spectrum than does an oblique shock. In this paper, we summarize the results of a series of numerical simulations and laboratory experiments designed to quantify how Mach stems behave in supersonic plasmas that are the norm in astrophysical flows. The experiments test analytical predictions for critical angles where Mach stems should form, and quantify how Mach stems grow and decay as intersection angles between the incident shock and a surface change. While small Mach stems are destroyed by surface irregularities and subcritical angles, larger ones persist in these situations and can regrow if the intersection angle changes to become more favorable. Furthermore, the experimental and numerical results show that although Mach stems occur only over a limited range of intersection angles and size scales, within these ranges they are relatively robust, and hence are a viable explanation for variable bright knots observed in Hubble Space Telescope images at the intersections of some bow shocks in stellar jets.

  7. Anti-Shock Garment

    NASA Technical Reports Server (NTRS)

    1996-01-01

    Ames Research Center developed a prototype pressure suit for hemophiliac children, based on research of astronauts' physiological responses in microgravity. Zoex Corporation picked up the design and patents and developed an anti-shock garment for paramedic use. Marketed by Dyna Med, the suit reverses the effect of shock on the body's blood distribution by applying counterpressure to the legs and abdomen, returning blood to vital organs and stabilizing body pressure until the patient reaches a hospital. The DMAST (Dyna Med Anti-Shock Trousers) employ lower pressure than other shock garments, and are non-inflatable.

  8. Development of an Explosively Driven Sustained Shock Generator for Shock Wave Studies

    NASA Astrophysics Data System (ADS)

    Taylor, P.; Cook, I. T.; Salisbury, D. A.

    2004-07-01

    Investigation of explosive initiation phenomena close to the initiation threshold with explosively driven shock waves is difficult due to the attenuative nature of the pressure input. The design and experimental testing of a sustained shock wave generator based on an explosive plane wave lens and impedance-mismatched low-density foam and high-impedance layers is described. Calibration experiments to develop a 1-D calculational model for the plane wave lens and booster charge were performed. A calculational study was undertaken to determine the sensitivity of the output pulse to plate and foam thicknesses and foam density. A geometry which generates a 24 kbar, almost flat-topped shock wave with a duration of over 4 μs into the HMX-based plastic explosive EDC37 was defined and tested. Experimental shock profile data are compared with pre-shot predictions from the PETRA Eulerian hydrocode incorporating a "snowplough" or simple locking model for the foam. A reasonable match to the observed magnitude and profile of the initial shock is achieved, although the timing of subsequent shock waves is less well matched.

  9. Boosting probabilistic graphical model inference by incorporating prior knowledge from multiple sources.

    PubMed

    Praveen, Paurush; Fröhlich, Holger

    2013-01-01

    Inferring regulatory networks from experimental data via probabilistic graphical models is a popular framework to gain insights into biological systems. However, the inherent noise in experimental data coupled with a limited sample size reduces the performance of network reverse engineering. Prior knowledge from existing sources of biological information can address this low signal to noise problem by biasing the network inference towards biologically plausible network structures. Although integrating various sources of information is desirable, their heterogeneous nature makes this task challenging. We propose two computational methods to incorporate various information sources into a probabilistic consensus structure prior to be used in graphical model inference. Our first model, called Latent Factor Model (LFM), assumes a high degree of correlation among external information sources and reconstructs a hidden variable as a common source in a Bayesian manner. The second model, a Noisy-OR, picks up the strongest support for an interaction among information sources in a probabilistic fashion. Our extensive computational studies on KEGG signaling pathways as well as on gene expression data from breast cancer and yeast heat shock response reveal that both approaches can significantly enhance the reconstruction accuracy of Bayesian Networks compared to other competing methods as well as to the situation without any prior. Our framework allows for using diverse information sources, like pathway databases, GO terms and protein domain data, etc. and is flexible enough to integrate new sources, if available.
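
    The Noisy-OR combination described above has a simple closed form: an interaction receives prior support unless every information source independently fails to support it. A sketch with invented source supports and reliabilities (the paper's actual parameterization may differ):

```python
def noisy_or(supports, reliabilities):
    """Consensus prior probability of an interaction: present unless
    every source independently fails to support it."""
    p_absent = 1.0
    for s, q in zip(supports, reliabilities):
        p_absent *= 1.0 - q * s
    return 1.0 - p_absent

# Three hypothetical sources: pathway DB, GO similarity, protein domain data.
supports = [1.0, 0.6, 0.0]        # evidence each source gives the interaction
reliabilities = [0.7, 0.5, 0.9]   # trust placed in each source
prior = noisy_or(supports, reliabilities)  # 1 - (1-0.7)(1-0.3)(1-0) = 0.79
```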

  10. An overview of engineering concepts and current design algorithms for probabilistic structural analysis

    NASA Technical Reports Server (NTRS)

    Duffy, S. F.; Hu, J.; Hopkins, D. A.

    1995-01-01

    The article begins by examining the fundamentals of traditional deterministic design philosophy. The initial section outlines the concepts of failure criteria and limit state functions, two traditional notions that are embedded in deterministic design philosophy. This is followed by a discussion regarding safety factors (a possible limit state function) and the common utilization of statistical concepts in deterministic engineering design approaches. Next, the fundamental aspects of a probabilistic failure analysis are explored and it is shown that the deterministic design concepts mentioned in the initial portion of the article are embedded in probabilistic design methods. For components fabricated from ceramic materials (and other similarly brittle materials) the probabilistic design approach yields the widely used Weibull analysis after suitable assumptions are incorporated. The authors point out that Weibull analysis provides the rare instance where closed-form solutions are available for a probabilistic failure analysis. Since numerical methods are usually required to evaluate component reliabilities, a section on Monte Carlo methods is included to introduce the concept. The article concludes with a presentation of the technical aspects that support the numerical method known as fast probability integration (FPI). This includes a discussion of the Hasofer-Lind and Rackwitz-Fiessler approximations.
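
    The closed-form Weibull result mentioned above gives failure probability for a brittle component directly from a two-parameter Weibull law, P_f = 1 - exp(-(σ/σ₀)^m). A sketch with assumed, not material-specific, parameters:

```python
import math

def weibull_failure_prob(stress, scale, modulus):
    """Two-parameter Weibull probability of failure at a given stress:
    P_f = 1 - exp(-(stress/scale)**modulus)."""
    return 1.0 - math.exp(-((stress / scale) ** modulus))

# Illustrative ceramic-like parameters: scale 300 MPa, Weibull modulus 10.
pf_low = weibull_failure_prob(150.0, 300.0, 10.0)   # well below the scale stress
pf_high = weibull_failure_prob(300.0, 300.0, 10.0)  # at the scale stress: 1 - 1/e
```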

  11. A Probabilistic Feature Map-Based Localization System Using a Monocular Camera.

    PubMed

    Kim, Hyungjin; Lee, Donghwa; Oh, Taekjun; Choi, Hyun-Taek; Myung, Hyun

    2015-08-31

    Image-based localization is one of the most widely researched localization techniques in the robotics and computer vision communities. As enormous image data sets are provided through the Internet, many studies on estimating a location with a pre-built image-based 3D map have been conducted. Most research groups use numerous image data sets that contain sufficient features. In contrast, this paper focuses on image-based localization in the case of insufficient images and features. A more accurate localization method is proposed based on a probabilistic map using 3D-to-2D matching correspondences between a map and a query image. The probabilistic feature map is generated in advance by probabilistic modeling of the sensor system as well as the uncertainties of camera poses. Using the conventional PnP algorithm, an initial camera pose is estimated on the probabilistic feature map. The proposed algorithm is optimized from the initial pose by minimizing Mahalanobis distance errors between features from the query image and the map to improve accuracy. To verify that the localization accuracy is improved, the proposed algorithm is compared with the conventional algorithm in simulated and real environments.

  12. An interpretation of electrostatic density shocks in space plasma

    SciTech Connect

    Shi Jiankui; Zhang Tielong; Torkar, Klaus; Liu Zhenxing

    2005-08-15

    A physical model of electrostatic shocks observed in space plasma is established by deriving the 'Sagdeev potential' from the magnetohydrodynamic equations in a cylindrical coordinate system. The results show that the electrostatic density shock and its corresponding solitary electric-field structure can develop from an ion acoustic wave or an ion cyclotron wave if the Mach number and the initial electric field satisfy some conditions. Some features of the shock wave are discussed. The result can be used to interpret the electrostatic shock observed in geospace plasma.

  13. Free-surface light emission from shocked Teflon

    NASA Astrophysics Data System (ADS)

    Gallagher, Kathleen G.; Yang, Wenbo; Ahrens, Thomas J.

    1994-07-01

    Shock-initiated light emission experiments were performed on Teflon shock-loaded to pressures up to ~17 GPa. Radiances up to 600×10⁶ W·m⁻²·sr⁻¹·nm⁻¹ were measured over a range of 390 to 820 nm. We have measured the spectra of light emitted upon reflection of the shock at the free surface and observed it to be distinctly non-thermal in nature. The light appears to result from bond destruction such as that observed in shock recovery experiments on Teflon and in quasistatic experiments conducted on other polymers.

  14. Ultrafast dynamic ellipsometry and spectroscopy of laser shocked materials

    SciTech Connect

    Bolme, Cynthia A; Mc Grane, Shawn D; Dang, Nhan C; Whitley, Von H; Moore, David S.

    2011-01-20

    Ultrafast dynamic ellipsometry is used to measure the material motion and changes in the optical refractive index of laser shock compressed materials. This diagnostic has shown us that ultrafast laser-driven shocks are the same as shocks on longer timescales and larger length scales. We have added spectroscopic diagnostics of infrared absorption, ultraviolet-visible transient absorption, and femtosecond stimulated Raman scattering to begin probing the initiation chemistry that occurs in shocked reactive materials. We have also used femtosecond stimulated Raman scattering to measure the vibrational temperature of materials using the Stokes gain to anti-Stokes loss ratio.
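
    The Stokes to anti-Stokes thermometry mentioned above follows, in its simplest Boltzmann form, I_aS/I_S ≈ exp(-hcν̃/(k_B·T)). The sketch below inverts this for T, neglecting the frequency-dependent (ω ± ω_v)⁴ prefactor that a careful analysis would include; the mode wavenumber and ratio are illustrative numbers, not data from the paper.

```python
import math

H = 6.62607015e-34   # Planck constant, J s
C = 2.99792458e10    # speed of light, cm/s
KB = 1.380649e-23    # Boltzmann constant, J/K

def vibrational_temperature(wavenumber_cm, stokes_to_antistokes):
    """Temperature (K) from the Stokes / anti-Stokes intensity ratio,
    using the simple Boltzmann form I_aS/I_S = exp(-h*c*nu/(kB*T)).
    The frequency-dependent scattering prefactor is neglected here."""
    return H * C * wavenumber_cm / (KB * math.log(stokes_to_antistokes))

# A 1000 cm^-1 mode with a Stokes/anti-Stokes ratio of 5 (illustrative):
T = vibrational_temperature(1000.0, 5.0)   # roughly 894 K
```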

  15. Proton Acceleration at Oblique Shocks

    NASA Astrophysics Data System (ADS)

    Galinsky, V. L.; Shevchenko, V. I.

    2011-06-01

    Acceleration at the shock waves propagating oblique to the magnetic field is studied using a recently developed theoretical/numerical model. The model assumes that resonant hydromagnetic wave-particle interaction is the most important physical mechanism relevant to motion and acceleration of particles as well as to excitation and damping of waves. The treatment of plasma and waves is self-consistent and time dependent. The model uses conservation laws and resonance conditions to find where waves will be generated or damped, and hence particles will be pitch-angle-scattered. The total distribution is included in the model and neither introduction of separate population of seed particles nor some ad hoc escape rate of accelerated particles is needed. Results of the study show agreement with diffusive shock acceleration models in the prediction of power spectra for accelerated particles in the upstream region. However, they also reveal the presence of spectral break in the high-energy part of the spectra. The role of the second-order Fermi-like acceleration at the initial stage of the acceleration is discussed. The test case used in the paper is based on ISEE-3 data collected for the shock of 1978 November 12.

  16. PROTON ACCELERATION AT OBLIQUE SHOCKS

    SciTech Connect

    Galinsky, V. L.; Shevchenko, V. I.

    2011-06-20

    Acceleration at shock waves propagating obliquely to the magnetic field is studied using a recently developed theoretical/numerical model. The model assumes that resonant hydromagnetic wave-particle interaction is the most important physical mechanism relevant to the motion and acceleration of particles as well as to the excitation and damping of waves. The treatment of plasma and waves is self-consistent and time dependent. The model uses conservation laws and resonance conditions to find where waves will be generated or damped, and hence where particles will be pitch-angle-scattered. The total distribution is included in the model, so neither the introduction of a separate population of seed particles nor an ad hoc escape rate of accelerated particles is needed. Results of the study show agreement with diffusive shock acceleration models in the prediction of power spectra for accelerated particles in the upstream region. However, they also reveal the presence of a spectral break in the high-energy part of the spectra. The role of second-order Fermi-like acceleration at the initial stage of the acceleration is discussed. The test case used in the paper is based on ISEE-3 data collected for the shock of 1978 November 12.

  17. Mechanical Properties of Shock-Damaged Rocks

    NASA Technical Reports Server (NTRS)

    He, Hongliang; Ahrens, T. J.

    1994-01-01

    Stress-strain tests were performed both on shock-damaged gabbro and limestone. The effective Young's modulus decreases with increasing initial damage parameter value, and an apparent work-softening process occurs prior to failure. To further characterize shock-induced microcracks, the longitudinal elastic wave velocity behavior of shock-damaged gabbro in the direction of compression up to failure was measured using an acoustic transmission technique under uniaxial loading. A dramatic increase in velocity was observed for the static compressive stress range of 0-50 MPa. Above that stress range, the velocity behavior of lightly damaged (D₀ < 0.1) gabbro is almost equal to that of unshocked gabbro. The failure strength of heavily damaged (D₀ > 0.1) gabbro is approx. 100-150 MPa, much lower than that of lightly damaged and unshocked gabbros (approx. 230-260 MPa). Following Nur's theory, the crack shape distribution was analyzed. The shock-induced cracks in gabbro appear to be largely thin penny-shaped cracks with c/a values below 5 × 10⁻⁴. Moreover, the applicability of Ashby and Sammis's theory relating failure strength and damage parameter of shock-damaged rocks was examined and was found to yield a good estimate of the relation of the shock-induced deficit in elastic modulus to the deficit in compressive strength.

  18. Collisionless shock waves mediated by Weibel Instability

    NASA Astrophysics Data System (ADS)

    Naseri, Neda; Ruan, Panpan; Zhang, Xi; Khudik, Vladimir; Shvets, Gennady

    2015-11-01

    Relativistic collisionless shocks are common events in astrophysical environments. They are thought to be responsible for generating ultra-high-energy particles via the Fermi acceleration mechanism. It has been conjectured that the formation of collisionless shocks is mediated by the Weibel instability, which takes place when two initially cold, unmagnetized plasma shells counter-propagate into each other with relativistic drift velocities. Using a PIC code, VLPL, which is modified to suppress numerical Cherenkov instabilities, we study the shock formation and evolution for asymmetric colliding shells with different densities in their own proper reference frames. Plasma instabilities in the region between the shock and the precursor are also investigated using a moving-window simulation that advances the computational domain at the shock's speed. This method helps both to save computation time and to avoid severe numerical Cherenkov instabilities, and it allows us to study the shock evolution over a longer time period. Project is supported by US DOE grants DE-FG02-04ER41321 and DE-FG02-07ER54945.

  19. The Interaction of a Reflected Shock Wave with the Boundary Layer in a Shock Tube

    NASA Technical Reports Server (NTRS)

    Mark, Herman

    1958-01-01

    Ideally, the reflection of a shock from the closed end of a shock tube provides, for laboratory study, a quantity of stationary gas at extremely high temperature. Because of the action of viscosity, however, the flow in the real case is not one-dimensional, and a boundary layer grows in the fluid following the initial shock wave. In this paper simplifying assumptions are made to allow an analysis of the interaction of the shock reflected from the closed end with the boundary layer of the initial shock afterflow. The analysis predicts that interactions of several different types will exist in different ranges of initial shock Mach number. It is shown that the cooling effect of the wall on the afterflow boundary layer accounts for the change in interaction type. An experiment is carried out which verifies the existence of the several interaction regions and shows that they are satisfactorily predicted by the theory. Along with these results, sufficient information is obtained from the experiments to make possible a model for the interaction in the most complicated case. This model is further verified by measurements made during the experiment. The case of interaction with a turbulent boundary layer is also considered. Identifying the type of interaction with the state of turbulence of the interacting boundary layer allows for an estimate of the state of turbulence of the boundary layer based on an experimental investigation of the type of interaction. A method is proposed whereby the effect of the boundary-layer interaction on the strength of the reflected shock may be calculated. The calculation indicates that the reflected shock is rapidly attenuated for a short distance after reflection, and this result compares favorably with available experimental results.

  20. Probabilistic Grammars for Natural Languages. Psychology Series.

    ERIC Educational Resources Information Center

    Suppes, Patrick

    The purpose of this paper is to define the framework within which empirical investigations of probabilistic grammars can take place and to sketch how this attack can be made. The full presentation of empirical results will be left to other papers. In the detailed empirical work, the author has depended on the collaboration of E. Gammon and A.…

  1. Probabilistic analysis of a materially nonlinear structure

    NASA Technical Reports Server (NTRS)

    Millwater, H. R.; Wu, Y.-T.; Fossum, A. F.

    1990-01-01

    A probabilistic finite element program is used to perform probabilistic analysis of a materially nonlinear structure. The program used in this study is NESSUS (Numerical Evaluation of Stochastic Structures Under Stress), under development at Southwest Research Institute. The cumulative distribution function (CDF) of the radial stress of a thick-walled cylinder under internal pressure is computed and compared with the analytical solution. In addition, sensitivity factors showing the relative importance of the input random variables are calculated. Significant plasticity is present in this problem and has a pronounced effect on the probabilistic results. The random input variables are the material yield stress and internal pressure with Weibull and normal distributions, respectively. The results verify the ability of NESSUS to compute the CDF and sensitivity factors of a materially nonlinear structure. In addition, the ability of the Advanced Mean Value (AMV) procedure to assess the probabilistic behavior of structures which exhibit a highly nonlinear response is shown. Thus, the AMV procedure can be applied with confidence to other structures which exhibit nonlinear behavior.
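    The abstract's benchmark, a thick-walled cylinder under internal pressure, lends itself to a compact illustration. The sketch below estimates an empirical CDF of radial stress by Monte Carlo sampling, using the elastic Lamé solution with illustrative geometry and a normally distributed pressure; it omits the plasticity and the Weibull yield-stress variable central to the NESSUS study, so it is a toy analogue rather than a reproduction.

    ```python
    import random

    def lame_radial_stress(p, a, b, r):
        """Elastic Lame radial stress (Pa) at radius r in a thick-walled
        cylinder of inner radius a, outer radius b, internal pressure p."""
        return (p * a**2 / (b**2 - a**2)) * (1.0 - b**2 / r**2)

    random.seed(0)
    a, b, r = 0.1, 0.2, 0.15                   # geometry [m] (illustrative)
    samples = [lame_radial_stress(random.gauss(100e6, 10e6), a, b, r)
               for _ in range(20000)]          # random internal pressure [Pa]

    threshold = -30e6                          # stress level of interest [Pa]
    cdf_at_threshold = sum(s <= threshold for s in samples) / len(samples)
    ```

    Methods like NESSUS's AMV procedure exist precisely to obtain such CDFs far more cheaply than this brute-force sampling.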

  2. A probabilistic approach to composite micromechanics

    NASA Technical Reports Server (NTRS)

    Stock, T. A.; Bellini, P. X.; Murthy, P. L. N.; Chamis, C. C.

    1988-01-01

    Probabilistic composite micromechanics methods are developed that simulate expected uncertainties in unidirectional fiber composite properties. These methods are in the form of computational procedures using Monte Carlo simulation. A graphite/epoxy unidirectional composite (ply) is studied to demonstrate fiber composite material properties at the micro level. Regression results are presented to show the relative correlation between predicted and response variables in the study.
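    As a hedged sketch of Monte Carlo micromechanics of this kind (not the study's actual property set or distributions), one can propagate assumed scatter in fiber modulus, matrix modulus, and fiber volume ratio through the longitudinal rule of mixtures:

    ```python
    import random

    def rule_of_mixtures(Ef, Em, Vf):
        """Longitudinal ply modulus E11 from fiber/matrix moduli and fiber volume ratio."""
        return Vf * Ef + (1.0 - Vf) * Em

    random.seed(1)
    E11 = [rule_of_mixtures(random.gauss(230e9, 10e9),    # fiber modulus [Pa], assumed
                            random.gauss(3.5e9, 0.2e9),   # matrix modulus [Pa], assumed
                            random.gauss(0.6, 0.02))      # fiber volume ratio, assumed
           for _ in range(10000)]
    mean_E11 = sum(E11) / len(E11)
    ```

    Regressing the sampled E11 against each input would give the kind of predictor-response correlations the abstract mentions.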

  3. Balkanization and Unification of Probabilistic Inferences

    ERIC Educational Resources Information Center

    Yu, Chong-Ho

    2005-01-01

    Many research-related classes in social sciences present probability as a unified approach based upon mathematical axioms, but neglect the diversity of various probability theories and their associated philosophical assumptions. Although currently the dominant statistical and probabilistic approach is the Fisherian tradition, the use of Fisherian…

  4. Dynamic Probabilistic Instability of Composite Structures

    NASA Technical Reports Server (NTRS)

    Chamis, Christos C.

    2009-01-01

    A computationally effective method is described to evaluate the non-deterministic dynamic instability (probabilistic dynamic buckling) of thin composite shells. The method is a judicious combination of available computer codes for finite element, composite mechanics and probabilistic structural analysis. The solution method is incrementally updated Lagrangian. It is illustrated by applying it to thin composite cylindrical shell subjected to dynamic loads. Both deterministic and probabilistic buckling loads are evaluated to demonstrate the effectiveness of the method. A universal plot is obtained for the specific shell that can be used to approximate buckling loads for different load rates and different probability levels. Results from this plot show that the faster the rate, the higher the buckling load and the shorter the time. The lower the probability, the lower is the buckling load for a specific time. Probabilistic sensitivity results show that the ply thickness, the fiber volume ratio and the fiber longitudinal modulus, dynamic load and loading rate are the dominant uncertainties in that order.

  5. The Probabilistic Nature of Preferential Choice

    ERIC Educational Resources Information Center

    Rieskamp, Jorg

    2008-01-01

    Previous research has developed a variety of theories explaining when and why people's decisions under risk deviate from the standard economic view of expected utility maximization. These theories are limited in their predictive accuracy in that they do not explain the probabilistic nature of preferential choice, that is, why an individual makes…

  6. Pigeons' Discounting of Probabilistic and Delayed Reinforcers

    ERIC Educational Resources Information Center

    Green, Leonard; Myerson, Joel; Calvert, Amanda L.

    2010-01-01

    Pigeons' discounting of probabilistic and delayed food reinforcers was studied using adjusting-amount procedures. In the probability discounting conditions, pigeons chose between an adjusting number of food pellets contingent on a single key peck and a larger, fixed number of pellets contingent on completion of a variable-ratio schedule. In the…

  7. Probabilistic Relational Structures and Their Applications

    ERIC Educational Resources Information Center

    Domotor, Zoltan

    The principal objects of the investigation reported were, first, to study qualitative probability relations on Boolean algebras, and secondly, to describe applications in the theories of probability logic, information, automata, and probabilistic measurement. The main contribution of this work is stated in 10 definitions and 20 theorems. The basic…

  8. Experimental Validation of Detonation Shock Dynamics in Condensed Explosives

    NASA Astrophysics Data System (ADS)

    Stewart, D. Scott; Lambert, David E.; Yoo, Sunhee; Wescott, Bradley L.

    2005-07-01

    Experiments in the HMX-based, condensed explosive PBX-9501 were carried out to validate a reduced, asymptotically derived description of detonation shock dynamics (DSD) where it is assumed that the normal detonation shock speed is determined by the total shock curvature. The passover experiment has a lead disk embedded in a right circular cylindrical charge of PBX-9501 and is initiated from the bottom. A range of dynamic detonation states with both diverging (convex) and converging (concave) shock shapes are realized as the detonation shock passes over the disk. The time of arrival of the detonation shock at the top surface of the charge is recorded and compared against DSD simulation and direct multi-material simulation. A new wide-ranging equation of state (EOS) and rate law is used to describe the explosive and is employed in both the theory and the multi-material simulation. Experiment, theory, and simulation are found to be in excellent agreement.
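    At leading order, the DSD description validated here reduces to a normal shock speed that depends on total curvature. A minimal illustration with a linear Dn(kappa) form is sketched below; the Chapman-Jouguet speed ~8.8 mm/µs is representative of PBX-9501, but the coefficient alpha is purely illustrative, not a fitted value:

    ```python
    def dsd_normal_speed(kappa, d_cj=8.8, alpha=2.0):
        """Leading-order DSD relation Dn(kappa): normal detonation speed
        (mm/us) falls below d_cj for diverging (convex, kappa > 0) shocks
        and exceeds it for converging (concave, kappa < 0) shocks.
        alpha (mm) is an assumed, illustrative coefficient."""
        return d_cj * (1.0 - alpha * kappa)

    dn_convex = dsd_normal_speed(0.01)      # diverging shock, radius ~100 mm
    dn_concave = dsd_normal_speed(-0.01)    # converging shock over the disk
    ```

    This is why the lead disk produces a spread of arrival times: the wave slows where it diverges and speeds up where it converges.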

  9. Shock Formation in Electron-Ion Plasmas: Mechanism and Timing

    NASA Astrophysics Data System (ADS)

    Bret, Antoine; Stockem Novo, Anne; Ricardo, Fonseca; Luis, Silva

    2016-10-01

    We analyze the formation of a collisionless shock in electron-ion plasmas in theory and simulations. In initially un-magnetized relativistic plasmas, such shocks are triggered by the Weibel instability. While in pair plasmas the shock starts forming right after the instability saturates, it is not so in electron-ion plasmas because the Weibel filaments at saturation are too small. An additional merging phase is therefore necessary for them to efficiently stop the flow. We derive a theoretical model for the shock formation time, taking into account filament merging in the nonlinear phase of the Weibel instability. This process is much slower than in electron-positron pair shocks, and so the shock formation is longer by a factor proportional to √(mi/me) ln(mi/me).
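    The quoted scaling can be evaluated directly. A one-line sketch for the real proton-electron mass ratio (the proportionality constant is omitted, so only the relative factor is meaningful):

    ```python
    import math

    def formation_time_factor(mi_over_me):
        """Factor by which electron-ion shock formation is slower than the
        pair-plasma case, proportional to sqrt(mi/me) * ln(mi/me)."""
        return math.sqrt(mi_over_me) * math.log(mi_over_me)

    factor = formation_time_factor(1836.0)   # proton-electron mass ratio
    ```

    For mi/me = 1836 the factor is of order a few hundred, which is why electron-ion shock formation takes so much longer in these simulations.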

  10. Reactive shock phenomena in condensed materials: Formulation of the problem and method of solution

    NASA Astrophysics Data System (ADS)

    Guirguis, R.; Oran, E. S.

    1983-12-01

    A reactive shock simulation model used to study the formation and propagation of shocks and detonations in condensed phase materials is described. Two test cases are given: (1) laser initiation of a shock wave propagating through water, and (2) the development of a detonation front from a hot spot in liquid nitromethane.

  11. What Is Cardiogenic Shock?

    MedlinePlus

    ... think that you or someone else is in shock, call 9–1–1 right away for emergency treatment. Prompt medical care can save your life and ... half of the people who go into cardiogenic shock survive. This is because of ... improved treatments, such as medicines and devices. These treatments can ...

  12. Normal Shock Vortex Interaction

    DTIC Science & Technology

    2003-03-01

    Figure 9: Breakdown map for normal-shock vortex-interaction. References: [1] O. Thomer, W. Schroder and M. Meinke, Numerical Simulation of Normal- and Oblique-Shock Vortex Interaction, ZAMM Band 80, Sub. 1, pp. 181-184, 2000. [2] O. Thomer, E. Krause, W. Schroder and M. Meinke, Computational

  13. Development of Ultra Small Shock Tube for High Energy Molecular Beam Source

    SciTech Connect

    Miyoshi, Nobuya; Nagata, Shuhei; Kinefuchi, Ikuya; Shimizu, Kazuya; Matsumoto, Yoichiro; Takagi, Shu

    2008-12-31

    A molecular beam source exploiting a small shock tube is described for the potential generation of high-energy beams in the range of 1-5 eV without any undesirable impurities. The performance of a non-diaphragm-type shock tube with an inner diameter of 2 mm was evaluated by measuring the acceleration and attenuation of shock waves. With this shock tube installed in a molecular beam source, we measured the time-of-flight distributions of shock-heated beams, which demonstrated the ability to control the beam energy with the initial pressure ratio of the shock tube.
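    Beam-energy control via the initial pressure ratio rests on standard shock tube relations. The sketch below assumes the ideal diaphragm relation for the same gas at the same temperature on both sides, a textbook simplification of the actual non-diaphragm device, and inverts it numerically:

    ```python
    def diaphragm_pressure_ratio(M, gamma=1.4):
        """Ideal shock-tube relation: driver/driven pressure ratio P4/P1
        that produces an incident shock of Mach number M (same gas and
        temperature on both sides of the diaphragm)."""
        g = gamma
        p21 = (2.0 * g * M**2 - (g - 1.0)) / (g + 1.0)
        bracket = 1.0 - (g - 1.0) / (g + 1.0) * (M - 1.0 / M)
        return p21 * bracket**(-2.0 * g / (g - 1.0))

    def shock_mach_from_p41(p41, gamma=1.4):
        """Invert the relation above by bisection (moderate ratios only,
        where the relation is monotone on the chosen bracket)."""
        lo, hi = 1.0001, 5.0
        for _ in range(80):
            mid = 0.5 * (lo + hi)
            if diaphragm_pressure_ratio(mid, gamma) < p41:
                lo = mid
            else:
                hi = mid
        return 0.5 * (lo + hi)

    M = shock_mach_from_p41(10.0)   # incident shock Mach number for P4/P1 = 10
    ```

    Raising P4/P1 raises the incident shock Mach number, and hence the stagnation enthalpy available to the shock-heated beam.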

  14. Probabilistic Aeroelastic Analysis Developed for Turbomachinery Components

    NASA Technical Reports Server (NTRS)

    Reddy, T. S. R.; Mital, Subodh K.; Stefko, George L.; Pai, Shantaram S.

    2003-01-01

    Aeroelastic analyses for advanced turbomachines are being developed for use at the NASA Glenn Research Center and industry. However, these analyses at present are used for turbomachinery design with uncertainties accounted for by using safety factors. This approach may lead to overly conservative designs, thereby reducing the potential of designing higher efficiency engines. An integration of the deterministic aeroelastic analysis methods with probabilistic analysis methods offers the potential to design efficient engines with fewer aeroelastic problems and to make a quantum leap toward designing safe reliable engines. In this research, probabilistic analysis is integrated with aeroelastic analysis: (1) to determine the parameters that most affect the aeroelastic characteristics (forced response and stability) of a turbomachine component such as a fan, compressor, or turbine and (2) to give the acceptable standard deviation on the design parameters for an aeroelastically stable system. The approach taken is to combine the aeroelastic analysis of the MISER (MIStuned Engine Response) code with the FPI (fast probability integration) code. The role of MISER is to provide the functional relationships that tie the structural and aerodynamic parameters (the primitive variables) to the forced response amplitudes and stability eigenvalues (the response properties). The role of FPI is to perform probabilistic analyses by utilizing the response properties generated by MISER. The results are a probability density function for the response properties. The probabilistic sensitivities of the response variables to uncertainty in primitive variables are obtained as a byproduct of the FPI technique. The combined analysis of aeroelastic and probabilistic analysis is applied to a 12-bladed cascade vibrating in bending and torsion. Out of the total 11 design parameters, 6 are considered as having probabilistic variation. The six parameters are space-to-chord ratio (SBYC), stagger angle

  15. Underwater Shock Wave Research Applied to Therapeutic Device Developments

    NASA Astrophysics Data System (ADS)

    Takayama, K.; Yamamoto, H.; Shimokawa, H.

    2013-07-01

    The chronological development of underwater shock wave research performed at the Shock Wave Research Center of the Institute of Fluid Science at Tohoku University is presented. Firstly, the generation of planar underwater shock waves in shock tubes and their visualization using conventional shadowgraph and schlieren methods are described. Secondly, the generation of spherical underwater shock waves by exploding lead azide pellets weighing from several tens of micrograms to 100 mg, ignited by irradiation with a Q-switched laser beam, and their visualization using double-exposure holographic interferometry are presented. The initiation, propagation, reflection, and focusing of underwater shock waves, and their interaction with various interfaces, in particular with air bubbles, are visualized quantitatively. Based on this fundamental underwater shock wave research, a collaboration with the School of Medicine at Tohoku University was started to develop a shock-wave-assisted therapeutic device, named the extracorporeal shock wave lithotripter (ESWL). Miniature shock waves created by irradiation with Q-switched Ho:YAG laser beams are studied as a means to damage dysfunctional nerve cells in the myocardium in a precisely controlled manner, and are used in the design of a catheter for treating arrhythmia.

  16. Diagnosis and management of shock in the emergency department.

    PubMed

    Richards, Jeremy B; Wilcox, Susan R

    2014-03-01

    Shock is a state of acute circulatory failure leading to decreased organ perfusion, with inadequate delivery of oxygenated blood to tissues and resultant end-organ dysfunction. The mechanisms that can result in shock are divided into 4 categories: (1) hypovolemic, (2) distributive, (3) cardiogenic, and (4) obstructive. While much is known regarding treatment of patients in shock, several controversies continue in the literature. Assessment begins with identifying the need for critical interventions such as intubation, mechanical ventilation, or obtaining vascular access. Prompt workup should be initiated with laboratory testing (especially of serum lactate levels) and imaging, as indicated. Determining the intravascular volume status of patients in shock is critical and aids in categorizing and informing treatment decisions. This issue reviews the 4 primary categories of shock as well as special categories, including shock in pregnancy, traumatic shock, septic shock, and cardiogenic shock in myocardial infarction. Adherence to evidence-based care of the specific causes of shock can optimize a patient's chances of surviving this life-threatening condition.

  17. Pathophysiology of shock.

    PubMed

    Houston, M C

    1990-06-01

    Shock is an acute widespread reduction in effective tissue perfusion that invokes an imbalance of oxygen supply and demand, anaerobic metabolism, lactic acidosis, cellular and organ dysfunction, metabolic abnormalities, and, if prolonged, irreversible damage and death. The pathophysiologic events in the various types of shock are different and complex with hemodynamic and oxygenation changes, alterations in the composition of the fluid compartments, and various mediators. Shock results from a change in one or a combination of the following: intravascular volume, myocardial function, systemic vascular resistance, or distribution of blood flow. The clinical types of shock include hypovolemic, cardiogenic, distributive (septic), and obstructive. An understanding of the pathophysiologic changes, rapid diagnosis, appropriate monitoring, and appropriate therapy can reduce the high morbidity and mortality in shock states.

  18. Hydrodynamic simulations of gaseous Argon shock compression experiments

    NASA Astrophysics Data System (ADS)

    Garcia, Daniel B.; Dattelbaum, Dana M.; Goodwin, Peter M.; Sheffield, Stephen A.; Morris, John S.; Gustavsen, Richard L.; Burkett, Michael W.

    2017-01-01

    The lack of published Ar gas shock data motivated an evaluation of the Ar Equation of State (EOS) in gas phase initial density regimes. In particular, these regimes include initial pressures in the range of 13.8-34.5 bar (0.025-0.056 g/cm3) and initial shock velocities around 0.2 cm/μs. The objective of the numerical evaluation was to develop a physical understanding of the EOS behavior of shocked and subsequently multiply re-shocked Ar gas through Pagosa numerical simulations utilizing the SESAME equation of state. Pagosa is a Los Alamos National Laboratory 2-D and 3-D Eulerian continuum dynamics code capable of modeling high velocity compressible flow with multiple materials. The approach involved the use of gas gun experiments to evaluate the shock and multiple re-shock behavior of pressurized Ar gas to validate Pagosa simulations and the SESAME EOS. Additionally, the diagnostic capability within the experiments allowed for the EOS to be fully constrained with measured shock velocity, particle velocity and temperature. The simulations demonstrate excellent agreement with the experiments in the shock velocity/particle velocity space, and reasonable comparisons for the ionization temperatures.

  19. Reflection of curved shock waves

    NASA Astrophysics Data System (ADS)

    Mölder, S.

    2017-03-01

    Shock curvatures are related to pressure gradients, streamline curvatures and vorticity in flows with planar and axial symmetry. Explicit expressions, in an influence coefficient format, are used to relate post-shock pressure gradient, streamline curvature and vorticity to pre-shock gradients and shock curvature in steady flow. Using higher order, von Neumann-type, compatibility conditions, curved shock theory is applied to calculate the flow near singly and doubly curved shocks on curved surfaces, in regular shock reflection and in Mach reflection. Theoretical curved shock shapes are in good agreement with computational fluid dynamics calculations and experiment.

  20. Traumatic hemorrhagic shock: advances in fluid management.

    PubMed

    Cherkas, David

    2011-11-01

    A number of concerns have been raised regarding the advisability of the classic principles of aggressive crystalloid resuscitation in traumatic hemorrhagic shock. This issue reviews the advances that have led to a shift in the emergency department (ED) protocols in resuscitation from shock state, including recent literature regarding the new paradigm for the treatment of traumatic hemorrhagic shock, which is most generally known as damage control resuscitation (DCR). Goals and endpoints for resuscitation and a review of initial fluid choice are discussed, along with the coagulopathy of trauma and its management, how to address hemorrhagic shock in traumatic brain injury (TBI), and new pharmacologic treatment for hemorrhagic shock. The primary conclusions include the administration of tranexamic acid (TXA) for all patients with uncontrolled hemorrhage (Class I), the implementation of a massive transfusion protocol (MTP) with fixed blood product ratios (Class II), avoidance of large-volume crystalloid resuscitation (Class III), and appropriate usage of permissive hypotension (Class III). The choice of fluid for initial resuscitation has not been shown to affect outcomes in trauma (Class I).

  1. Pathophysiological roles of peroxynitrite in circulatory shock.

    PubMed

    Szabó, Csaba; Módis, Katalin

    2010-09-01

    Peroxynitrite is a reactive oxidant produced from nitric oxide and superoxide, which reacts with proteins, lipids, and DNA, and promotes cytotoxic and proinflammatory responses. Here, we overview the role of peroxynitrite in various forms of circulatory shock. Immunohistochemical and biochemical evidence demonstrates the production of peroxynitrite in various experimental models of endotoxic and hemorrhagic shock, both in rodents and in large animals. In addition, biological markers of peroxynitrite have been identified in human tissues after circulatory shock. Peroxynitrite can initiate toxic oxidative reactions in vitro and in vivo. Initiation of lipid peroxidation, direct inhibition of mitochondrial respiratory chain enzymes, inactivation of glyceraldehyde-3-phosphate dehydrogenase, inhibition of membrane Na+/K+ ATPase activity, inactivation of membrane sodium channels, and other oxidative protein modifications contribute to the cytotoxic effect of peroxynitrite. In addition, peroxynitrite is a potent trigger of DNA strand breakage, with subsequent activation of the nuclear enzyme poly(ADP-ribose) polymerase, which promotes cellular energetic collapse and cellular necrosis. Additional actions of peroxynitrite that contribute to the pathogenesis of shock include inactivation of catecholamines and catecholamine receptors (leading to vascular failure) and endothelial and epithelial injury (leading to endothelial and epithelial hyperpermeability and barrier dysfunction), as well as myocyte injury (contributing to loss of cardiac contractile function). Neutralization of peroxynitrite with potent peroxynitrite decomposition catalysts provides cytoprotective and beneficial effects in rodent and large-animal models of circulatory shock.

  2. Analysis of compaction shock interactions during DDT of low density HMX

    NASA Astrophysics Data System (ADS)

    Rao, Pratap T.; Gonthier, Keith A.

    2017-01-01

    Deflagration-to-Detonation Transition (DDT) in confined, low density granular HMX occurs by a complex mechanism that involves compaction shock interactions within the material. Piston driven DDT experiments indicate that detonation is abruptly triggered by the interaction of a strong combustion-supported secondary shock and a piston-supported primary (input) shock, where the nature of the interaction depends on initial packing density and primary shock strength. These interactions influence transition by affecting dissipative heating within the microstructure during pore collapse. Inert meso-scale simulations of successive shock loading of low density HMX are performed to examine how dissipation and hot-spot formation are affected by the initial density, and the primary and secondary shock strengths. This information is used to formulate an ignition and burn model for low density HMX that accounts for the effect of shock desensitization on burn. Preliminary DDT predictions are presented that illustrate how primary shock strength affects the transition mechanism.

  3. Shock-induced CO2 loss from CaCO3: Implications for early planetary atmospheres

    NASA Technical Reports Server (NTRS)

    Lange, M. A.; Ahrens, T. J.

    1984-01-01

    Recovered samples from shock recovery experiments on single-crystal calcite were subjected to thermogravimetric analysis to determine the amount of post-shock CO2, the decarbonation interval, and the activation energy for the removal of remaining CO2 in shock-loaded calcite. Comparison of post-shock CO2 with that initially present determines shock-induced CO2 loss as a function of shock pressure. Incipient to complete CO2 loss occurs over a pressure range of approximately 10 to approximately 70 GPa. Optical and scanning electron microscopy reveal structural changes related to the shock loading. Dark, diffuse areas, resolved as highly vesicular regions under the scanning electron microscope, are interpreted as quenched partial melts into which shock-released CO2 was injected. The experimental results are used to constrain models of shock-produced, primary CO2 atmospheres on the accreting terrestrial planets.
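    For a rough feel of the reported thresholds, a crude linear interpolation between incipient (~10 GPa) and complete (~70 GPa) CO2 loss can be sketched as follows; the loss curve actually measured in the study need not be linear:

    ```python
    def co2_loss_fraction(p_gpa, p_incipient=10.0, p_complete=70.0):
        """Assumed linear interpolation of shock-induced CO2 loss between
        the incipient (~10 GPa) and complete (~70 GPa) thresholds quoted
        in the abstract. Illustrative only."""
        if p_gpa <= p_incipient:
            return 0.0
        if p_gpa >= p_complete:
            return 1.0
        return (p_gpa - p_incipient) / (p_complete - p_incipient)
    ```

    Such a loss-versus-pressure function is the ingredient an accretion model needs to convert an impact velocity distribution into a degassed CO2 budget.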

  4. Absorption spectra of shocked liquid CS/sub 2/

    SciTech Connect

    Dallman, J.C.

    1985-01-01

    The importance of shock initiation of high explosives (HE) was understood as early as 1863 when Alfred Nobel introduced the detonator as a means of detonating nitroglycerine. The critical pressure rise times required to achieve shock initiation and steady propagation of detonation are determined by the chemical and mechanical properties of an explosive. Although progress has been made in the understanding of the effects of mechanical properties, the detailed effects of high pressures on chemical reaction mechanisms are still only poorly understood. This paper reports the results of two experiments using CS/sub 2/, which is known to undergo electronic state transitions when shocked to high pressures. The goal of these experiments was to examine the known shock-generated expansion of CS/sub 2/ absorption bands while generating the shocks with a flyer plate system driven by high explosives.

  5. Comparison of shock severity measures

    SciTech Connect

    Baca, T.J.

    1989-01-01

    In an effort to clarify the issues associated with quantifying shock severity, this paper compares the merits of two measures of shock severity. The first measure is the widely used absolute acceleration shock response spectrum (SAA). The second measure of shock severity is relatively new and is known as the shock intensity spectrum (SIS). The overall information content of the SAA and SIS spectra is compared and discussed in the context of two shock excitations having known amplitude, duration, and frequency content. The first is a burst of band-limited white noise and the second is a classical haversine pulse. After describing both the SAA and SIS shock measures, numerous examples are described which emphasize the strengths and limitations of each shock characterization method. This discussion reveals how the use of different shock measures may alter an engineer's conclusions about relative shock severity between two shock environments. 8 refs., 15 figs.
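    An absolute acceleration shock response spectrum (SAA) of the kind compared here can be sketched by driving a single-degree-of-freedom oscillator with the base acceleration history and recording its peak absolute acceleration at each natural frequency. The sketch below uses the classical haversine pulse mentioned in the abstract, a simple semi-implicit Euler integrator, and illustrative amplitudes; the SIS measure is not reproduced:

    ```python
    import math

    def haversine_pulse(amp, dur, t):
        """Classical haversine (sine-squared) base acceleration pulse."""
        return amp * math.sin(math.pi * t / dur)**2 if 0.0 <= t <= dur else 0.0

    def srs_point(fn, accel, dt, zeta=0.05):
        """Peak absolute acceleration of a base-excited SDOF oscillator with
        natural frequency fn (Hz), integrated by semi-implicit Euler."""
        wn = 2.0 * math.pi * fn
        z = zd = 0.0            # relative displacement and velocity
        peak = 0.0
        for a in accel:
            zdd = -a - 2.0 * zeta * wn * zd - wn**2 * z
            zd += zdd * dt
            z += zd * dt
            peak = max(peak, abs(2.0 * zeta * wn * zd + wn**2 * z))  # |abs. accel|
        return peak

    dt = 1e-5
    dur, amp = 0.011, 50.0                          # 11 ms, 50 g pulse (illustrative)
    base = [haversine_pulse(amp, dur, i * dt) for i in range(int(0.1 / dt))]
    srs = {fn: srs_point(fn, base, dt) for fn in (10.0, 100.0, 1000.0)}
    ```

    High-frequency oscillators track the input (SRS near the 50 g peak), while oscillators well below the pulse's frequency content respond only to the velocity change, which is the behavior an SAA plot makes visible.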

  6. Synchronization for the Realization-Dependent Probabilistic Boolean Networks.

    PubMed

    Chen, Hongwei; Liang, Jinling; Lu, Jianquan; Qiu, Jianlong

    2017-01-24

    This paper investigates the synchronization problem for the realization-dependent probabilistic Boolean networks (PBNs) coupled unidirectionally in the drive-response configuration. The realization of the response PBN is assumed to be uniquely determined by the realization signal generated by the drive PBN at each discrete time instant. First, the drive-response PBNs are expressed in their algebraic forms based on the semitensor product method, and then, a necessary and sufficient condition is presented for the synchronization of the PBNs. Second, by resorting to a newly defined matrix operator, the reachable set from any initial state is expressed by a column vector. Consequently, an easily computable algebraic criterion is derived assuring the synchronization of the drive-response PBNs. Finally, three illustrative examples are employed to demonstrate the applicability and usefulness of the developed theoretical results.

  7. Probabilistic assessment of roadway departure risk in a curve

    NASA Astrophysics Data System (ADS)

    Rey, G.; Clair, D.; Fogli, M.; Bernardin, F.

    2011-10-01

Roadway departure while cornering accounts for a major share of car accidents and casualties in France. Even though a strict policy against speeding has helped reduce accidents, other factors obviously exist. This article presents the construction of a probabilistic strategy for roadway departure risk assessment. A specific vehicle dynamic model is developed in which some parameters are modelled by random variables. These parameters are selected through a sensitivity analysis to ensure an efficient representation of the inherent uncertainties of the system. Then, structural reliability methods are employed to assess the roadway departure risk as a function of the initial conditions measured at the entrance of the curve. This study is conducted within the French national road safety project SARI, which aims to implement a warning system that alerts the driver in dangerous situations.
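The structural-reliability idea in this abstract (a limit state separating safe cornering from departure, with random inputs) can be sketched with a deliberately simple physics surrogate: lateral friction demand v^2/(gR) against available friction mu. The model, distributions, and all numbers below are hypothetical illustrations, not the SARI vehicle model.

```python
import math, random

def departure_risk(v_mean, n=100_000, seed=1):
    """Monte Carlo estimate of P(v^2/(g*R) > mu): the lateral-friction demand
    in a curve exceeding the available tire-road friction.
    Distributions and numbers are illustrative, not the SARI model."""
    random.seed(seed)
    g, radius = 9.81, 120.0                  # curve radius in metres (assumed)
    failures = 0
    for _ in range(n):
        v = random.gauss(v_mean, 2.0)        # entry speed (m/s), sigma = 2 m/s
        mu = random.gauss(0.7, 0.08)         # friction coefficient, dry-road mean
        if v * v / (g * radius) > mu:        # limit state violated -> departure
            failures += 1
    return failures / n

for v_kmh in (70, 90, 110):
    v = v_kmh / 3.6
    print(f"entry speed {v_kmh} km/h -> departure risk ~ {departure_risk(v):.4f}")
```

Even this toy version reproduces the qualitative point of the paper: the departure risk is a steep function of the conditions measured at the curve entrance.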

  8. LANL'S DEVELOPMENT OF SCHEDULE CONTINGENCY BASED ON PROBABILISTIC RISK RESULTS

    SciTech Connect

    F. K. HOUGHTON; J. P. KINDINGER; ET AL

    2001-03-01

The need for budget reserve in project planning is widely recognized, but the need for a corresponding schedule reserve is generally not as universally acknowledged. Los Alamos National Laboratory (LANL) performs probabilistic project risk analyses for all major projects; the question has become how to more fully use the results of the schedule analyses to establish a schedule reserve. Before the use of quantitative project risk analysis, milestones were based on the initial point estimate with float time as the only time reserved for delays. With this practice, the set of tasks that have duration reserve is limited. In particular, critical path tasks, which do not have associated float, do not have reserve schedule time to use for delays. For other tasks, float was used on a "first-come, first-served" basis without allocation to other tasks. Thus, it was not possible to systematically allocate schedule reserve to project tasks.

  9. Probabilistic finite elements for fatigue and fracture analysis

    NASA Technical Reports Server (NTRS)

    Belytschko, Ted; Liu, Wing Kam

    1993-01-01

An overview of the probabilistic finite element method (PFEM) developed by the authors and their colleagues in recent years is presented. The primary focus is placed on the development of PFEM for both structural mechanics problems and fracture mechanics problems. The perturbation techniques are used as major tools for the analytical derivation. The following topics are covered: (1) representation and discretization of random fields; (2) development of PFEM for the general linear transient problem and nonlinear elasticity using the Hu-Washizu variational principle; (3) computational aspects; (4) discussion of the application of PFEM to the reliability analysis of both brittle fracture and fatigue; and (5) a stochastic computational tool based on the stochastic boundary element method (SBEM). Results are obtained for the reliability index and corresponding probability of failure for: (1) fatigue crack growth; (2) defect geometry; (3) fatigue parameters; and (4) applied loads. These results show that the initial defect size is a critical parameter.

  10. Reusable solid rocket motor case - Optimum probabilistic fracture control

    NASA Technical Reports Server (NTRS)

    Hanagud, S.; Uppaluri, B.

    1979-01-01

A methodology for the reliability analysis of a reusable solid rocket motor case is discussed in this paper. The analysis is based on probabilistic fracture mechanics and a probability distribution for initial flaw sizes. The developed reliability analysis can be used to select the structural design variables of the solid rocket motor case on the basis of minimum expected cost and specified reliability bounds during the projected design life of the case. The effects of failure-prevention measures, such as nondestructive inspection, and of material erosion between missions can also be considered in the developed procedure for selection of design variables. The reliability-based procedure discussed in this paper can easily be modified to consider other similar structures of reusable space vehicle systems with different fracture control plans.
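The core probabilistic-fracture-mechanics step described here, combining an initial flaw-size distribution with a critical crack size from linear-elastic fracture mechanics, can be sketched in a few lines. The lognormal parameters, toughness, stress, and geometry factor below are illustrative assumptions, not values from the paper; the Monte Carlo estimate is checked against the closed-form lognormal tail probability.

```python
import math, random

# Illustrative numbers only (not from the paper): lognormal initial flaw sizes
# and a critical crack size from linear-elastic fracture mechanics,
# a_crit = (K_IC / (Y * sigma))^2 / pi.
K_IC  = 50.0e6     # fracture toughness, Pa*sqrt(m)
sigma = 300.0e6    # applied stress, Pa
Y     = 1.12       # geometry factor for a surface flaw
a_crit = (K_IC / (Y * sigma)) ** 2 / math.pi      # critical flaw size, m

mu_ln, sd_ln = math.log(2.0e-4), 1.5              # ln-space parameters, median 0.2 mm

# Closed form: P(a0 > a_crit) for a lognormal initial flaw size a0
z = (math.log(a_crit) - mu_ln) / sd_ln
pf_exact = 0.5 * math.erfc(z / math.sqrt(2.0))

# Monte Carlo estimate of the same failure probability
random.seed(7)
n = 200_000
pf_mc = sum(random.lognormvariate(mu_ln, sd_ln) > a_crit for _ in range(n)) / n

print(f"a_crit = {a_crit * 1e3:.3f} mm")
print(f"P_f exact = {pf_exact:.4f}, Monte Carlo = {pf_mc:.4f}")
```

In the actual methodology this per-mission probability would feed an expected-cost optimization over design variables and inspection intervals; the sketch only shows the innermost probability calculation.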

  11. A probabilistic level set formulation for interactive organ segmentation

    NASA Astrophysics Data System (ADS)

    Cremers, Daniel; Fluck, Oliver; Rousson, Mikael; Aharon, Shmuel

    2007-03-01

Level set methods have become increasingly popular as a framework for image segmentation. Yet when used as a generic segmentation tool, they suffer from an important drawback: current formulations do not allow much user interaction. Upon initialization, boundaries propagate to the final segmentation without the user being able to guide or correct the segmentation. In the present work, we address this limitation by proposing a probabilistic framework for image segmentation which integrates input intensity information and user interaction on an equal footing. The resulting algorithm determines the most likely segmentation given the input image and the user input. In order to allow user interaction in real time during the segmentation, the algorithm is implemented on a graphics card and in a narrow band formulation.

  12. A nondeterministic shock and vibration application using polynomial chaos expansions

    SciTech Connect

    FIELD JR.,RICHARD V.; RED-HORSE,JOHN R.; PAEZ,THOMAS L.

    2000-03-28

    In the current study, the generality of the key underpinnings of the Stochastic Finite Element (SFEM) method is exploited in a nonlinear shock and vibration application where parametric uncertainty enters through random variables with probabilistic descriptions assumed to be known. The system output is represented as a vector containing Shock Response Spectrum (SRS) data at a predetermined number of frequency points. In contrast to many reliability-based methods, the goal of the current approach is to provide a means to address more general (vector) output entities, to provide this output as a random process, and to assess characteristics of the response which allow one to avoid issues of statistical dependence among its vector components.
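The polynomial chaos machinery underlying this abstract can be shown in one dimension with a case whose coefficients are known exactly: expanding Y = exp(X), X ~ N(0,1), in probabilists' Hermite polynomials gives c_k = e^(1/2)/k!. This is a textbook identity used here as a check, not the paper's shock-and-vibration application; the truncation order is an arbitrary choice for the example.

```python
import math

def hermite_prob(k, x):
    """Probabilists' Hermite polynomial He_k(x) via the recurrence
    He_0 = 1, He_1 = x, He_{k+1} = x*He_k - k*He_{k-1}."""
    h0, h1 = 1.0, x
    if k == 0:
        return h0
    for n in range(1, k):
        h0, h1 = h1, x * h1 - n * h0
    return h1

# Expand Y = exp(X), X ~ N(0,1):  exp(x) = e^{1/2} * sum_k He_k(x)/k!,
# so the chaos coefficients are c_k = e^{1/2}/k!.
order = 10
coeff = [math.exp(0.5) / math.factorial(k) for k in range(order + 1)]

# Moments follow from orthogonality (E[He_j * He_k] = k! * delta_jk):
mean_pce = coeff[0]                                       # equals e^{1/2}
var_pce = sum(c * c * math.factorial(k) for k, c in enumerate(coeff[1:], start=1))

print(f"mean: PCE = {mean_pce:.6f}, exact = {math.exp(0.5):.6f}")
print(f"var : PCE = {var_pce:.6f}, exact = {math.e * (math.e - 1.0):.6f}")

# Pointwise reconstruction of Y at x = 0.5 from the truncated series
y_rec = sum(c * hermite_prob(k, 0.5) for k, c in enumerate(coeff))
print(f"exp(0.5) = {math.exp(0.5):.6f}, truncated series = {y_rec:.6f}")
```

The appeal exploited in the paper is exactly this: once the chaos coefficients of an output (such as an SRS vector) are known, its statistics follow from the coefficients alone, with no further sampling.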

  13. Shock Bench Enhancements

    NASA Astrophysics Data System (ADS)

    Charvet, B.; Dilhan, D.; Palladino, M.

    2014-06-01

    In 2008 a contract placed by CNES in partnership with ESA has led MECANO ID to develop a shock bench to qualify spacecraft equipment. A spacecraft shall withstand several shocks without degradation: launcher fairing or stages separation, spacecraft separation, the release of appendage (solar arrays, antenna reflectors, booms) and shocks generated when the pyrovalves of the propulsion system are fired.The Shock Response Spectrum (SRS) requirement, to be applied to the equipment, depends on its mass, its size and its location in the satellite. CNES has performed a survey of the pyroshock qualification requirements on CNES and ESA satellites. The outcome of the activity was the input for the bench development (Fig. 1). The design and sizing of the pyroshock bench started with non linear shock analysis with the help of the Dytran software.A lot of solutions have been compared: mono-plate, bi- plate, Hopkinson bar. The bi-plate was chosen thanks to its very rich frequency content. Also, the shock can be generated on one plate with the equipment mounted on the other, to avoid the direct transmission of the shock to the equipment basis.This study led to a 1000 mm x 650 mm steel bi-plate with a 300 mm aluminum cube fitted on one side. The equipment to test is mounted on the cube (Fig. 2 & 3).

  14. Shock-to-Detonation Transition simulations

    SciTech Connect

    Menikoff, Ralph

    2015-07-14

    Shock-to-detonation transition (SDT) experiments with embedded velocity gauges provide data that can be used for both calibration and validation of high explosive (HE) burn models. Typically, a series of experiments is performed for each HE in which the initial shock pressure is varied. Here we describe a methodology for automating a series of SDT simulations and comparing numerical tracer particle velocities with the experimental gauge data. Illustrative examples are shown for PBX 9502 using the HE models implemented in the xRage ASC code at LANL.

  15. RNA-PAIRS: RNA probabilistic assignment of imino resonance shifts

    PubMed Central

    Bahrami, Arash; Clos, Lawrence J.; Markley, John L.; Butcher, Samuel E.

    2012-01-01

The significant biological role of RNA has further highlighted the need for improving the accuracy, efficiency, and reach of methods for investigating RNA structure and function. Nuclear magnetic resonance (NMR) spectroscopy is vital to furthering the goals of RNA structural biology because of its distinctive capabilities. However, the dispersion pattern in the NMR spectra of RNA makes automated resonance assignment, a key step in NMR investigation of biomolecules, remarkably challenging. Herein we present RNA Probabilistic Assignment of Imino Resonance Shifts (RNA-PAIRS), a method for the automated assignment of RNA imino resonances with synchronized verification and correction of predicted secondary structure. RNA-PAIRS represents an advance in modeling the assignment paradigm because it seeds the probabilistic network for assignment with experimental NMR data and predicted RNA secondary structure, simultaneously and from the start. Subsequently, RNA-PAIRS sets in motion a dynamic network that reverberates between predictions and experimental evidence in order to reconcile and rectify resonance assignments and secondary structure information. The procedure is halted when assignments and base pairings are deemed to be most consistent with observed crosspeaks. The current implementation of RNA-PAIRS uses an initial peak list derived from proton-nitrogen heteronuclear multiple quantum correlation (1H–15N 2D HMQC) and proton–proton nuclear Overhauser enhancement spectroscopy (1H–1H 2D NOESY) experiments. We have evaluated the performance of RNA-PAIRS by using it to analyze NMR datasets from 26 previously studied RNAs, including a 111-nucleotide complex. For moderately sized RNA molecules, and over a range of comparatively complex structural motifs, the average assignment accuracy exceeds 90%, while the average base pair prediction accuracy exceeds 93%. RNA-PAIRS yielded accurate assignments and base pairings consistent with imino resonances for a majority

  16. Shock interactions with magnetized interstellar clouds. 1: Steady shocks hitting nonradiative clouds

    NASA Technical Reports Server (NTRS)

    Low, Mordecai-Mark Mac; Mckee, Christopher F.; Klein, Richard I.; Stone, James M.; Norman, Michael L.

    1994-01-01

    We study the interaction of a steady, planar shock with a nonradiative, spherical, interstellar cloud threaded by a uniform magnetic field. For strong shocks, the sonic Mach number scales out, so two parameters determine the evolution: the ratio of cloud to intercloud density, and the Alfven Mach number. We focus on the case with initial field parallel to the shock velocity, though we also present one model with field perpendicular to the velocity. Even with 100 zones per cloud radius, we find that the magnetic field structure converges only at early times. However, we can draw three conclusions from our work. First, our results suggest that the inclusion of a field in equipartition with the preshock medium can prevent the complete destruction of the cloud found in the field-free case recently considered by Klein, McKee, & Colella. Second, the interaction of the shock with the cloud can amplify the magnetic field in some regions up to equipartition with the post-shock thermal pressure. In the parallel-field case, the shock preferentially amplifies the parallel component of the field, creating a 'flux rope,' a linear structure of concentrated magnetic field. The flux rope dominates the volume of amplified field, so that laminar, rather than turbulent, amplification is dominant in this case. Third, the presence of the cloud enhances the production of X-ray and synchrotron emission. The X-ray emission peaks early, during the initial passage of the shock over the cloud, while the synchrotron emission peaks later, when the flow sweeps magnetic field onto the axis between the cloud and the main shock.

  17. Possibility of short-term probabilistic forecasts for large earthquakes making good use of the limitations of existing catalogs

    NASA Astrophysics Data System (ADS)

    Hirata, Yoshito; Iwayama, Koji; Aihara, Kazuyuki

    2016-10-01

Earthquakes are quite hard to predict. One of the possible reasons can be the fact that the existing catalogs of past earthquakes are limited at most to the order of 100 years, while their characteristic time scale is sometimes greater than that time span. Here we instead use these limitations positively and characterize some large earthquake events as abnormal events that are not represented in the catalogs. When we constructed probabilistic forecasts for large earthquakes in Japan based on similarity and difference to their past patterns—which we call known and unknown abnormalities, respectively—our forecast achieved probabilistic gains of 5.7 and 2.4 against a time-independent model for main shocks with magnitudes of 7 or above. Moreover, the two abnormal conditions covered 70% of days whose maximum magnitude was 7 or above.
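The probabilistic gain quoted in the abstract compares an alarm-based forecast against a time-independent reference: if alarms cover a fraction tau of the catalog days and capture a fraction h of the target events, the gain is G = h/tau. The counts below are invented to illustrate the arithmetic; they are not the catalog statistics behind the 5.7 and 2.4 figures.

```python
# Probability gain of an alarm-based forecast over a time-independent model.
# If a fraction tau of all days is covered by alarms and a fraction h of the
# target events falls inside those alarms, the gain is G = h / tau.
# Numbers below are illustrative, not the values reported in the paper.

total_days = 36_500          # ~100-year catalog
alarm_days = 3_650           # alarms cover 10% of the days
events     = 10              # M >= 7 main shocks in the catalog
hits       = 7               # events occurring on alarm days

tau  = alarm_days / total_days        # fraction of time under alarm
h    = hits / events                  # fraction of events captured
gain = h / tau

print(f"alarm coverage tau = {tau:.2f}, hit rate h = {h:.2f}, gain G = {gain:.1f}")
# A time-independent model would capture only tau = 10% of the events with
# the same total alarm time, so G > 1 indicates genuine forecasting skill.
```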

  18. Shock formation of HCO(+)

    NASA Astrophysics Data System (ADS)

    Elitzur, M.

    1983-04-01

    It is shown that shocks propagating in dense molecular regions will lead to a decrease in HCO(+) relative abundance, in agreement with previous results by Iglesias and Silk (1978). The shock enhancement of HCO(+) detected in the supernova remnant IC 443 by Dickinson et al. (1980) is due to enhanced ionization in the shocked material. This is the result of the material penetrating the remnant cavity where it becomes exposed to the trapped cosmic rays. A similar enhancement appears to have been detected by Wootten in W28 and is explained by the same model.

  19. Sepsis and septic shock.

    PubMed

    Maloney, Patrick J

    2013-08-01

Early recognition of sepsis and septic shock in children relies on obtaining an attentive clinical history, accurate vital signs, and a physical examination focused on mental status, work of breathing, and circulatory status. Laboratory tests may support the diagnosis but are not reliable in isolation. The goal of septic shock management is reversal of tissue hypoperfusion. The therapeutic end point is shock reversal. Mortality outcomes are significantly better among children who are managed appropriately. Every physician who cares for children must strive to maintain a high level of suspicion and keen clinical acumen for recognizing the rare but potentially seriously ill child.

  20. Shock effects in meteorites

    NASA Technical Reports Server (NTRS)

    Stoeffler, D.; Bischoff, A.; Buchwald, V.; Rubin, A. E.

    1988-01-01

    The impacts that can occur between objects on intersecting solar system orbits can generate shock-induced deformations and transformations, creating new mineral phases or melting old ones. These shock-metamorphic effects affect not only the petrography but the chemical and isotopic properties and the ages of primordial meteoritic materials. A fuller understanding of shock metamorphism and breccia formation in meteorites will be essential not only in the study of early accretion, differentiation, and regolith-evolution processes, but in the characterization of the primordial composition of the accreted material itself.

  1. Shocks near Jamming

    NASA Astrophysics Data System (ADS)

    Gómez, Leopoldo R.; Turner, Ari M.; van Hecke, Martin; Vitelli, Vincenzo

    2012-02-01

    Nonlinear sound is an extreme phenomenon typically observed in solids after violent explosions. But granular media are different. Right when they jam, these fragile and disordered solids exhibit a vanishing rigidity and sound speed, so that even tiny mechanical perturbations form supersonic shocks. Here, we perform simulations in which two-dimensional jammed granular packings are dynamically compressed and demonstrate that the elementary excitations are strongly nonlinear shocks, rather than ordinary phonons. We capture the full dependence of the shock speed on pressure and impact intensity by a surprisingly simple analytical model.

  2. Justifying the Gompertz curve of mortality via the generalized Polya process of shocks.

    PubMed

    Cha, Ji Hwan; Finkelstein, Maxim

    2016-06-01

    A new probabilistic model of aging that can be applied to organisms is suggested and analyzed. Organisms are subject to shocks that follow the generalized Polya process (GPP), which has been recently introduced and characterized in the literature. Distinct from the nonhomogeneous Poisson process that has been widely used in applications, the important feature of this process is the dependence of its future behavior on the number of previous events (shocks). The corresponding survival and the mortality rate functions are derived and analyzed. The general approach is used for justification of the Gompertz law of human mortality.
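The GPP derivation itself is beyond an abstract, but the Gompertz law it justifies is compact: a mortality rate mu(t) = a*exp(b*t) implies the survival function S(t) = exp(-(a/b)*(exp(b*t) - 1)). The sketch below evaluates both; the parameter values are illustrative, not fitted human-mortality coefficients.

```python
import math

def gompertz_rate(t, a=1.0e-4, b=0.085):
    """Gompertz mortality rate mu(t) = a * exp(b*t); parameters illustrative."""
    return a * math.exp(b * t)

def gompertz_survival(t, a=1.0e-4, b=0.085):
    """S(t) = exp(-integral_0^t mu(u) du) = exp(-(a/b) * (exp(b*t) - 1))."""
    return math.exp(-(a / b) * (math.exp(b * t) - 1.0))

for age in (30, 60, 90):
    print(f"age {age}: mu = {gompertz_rate(age):.5f}/yr, S = {gompertz_survival(age):.3f}")
```

The exponential growth of mu(t) with age is the signature the paper recovers from shock accumulation under the generalized Polya process.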

  3. Synthesis and initial evaluation of YM-08, a blood-brain barrier permeable derivative of the heat shock protein 70 (Hsp70) inhibitor MKT-077, which reduces tau levels.

    PubMed

    Miyata, Yoshinari; Li, Xiaokai; Lee, Hsiu-Fang; Jinwal, Umesh K; Srinivasan, Sharan R; Seguin, Sandlin P; Young, Zapporah T; Brodsky, Jeffrey L; Dickey, Chad A; Sun, Duxin; Gestwicki, Jason E

    2013-06-19

    The molecular chaperone, heat shock protein 70 (Hsp70), is an emerging drug target for treating neurodegenerative tauopathies. We recently found that one promising Hsp70 inhibitor, MKT-077, reduces tau levels in cellular models. However, MKT-077 does not penetrate the blood-brain barrier (BBB), limiting its use as either a clinical candidate or probe for exploring Hsp70 as a drug target in the central nervous system (CNS). We hypothesized that replacing the cationic pyridinium moiety in MKT-077 with a neutral pyridine might improve its clogP and enhance its BBB penetrance. To test this idea, we designed and synthesized YM-08, a neutral analogue of MKT-077. Like the parent compound, YM-08 bound to Hsp70 in vitro and reduced phosphorylated tau levels in cultured brain slices. Pharmacokinetic evaluation in CD1 mice showed that YM-08 crossed the BBB and maintained a brain/plasma (B/P) value of ∼0.25 for at least 18 h. Together, these studies suggest that YM-08 is a promising scaffold for the development of Hsp70 inhibitors suitable for use in the CNS.

  4. Probabilistic assessment of smart composite structures

    NASA Technical Reports Server (NTRS)

    Chamis, Christos C.; Shiao, Michael C.

    1994-01-01

    A composite wing with spars and bulkheads is used to demonstrate the effectiveness of probabilistic assessment of smart composite structures to control uncertainties in distortions and stresses. Results show that a smart composite wing can be controlled to minimize distortions and to have specified stress levels in the presence of defects. Structural responses such as changes in angle of attack, vertical displacements, and stress in the control and controlled plies are probabilistically assessed to quantify their respective uncertainties. Sensitivity factors are evaluated to identify those parameters that have the greatest influence on a specific structural response. Results show that smart composite structures can be configured to control both distortions and ply stresses to satisfy specified design requirements.

  5. Exact and Approximate Probabilistic Symbolic Execution

    NASA Technical Reports Server (NTRS)

    Luckow, Kasper; Pasareanu, Corina S.; Dwyer, Matthew B.; Filieri, Antonio; Visser, Willem

    2014-01-01

Probabilistic software analysis seeks to quantify the likelihood of reaching a target event under uncertain environments. Recent approaches compute probabilities of execution paths using symbolic execution, but do not support nondeterminism. Nondeterminism arises naturally when no suitable probabilistic model can capture a program behavior, e.g., for multithreading or distributed systems. In this work, we propose a technique, based on symbolic execution, to synthesize schedulers that resolve nondeterminism to maximize the probability of reaching a target event. To scale to large systems, we also introduce approximate algorithms to search for good schedulers, speeding up established random sampling and reinforcement learning results through the quantification of path probabilities based on symbolic execution. We implemented the techniques in Symbolic PathFinder and evaluated them on nondeterministic Java programs. We show that our algorithms significantly improve upon a state-of-the-art statistical model checking algorithm, originally developed for Markov Decision Processes.
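The path-probability quantification this abstract relies on has a simple core: the probability of a program path is the fraction of the input domain satisfying its path condition. The toy below replaces symbolic model counting with brute-force enumeration over a small finite domain; the example program and its branch constants are hypothetical.

```python
# Probabilistic symbolic execution, in miniature: the probability of a program
# path is the measure of inputs satisfying its path condition. Here the
# "model counter" is brute-force enumeration over a small finite domain;
# real tools use symbolic model counting. The example program is hypothetical.

def program_path(x, y):
    """Returns a label for the path taken through a toy branching program."""
    if x + y > 150:
        return "alarm" if x > 100 else "warn"
    return "ok"

DOMAIN = range(128)              # x, y uniform over 0..127, independent
counts = {}
for x in DOMAIN:
    for y in DOMAIN:
        path = program_path(x, y)
        counts[path] = counts.get(path, 0) + 1

total = len(DOMAIN) ** 2
for path, c in sorted(counts.items()):
    print(f"path {path!r}: probability {c / total:.4f}")
```

A scheduler synthesizer in the paper's setting would then pick, at each nondeterministic choice, the alternative whose downstream paths carry the largest total probability of reaching the target.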

  6. Probabilistic Methods for Structural Reliability and Risk

    NASA Technical Reports Server (NTRS)

    Chamis, Christos C.

    2010-01-01

A probabilistic method is used to evaluate the structural reliability and risk of select metallic and composite structures. The method is multiscale and multifunctional, and it is based at the most elemental level. A multifactor interaction model is used to describe the material properties, which are subsequently evaluated probabilistically. The metallic structure is a two-rotor aircraft engine, while the composite structures consist of laminated plies (multiscale) and the properties of each ply are the multifunctional representation. The structural component is modeled by finite elements. The solution method for structural responses is obtained by an updated simulation scheme. The results show that the risk for the two-rotor engine is about 0.0001, and for the composite built-up structure it is also 0.0001.

  7. Probabilistic Seismic Risk Model for Western Balkans

    NASA Astrophysics Data System (ADS)

    Stejskal, Vladimir; Lorenzo, Francisco; Pousse, Guillaume; Radovanovic, Slavica; Pekevski, Lazo; Dojcinovski, Dragi; Lokin, Petar; Petronijevic, Mira; Sipka, Vesna

    2010-05-01

    A probabilistic seismic risk model for insurance and reinsurance purposes is presented for an area of Western Balkans, covering former Yugoslavia and Albania. This territory experienced many severe earthquakes during past centuries producing significant damage to many population centres in the region. The highest hazard is related to external Dinarides, namely to the collision zone of the Adriatic plate. The model is based on a unified catalogue for the region and a seismic source model consisting of more than 30 zones covering all the three main structural units - Southern Alps, Dinarides and the south-western margin of the Pannonian Basin. A probabilistic methodology using Monte Carlo simulation was applied to generate the hazard component of the model. Unique set of damage functions based on both loss experience and engineering assessments is used to convert the modelled ground motion severity into the monetary loss.
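The Monte Carlo hazard component mentioned here follows a standard recipe: sample yearly event counts from a Poisson occurrence model, magnitudes from a truncated Gutenberg-Richter distribution, and ground motion from an attenuation relation with lognormal scatter. The sketch below uses that recipe with toy parameters; the rate, distance, attenuation coefficients, and threshold are all assumptions, not the Western Balkans model.

```python
import math, random

def poisson(lam):
    """Knuth's algorithm for a Poisson-distributed count."""
    l, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= l:
            return k
        k += 1

def sample_magnitude(m_min=5.0, m_max=8.0, b=1.0):
    """Inverse-CDF sample from a truncated Gutenberg-Richter distribution."""
    beta = b * math.log(10.0)
    u = random.random()
    c = 1.0 - math.exp(-beta * (m_max - m_min))
    return m_min - math.log(1.0 - u * c) / beta

def annual_exceedance(pga_threshold=0.1, years=50_000, seed=42):
    """Fraction of simulated years in which site PGA exceeds the threshold.
    Source rate, distance, and attenuation are illustrative toy values."""
    random.seed(seed)
    rate, distance = 0.2, 30.0        # events/yr in the zone; site distance (km)
    exceed = 0
    for _ in range(years):
        for _ in range(poisson(rate)):
            m = sample_magnitude()
            # toy attenuation: ln PGA[g] = -4 + m - 1.5*ln(d + 10) + N(0, 0.5)
            ln_pga = -4.0 + m - 1.5 * math.log(distance + 10.0) + random.gauss(0.0, 0.5)
            if math.exp(ln_pga) > pga_threshold:
                exceed += 1
                break                  # one exceedance per year is enough
    return exceed / years

p = annual_exceedance()
print(f"annual P(PGA > 0.1 g) ~ {p:.4f}  (return period ~ {1.0 / max(p, 1e-9):.0f} yr)")
```

In the insurance model, each simulated ground-motion field would additionally be pushed through the damage functions to produce a monetary-loss distribution rather than a simple exceedance count.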

  8. Modelling default and likelihood reasoning as probabilistic

    NASA Technical Reports Server (NTRS)

    Buntine, Wray

    1990-01-01

    A probabilistic analysis of plausible reasoning about defaults and about likelihood is presented. 'Likely' and 'by default' are in fact treated as duals in the same sense as 'possibility' and 'necessity'. To model these four forms probabilistically, a logic QDP and its quantitative counterpart DP are derived that allow qualitative and corresponding quantitative reasoning. Consistency and consequence results for subsets of the logics are given that require at most a quadratic number of satisfiability tests in the underlying propositional logic. The quantitative logic shows how to track the propagation error inherent in these reasoning forms. The methodology and sound framework of the system highlights their approximate nature, the dualities, and the need for complementary reasoning about relevance.

  9. Probabilistic Methods for Structural Reliability and Risk

    NASA Technical Reports Server (NTRS)

    Chamis, Christos C.

    2008-01-01

A probabilistic method is used to evaluate the structural reliability and risk of select metallic and composite structures. The method is multiscale and multifunctional, and it is based at the most elemental level. A multi-factor interaction model is used to describe the material properties, which are subsequently evaluated probabilistically. The metallic structure is a two-rotor aircraft engine, while the composite structures consist of laminated plies (multiscale) and the properties of each ply are the multifunctional representation. The structural component is modeled by finite elements. The solution method for structural responses is obtained by an updated simulation scheme. The results show that the risk for the two-rotor engine is about 0.0001, and for the composite built-up structure it is also 0.0001.

  10. Probabilistic Methods for Structural Design and Reliability

    NASA Technical Reports Server (NTRS)

    Chamis, Christos C.; Whitlow, Woodrow, Jr. (Technical Monitor)

    2002-01-01

This report describes a formal method to quantify structural damage tolerance and reliability in the presence of a multitude of uncertainties in turbine engine components. The method is based at the material behavior level, where primitive variables with their respective scatter ranges are used to describe behavior. Computational simulation is then used to propagate the uncertainties to the structural scale, where damage tolerance and reliability are usually specified. Several sample cases are described to illustrate the effectiveness, versatility, and maturity of the method. Typical results from this method demonstrate that it is mature and that it can be used to probabilistically evaluate turbine engine structural components. It may be inferred from the results that the method is suitable for probabilistically predicting the remaining life in aging or deteriorating structures, for making strategic projections and plans, and for achieving better, cheaper, faster products that give competitive advantages in world markets.

  11. Probabilistic Analysis of Gas Turbine Field Performance

    NASA Technical Reports Server (NTRS)

    Gorla, Rama S. R.; Pai, Shantaram S.; Rusick, Jeffrey J.

    2002-01-01

    A gas turbine thermodynamic cycle was computationally simulated and probabilistically evaluated in view of the several uncertainties in the performance parameters, which are indices of gas turbine health. Cumulative distribution functions and sensitivity factors were computed for the overall thermal efficiency and net specific power output due to the thermodynamic random variables. These results can be used to quickly identify the most critical design variables in order to optimize the design, enhance performance, increase system availability and make it cost effective. The analysis leads to the selection of the appropriate measurements to be used in the gas turbine health determination and to the identification of both the most critical measurements and parameters. Probabilistic analysis aims at unifying and improving the control and health monitoring of gas turbine aero-engines by increasing the quality and quantity of information available about the engine's health and performance.
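The kind of analysis described, propagating uncertain cycle parameters through a thermodynamic model to get distributions of efficiency, can be sketched with an ideal Brayton-cycle surrogate. The closed form below (ideal efficiency degraded by component-efficiency factors) is a crude textbook approximation, and every distribution is an illustrative assumption, not the paper's engine model.

```python
import math, random

def brayton_efficiency(r, eta_c, eta_t, gamma=1.4):
    """Crude Brayton-cycle surrogate: ideal efficiency 1 - r^(-(gamma-1)/gamma),
    degraded multiplicatively by compressor and turbine efficiencies.
    A textbook-style approximation, not an engine performance model."""
    ideal = 1.0 - r ** (-(gamma - 1.0) / gamma)
    return ideal * eta_c * eta_t

random.seed(3)
n = 100_000
samples = []
for _ in range(n):
    r = random.gauss(10.0, 0.3)          # pressure ratio, health-dependent
    eta_c = random.gauss(0.85, 0.01)     # compressor efficiency
    eta_t = random.gauss(0.90, 0.01)     # turbine efficiency
    samples.append(brayton_efficiency(r, eta_c, eta_t))

mean = sum(samples) / n
std = math.sqrt(sum((s - mean) ** 2 for s in samples) / (n - 1))
p_low = sum(s < 0.35 for s in samples) / n      # probability of degraded output
print(f"thermal efficiency: mean {mean:.3f}, std {std:.4f}, P(eta < 0.35) = {p_low:.4f}")
```

Sorting the samples gives the cumulative distribution function mentioned in the abstract, and correlating each input with the output gives crude sensitivity factors for identifying the most critical health parameters.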

  12. Probabilistic Assessment of National Wind Tunnel

    NASA Technical Reports Server (NTRS)

    Shah, A. R.; Shiao, M.; Chamis, C. C.

    1996-01-01

A preliminary probabilistic structural assessment of the critical section of the National Wind Tunnel (NWT) is performed using the NESSUS (Numerical Evaluation of Stochastic Structures Under Stress) computer code, thereby demonstrating the capabilities of the NESSUS code for addressing reliability issues of the NWT. Uncertainties in the geometry, material properties, loads, and stiffener location on the NWT are considered in the reliability assessment. Probabilistic stress, frequency, buckling, fatigue, and proof-load analyses are performed. These analyses cover the major global and some local design requirements. Based on the assumed uncertainties, the results reveal the assurance of a minimum 0.999 reliability for the NWT. Preliminary life prediction analysis results show that the life of the NWT is governed by the fatigue of welds. A reliability-based proof-test assessment is also performed.

  13. Significance testing as perverse probabilistic reasoning

    PubMed Central

    2011-01-01

    Truth claims in the medical literature rely heavily on statistical significance testing. Unfortunately, most physicians misunderstand the underlying probabilistic logic of significance tests and consequently often misinterpret their results. This near-universal misunderstanding is highlighted by means of a simple quiz which we administered to 246 physicians at two major academic hospitals, on which the proportion of incorrect responses exceeded 90%. A solid understanding of the fundamental concepts of probability theory is becoming essential to the rational interpretation of medical information. This essay provides a technically sound review of these concepts that is accessible to a medical audience. We also briefly review the debate in the cognitive sciences regarding physicians' aptitude for probabilistic inference. PMID:21356064
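The probabilistic logic that the quiz probes is essentially Bayes' theorem applied to diagnostic-style questions. The computation below is the classic base-rate illustration, with numbers chosen for the example rather than taken from the essay: even a fairly accurate test yields mostly false positives when the condition is rare.

```python
# The probabilistic logic behind diagnostic reasoning, via Bayes' theorem.
# Numbers are the classic base-rate illustration, not data from the essay.
prevalence  = 0.01     # P(disease)
sensitivity = 0.90     # P(test+ | disease)
specificity = 0.91     # P(test- | no disease)

# Total probability of a positive test: true positives plus false positives.
p_pos = prevalence * sensitivity + (1.0 - prevalence) * (1.0 - specificity)

# Bayes' theorem: P(disease | test+)
ppv = prevalence * sensitivity / p_pos

print(f"P(test+) = {p_pos:.4f}")
print(f"P(disease | test+) = {ppv:.3f}")
```

Despite 90% sensitivity, the posterior probability of disease given a positive test is only about 9%, which is exactly the kind of result most quiz respondents get wrong.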

  14. Optical methods for determining the shock Hugoniot of Solids

    NASA Astrophysics Data System (ADS)

    Svingala, Forrest; Hargather, Michael; Settles, Gary

    2013-06-01

Traditionally, the shock Hugoniot is measured on a point-by-point basis by a series of high-velocity impact experiments. Observations are typically confined to pointwise pressure or velocity measurements at the free surfaces of the sample. In this work, shock waves are initiated in transparent polyurethane and opaque polyurea samples using exploding bridgewires, aluminum ballistic projectiles, and gram-scale explosive charges. Shock waves and material motion are observed optically by shadowgraphy using a high-speed digital camera recording at up to 10^6 frames/s. Ballistic impact, producing a constant-strength shock wave, is combined with these optical techniques to obtain a single shock Hugoniot point per test. A gram-scale explosive charge produces a shock wave in the material sample that is initially strong but attenuates as it transits the polymer sample. With optical access to the entire sample, multiple shock and particle velocity combinations may be observed in a single test, allowing the measurement of a shock Hugoniot curve in fewer experiments than by traditional methods. These techniques produce data in general agreement with an extrapolation of published Hugoniot data for polyurethane and polyurea. Work funded by the Office of Naval Research.
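The data reduction behind a Hugoniot point follows the Rankine-Hugoniot jump conditions: a measured shock velocity Us and particle velocity up give the shock pressure P = rho0*Us*up, and many materials follow a linear fit Us = c0 + s*up. The density, c0, and s below are generic round numbers assumed for the example, not the published polyurethane coefficients.

```python
# Rankine-Hugoniot reduction used with Us-up data: each measured
# (shock velocity, particle velocity) pair gives one Hugoniot point via
# momentum conservation, P = rho0 * Us * up. The linear fit Us = c0 + s*up
# uses illustrative coefficients, not the published polymer values.
rho0 = 1100.0          # initial density, kg/m^3 (assumed)
c0, s = 2000.0, 1.5    # bulk sound speed (m/s) and slope, assumed for the example

for up in (250.0, 500.0, 1000.0):     # particle velocities, m/s
    us = c0 + s * up                  # shock velocity from the linear Hugoniot
    p = rho0 * us * up                # shock pressure, Pa
    print(f"up = {up:6.0f} m/s -> Us = {us:6.0f} m/s, P = {p / 1e9:.2f} GPa")
```

The advantage claimed in the abstract is that optical access to the attenuating wave yields several such (Us, up) pairs per shot instead of one.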

  15. Computational Study of Shock-Associated Noise Characteristics Using LES

    NASA Astrophysics Data System (ADS)

    Liu, J.; Corrigan, A.; Kailasanath, K.; Heeb, N.; Munday, D.; Gutmark, E.

    2013-11-01

    Shock-associated noise generation has been investigated by using large-eddy simulations to compute jet flows at an underexpanded jet condition with three jet temperatures. To better understand shock-associated noise generation, shock-free jets with the same fully expanded jet conditions have also been simulated. The predictions agree well with the available experimental data in both the near and far field. It is found that shock cells at this underexpanded jet condition have little impact on the jet core length and the turbulence kinetic energy distribution, whereas the heating effect has a much larger impact by increasing the initial shear-layer spreading and shortening the jet core length. Shock-associated noise dominates in the upstream direction, and the broadband peak frequencies move to higher values in the downstream direction. This frequency increase is initially small in the upstream direction, but becomes much larger in the downstream direction. In addition, it is found that the heating effect increases the broadband peak frequency. Overall, the heating effect increases the mixing noise and slightly reduces the shock-associated noise; this reduces the difference between the shock-containing jets and the shock-free jets as the temperature increases. This research has been sponsored by the Office of Naval Research (ONR).

  16. Optimization on the focusing of multiple shock waves

    NASA Astrophysics Data System (ADS)

    Qiu, Shi; Eliasson, Veronica

    2016-11-01

    Focusing of multiple shock waves can lead to extreme thermodynamic conditions, which are desired for applications like shock wave lithotripsy and inertial confinement fusion. To study shock focusing effects, multiple energy sources have been placed in a circular pattern around an intended target, with the distance between each source and the target fixed. All the sources are set to release the same amount of energy at the same time in order to create multiple identical shock waves. The objective is to optimize the thermodynamic conditions at the target by rearranging the initial placement of each source. However, this optimization problem can be challenging because of the high computational cost of solving the Euler equations. To avoid this issue, both numerical and analytical methods have been applied to handle shock focusing more efficiently. As a numerical method, Geometrical Shock Dynamics (GSD), an approximate theory, has been utilized to describe the motion of the shock. As an analytical method, a transition curve between regular and irregular reflection has been employed to predict shock interactions. Results show that computational cost can be reduced dramatically by combining GSD with the transition curve. In addition, optimization results based on varying initial setups are discussed.
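The computational savings of GSD come from replacing the Euler equations with a relation between shock strength and ray-tube area. A heavily simplified sketch of that core idea, using Whitham's strong-shock area-Mach relation A·M^n = const with n ≈ 5.0743 for γ = 1.4 (the geometry values are hypothetical):

```python
# Hedged sketch of the core Geometrical Shock Dynamics (GSD) idea: shock
# strength is tied to ray-tube area. In the strong-shock limit, A * M**n
# is constant along a ray tube, with n ~ 5.0743 for gamma = 1.4.
N_EXP = 5.0743

def mach_from_area(M0, A0, A):
    """Strong-shock GSD: Mach number after the ray-tube area changes."""
    return M0 * (A0 / A) ** (1.0 / N_EXP)

# A converging shock: halving the ray-tube area amplifies the Mach number,
# which is why focusing produces extreme conditions at the target.
M = mach_from_area(M0=3.0, A0=1.0, A=0.5)
print(f"Mach after 2x area convergence: {M:.3f}")
```

Full GSD marches an entire shock front this way, which is far cheaper than resolving the flow field behind it.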

  17. Simulated, Theoretical and Experimental Shock Trajectories in Cylindrical Geometry

    NASA Astrophysics Data System (ADS)

    Kanzleiter, Randall; Atchison, Walter; Bowers, Richard; Guzik, Joyce

    2001-06-01

    The current work compares computations and similarity relations for convergent shocks with experimental data from cylindrical implosions on the Shiva Star capacitor bank at AFRL. These experiments consisted of a solid cylindrical aluminum liner that is magnetically imploded onto a central target. The central target consists of an inner Lucite cylinder surrounded by an outer Sn layer. Shock propagation within the Lucite is measured to provide comparisons between simulations and theory of convergent shocks. Target design utilized the adaptive mesh refinement (AMR) Eulerian hydrodynamics code RAGE in 2D and 3D. 1D models of the solid liner using the RAVEN MHD code set initial liner/target interaction parameters, which are then used as initial conditions for the RAGE calculations. At liner/target impact, a convergent shock is generated that drives the subsequent hydrodynamics. In concentric targets, shocks converge on axis, characterizing the symmetry of the liner driver. By shifting the Lucite target center away from the liner symmetry axis, variations in shock propagation velocity generate off-center shock convergence. Comparison of experimentally measured and simulated shock trajectories will be discussed, as will convergence effects associated with cylindrical geometry. Efforts are currently underway to compare equation-of-state effects by utilizing a Gruneisen EOS instead of the original SESAME tables. Radial convergence is examined through comparisons with similarity solutions in cylindrical geometry.

  18. A nonlinear investigation of corrugation instabilities in magnetic accretion shocks

    NASA Astrophysics Data System (ADS)

    Ernst, Scott

    2011-05-01

    Accretion shock waves are present in many important astrophysical systems and have been a focus of research for decades. These investigations provide a large body of understanding as to the nature, characteristics, and evolutionary behaviors of accretion shock waves over a wide range of conditions. However, largely absent are investigations into the properties of accretion shock waves in the presence of strong magnetic fields. In such cases these strong magnetic fields can significantly alter the stability behaviors and evolution of the accretion shock wave through the production and propagation of magnetic waves as well as magnetically constrained advection. With strong magnetic fields likely found in a number of accretion shock systems, such as compact binary and protostellar systems, a better understanding of the behaviors of magnetic accretion shock waves is needed. A new magnetohydrodynamics simulation tool, IMOGEN, was developed to carry out an investigation of instabilities in strong, slow magnetic accretion shocks by modelling their long-term, nonlinear evolution. IMOGEN implements a relaxed, second-order, total variation diminishing, monotonic upwind scheme for conservation laws and incorporates a staggered-grid constrained transport scheme for magnetic advection. Through the simulated evolution of magnetic accretion shocks over a wide range of initial conditions, it has been shown, for sufficiently high magnetic field strengths, that magnetic accretion shocks are generally susceptible to corrugation instabilities, which arise in the presence of perturbations of the initial shock front. As these corrugation instabilities grow, they manifest as magnetic waves in the upstream region of the accretion column that propagate away from the accretion shock front, and as density columns, or fingers, that grow into the higher density downstream flow, defined and constrained by current loops created during the early evolution of the instability.

  19. Experimental shock metamorphism of maximum microcline

    NASA Technical Reports Server (NTRS)

    Robertson, P. B.

    1975-01-01

    A series of recovery experiments are conducted to study the behavior of single-crystal perthitic maximum microcline shock-loaded to a peak pressure of 417 kbar. Microcline is found to deform in a manner similar to quartz and other alkali feldspars. It is observed that shock-induced cleavages occur initially at or slightly below the Hugoniot elastic limit (60-85 kbar), that shock-induced rather than thermal disordering begins above the Hugoniot elastic limit, and that all types of planar elements form parallel to crystallographic planes of low Miller indices. With increasing pressure, it is found that bulk density, refractive indices, and birefringence of the recovered material decrease and approach diaplectic glass values, whereas disappearance and weakening of reflections in Debye-Scherrer patterns are due to disordering of the feldspar lattice.

  20. Universal shocks in random matrix theory.

    PubMed

    Blaizot, Jean-Paul; Nowak, Maciej A

    2010-11-01

    We link the appearance of universal kernels in random matrix ensembles to the phenomenon of shock formation in some fluid dynamical equations. Such equations are derived from Dyson's random walks after a proper rescaling of the time. In the case of the Gaussian unitary ensemble, on which we focus in this paper, we show that the characteristic polynomials and their inverse evolve according to a viscid Burgers equation with an effective "spectral viscosity" ν_s = 1/(2N), where N is the size of the matrices. We relate the edge of the spectrum of eigenvalues to the shock that naturally appears in the Burgers equation for appropriate initial conditions, thereby suggesting a connection between the well-known microscopic universality of random matrix theory and the universal properties of the solution of the Burgers equation in the vicinity of a shock.
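The N-dependence of the spectral viscosity can be illustrated numerically. A hedged sketch (a finite-difference solver, not the paper's derivation) evolving the viscid Burgers equation u_t + u u_x = ν u_xx with ν = 1/(2N), showing that the shock front sharpens as the matrix size N grows:

```python
import math

# Hedged numerical sketch: viscid Burgers equation with "spectral
# viscosity" nu = 1/(2N). A smooth initial profile steepens into a shock
# whose front is sharper for larger N (smaller viscosity). Grid sizes and
# the sin(x) initial condition are arbitrary illustration choices.
def burgers_max_slope(N, nx=256, t_end=1.0):
    nu = 1.0 / (2 * N)
    dx = 2 * math.pi / nx
    dt = 0.2 * min(dx, dx * dx / (2 * nu))  # CFL and diffusion limits
    u = [math.sin(i * dx) for i in range(nx)]
    t = 0.0
    while t < t_end:
        un = u[:]
        for i in range(nx):
            um, up = un[i - 1], un[(i + 1) % nx]  # periodic neighbors
            # First-order upwind advection plus central diffusion.
            adv = un[i] * ((un[i] - um) if un[i] > 0 else (up - un[i])) / dx
            u[i] = un[i] + dt * (nu * (up - 2 * un[i] + um) / (dx * dx) - adv)
        t += dt
    return max(abs(u[(i + 1) % nx] - u[i]) / dx for i in range(nx))

s_small_N, s_large_N = burgers_max_slope(N=5), burgers_max_slope(N=50)
print(s_small_N < s_large_N)  # smaller viscosity -> steeper shock front
```

In the paper's language, the 1/(2N) viscosity is what smooths the spectral edge over the microscopic scale where the universal kernel appears.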

  1. Violent Reactions from Non-Shock Stimuli

    NASA Astrophysics Data System (ADS)

    Sandusky, H. W.; Granholm, R. H.

    2007-12-01

    Most reactions are thermally initiated, whether from direct heating or dissipation of energy from mechanical, shock, or electrical stimuli. For other than prompt shock initiation, the reaction must spread through porosity or over large surface area to become more violent than just rupturing any confinement. While burning rates are important, high-strain mechanical properties are nearly so, either by reducing existing porosity or generating additional surface area through fracture. In studies of deflagration-to-detonation transition (DDT), it has been shown that reaction violence is reduced if the binder is softened, either by raising the initial temperature or adding a solvent. In studies of cavity collapse in explosives, those with soft rubber binders will deform and undergo mild reaction whereas those with stiff binders will fracture and generate additional surface area for a violent event.

  2. Toxic shock syndrome

    MedlinePlus

    ... by a toxin produced by some types of staphylococcus bacteria. A similar problem, called toxic shock-like ... men. Risk factors include: Recent childbirth Infection with Staphylococcus aureus ( S aureus ), commonly called a Staph infection Foreign ...

  3. Multiscale/Multifunctional Probabilistic Composite Fatigue

    NASA Technical Reports Server (NTRS)

    Chamis, Christos C.

    2010-01-01

    A multilevel (multiscale/multifunctional) evaluation is demonstrated by applying it to three different sample problems. These problems include the probabilistic evaluation of a space shuttle main engine blade, an engine rotor and an aircraft wing. The results demonstrate that the blade will fail at the highest probability path, the engine two-stage rotor will fail by fracture at the rim and the aircraft wing will fail at 10^9 fatigue cycles with a probability of 0.9967.

  4. Bayesian Probabilistic Projection of International Migration.

    PubMed

    Azose, Jonathan J; Raftery, Adrian E

    2015-10-01

    We propose a method for obtaining joint probabilistic projections of migration for all countries, broken down by age and sex. Joint trajectories for all countries are constrained to satisfy the requirement of zero global net migration. We evaluate our model using out-of-sample validation and compare point projections to the projected migration rates from a persistence model similar to the method used in the United Nations' World Population Prospects, and also to a state-of-the-art gravity model.
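One distinctive ingredient of the method is the joint constraint that world net migration sums to zero. A minimal sketch of that adjustment step (the country populations and sampled rates are hypothetical):

```python
import random

# Hedged sketch of the zero-global-net-migration constraint: projected
# net migration rates are jointly shifted so that total migrant flow
# across all countries sums to zero (emigrants must arrive somewhere).
def constrain_zero_net(rates, populations):
    """Shift every country's rate so the population-weighted sum is zero."""
    total_flow = sum(r * p for r, p in zip(rates, populations))
    total_pop = sum(populations)
    return [r - total_flow / total_pop for r in rates]

random.seed(1)
pops = [50e6, 8e6, 120e6]                     # hypothetical populations
trajectories = [[random.gauss(0.0, 0.005) for _ in pops] for _ in range(3)]
adjusted = [constrain_zero_net(traj, pops) for traj in trajectories]
for traj in adjusted:
    net = sum(r * p for r, p in zip(traj, pops))
    print(abs(net) < 1e-6)
```

Applying the adjustment to every sampled trajectory keeps the constraint satisfied across the whole joint probabilistic projection, not just on average.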

  5. Probabilistically teleporting arbitrary two-qubit states

    NASA Astrophysics Data System (ADS)

    Choudhury, Binayak S.; Dhara, Arpan

    2016-12-01

    In this paper we make use of two non-maximally entangled three-qubit channels for probabilistically teleporting arbitrary two particle states from a sender to a receiver. We also calculate the success probability of the teleportation. In the protocol we use two measurements of which one is a POVM and the other is a projective measurement. The POVM provides the protocol with operational advantage.

  6. Probabilistic Anisotropic Failure Criteria for Composite Materials.

    DTIC Science & Technology

    1987-12-01

    Worksheets were based on Microsoft Excel software. ... analytically described the failure criterion and probabilistic failure states of an anisotropic composite in a combined stress state. Strength ... APPENDIX F: RELIABILITY/FAILURE FUNCTION WORKSHEET ... APPENDIX G: PERCENTILE STRENGTH WORKSHEET ...

  7. Maritime Threat Detection Using Probabilistic Graphical Models

    DTIC Science & Technology

    2012-01-01

    CRF, unlike an HMM, can represent local features, and does not require feature concatenation. For MLNs, we used Alchemy (Alchemy 2011), an open-source statistical relational learning and probabilistic inferencing package. Alchemy supports generative and discriminative weight learning, and ... Alchemy creates a new formula for every possible combination of the values for a1 and a2 that fit the type specified in their predicate

  8. Probabilistic, meso-scale flood loss modelling

    NASA Astrophysics Data System (ADS)

    Kreibich, Heidi; Botto, Anna; Schröter, Kai; Merz, Bruno

    2016-04-01

    Flood risk analyses are an important basis for decisions on flood risk management and adaptation. However, such analyses are associated with significant uncertainty, even more so if changes in risk due to global change are expected. Although uncertainty analysis and probabilistic approaches have received increased attention during recent years, they are still not standard practice for flood risk assessments, and even less so for flood loss modelling. The state of the art in flood loss modelling is still the use of simple, deterministic approaches like stage-damage functions. Novel probabilistic, multi-variate flood loss models have been developed and validated on the micro-scale using a data-mining approach, namely bagging decision trees (Merz et al. 2013). In this presentation we demonstrate and evaluate the upscaling of the approach to the meso-scale, namely on the basis of land-use units. The model is applied in 19 municipalities which were affected during the 2002 flood by the River Mulde in Saxony, Germany (Botto et al., submitted). The application of bagging decision tree based loss models provides a probability distribution of estimated loss per municipality. Validation is undertaken, on the one hand, via a comparison with eight deterministic loss models, including stage-damage functions as well as multi-variate models. On the other hand, the results are compared with official loss data provided by the Saxon Relief Bank (SAB). The results show that uncertainties in loss estimation remain high. Thus, the significant advantage of this probabilistic flood loss estimation approach is that it inherently provides quantitative information about the uncertainty of the prediction. References: Merz, B.; Kreibich, H.; Lall, U. (2013): Multi-variate flood damage assessment: a tree-based data-mining approach. NHESS, 13(1), 53-64. Botto, A.; Kreibich, H.; Merz, B.; Schröter, K. (submitted): Probabilistic, multi-variable flood loss modelling on the meso-scale with BT-FLEMO. Risk Analysis.
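The key mechanism, bootstrap-aggregated ("bagged") trees yielding a loss distribution rather than a point estimate, can be sketched minimally. A one-split regression "stump" stands in for a full decision tree, and the depth/loss data are synthetic, not the Mulde data:

```python
import random

# Hedged, minimal illustration of bagging-decision-tree loss modelling:
# each bootstrap resample yields one fitted tree and hence one loss
# estimate; together the estimates form a predictive distribution.
def fit_stump(xs, ys):
    """Best single-threshold split on x minimizing squared error."""
    best = None
    for t in xs:
        lo = [y for x, y in zip(xs, ys) if x <= t]
        hi = [y for x, y in zip(xs, ys) if x > t]
        if not lo or not hi:
            continue
        ml, mh = sum(lo) / len(lo), sum(hi) / len(hi)
        err = (sum((y - ml) ** 2 for y in lo)
               + sum((y - mh) ** 2 for y in hi))
        if best is None or err < best[0]:
            best = (err, t, ml, mh)
    _, t, ml, mh = best
    return lambda x: ml if x <= t else mh

random.seed(7)
depth = [random.uniform(0.1, 3.0) for _ in range(60)]          # water depth, m
loss = [0.1 + 0.2 * d + random.gauss(0, 0.05) for d in depth]  # loss ratio

preds = []  # one loss estimate per bootstrap resample ("bag")
for _ in range(100):
    idx = [random.randrange(len(depth)) for _ in range(len(depth))]
    stump = fit_stump([depth[i] for i in idx], [loss[i] for i in idx])
    preds.append(stump(1.5))  # predicted loss ratio at 1.5 m water depth

preds.sort()
print(f"median {preds[50]:.2f}, 90% interval [{preds[5]:.2f}, {preds[95]:.2f}]")
```

The spread of `preds` is exactly the "quantitative information about the uncertainty of the prediction" the abstract highlights; a stage-damage function would return only a single number.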

  9. The probabilistic structure of planetary contamination models

    NASA Technical Reports Server (NTRS)

    Harrison, J. M.; North, W. D.

    1973-01-01

    The analytical basis for planetary quarantine standards and procedures is presented. The hierarchy of planetary quarantine decisions is explained and emphasis is placed on the determination of mission specifications to include sterilization. The influence of the Sagan-Coleman probabilistic model of planetary contamination on current standards and procedures is analyzed. A classical problem in probability theory which provides a close conceptual parallel to the type of dependence present in the contamination problem is presented.

  10. Probabilistic Network Approach to Decision-Making

    NASA Astrophysics Data System (ADS)

    Nicolis, Grégoire; Nicolis, Stamatios C.

    2015-06-01

    A probabilistic approach to decision-making is developed in which the states of the underlying stochastic process, assumed to be of the Markov type, represent the competing options. The principal parameters determining the dominance of a particular option versus the others are identified and the transduction of information associated to the transitions between states is quantified using a set of entropy-like quantities.
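The framework described above can be sketched concretely: competing options are states of a Markov chain, "dominance" is read off the stationary distribution, and the information transduced by transitions is measured by an entropy-like rate. The 3-option transition matrix below is hypothetical:

```python
import math

# Hedged sketch of a Markov decision network: rows of P give the
# transition probabilities between the three competing options.
P = [[0.7, 0.2, 0.1],
     [0.3, 0.5, 0.2],
     [0.2, 0.3, 0.5]]

def stationary(P, iters=500):
    """Power iteration for the stationary distribution pi = pi P."""
    pi = [1.0 / len(P)] * len(P)
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(len(P)))
              for j in range(len(P))]
    return pi

def entropy_rate(P, pi):
    """H = -sum_i pi_i sum_j P_ij log2 P_ij, bits per transition."""
    return -sum(pi[i] * sum(p * math.log2(p) for p in row if p > 0)
                for i, row in enumerate(P))

pi = stationary(P)
print("dominant option:", pi.index(max(pi)))
print(f"entropy rate: {entropy_rate(P, pi):.3f} bits/transition")
```

Strengthening a diagonal entry of P (an option's self-reinforcement) raises that option's stationary weight, which is the dominance mechanism the abstract refers to.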

  11. Dynamic competitive probabilistic principal components analysis.

    PubMed

    López-Rubio, Ezequiel; Ortiz-de-Lazcano-Lobato, Juan Miguel

    2009-04-01

    We present a new neural model which extends the classical competitive learning (CL) by performing a Probabilistic Principal Components Analysis (PPCA) at each neuron. The model also has the ability to learn the number of basis vectors required to represent the principal directions of each cluster, so it overcomes a drawback of most local PCA models, where the dimensionality of a cluster must be fixed a priori. Experimental results are presented to show the performance of the network with multispectral image data.
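The building block each neuron uses is Probabilistic PCA for its own cluster. A hedged sketch of the Tipping-Bishop maximum-likelihood PPCA solution for a single cluster (the competitive and dynamic parts of the model above are not reproduced, and the data are synthetic):

```python
import numpy as np

# Hedged sketch: closed-form maximum-likelihood PPCA for one cluster.
# W spans the q principal directions; the discarded eigenvalues give the
# isotropic noise variance sigma^2.
def ppca(X, q):
    """Return loading matrix W (d x q) and noise variance sigma^2."""
    Xc = X - X.mean(axis=0)
    cov = np.cov(Xc, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)           # ascending eigenvalues
    vals, vecs = vals[::-1], vecs[:, ::-1]     # reorder to descending
    sigma2 = vals[q:].mean()                   # average discarded variance
    W = vecs[:, :q] * np.sqrt(np.maximum(vals[:q] - sigma2, 0.0))
    return W, sigma2

rng = np.random.default_rng(0)
latent = rng.normal(size=(500, 1))             # one true latent direction
X = latent @ np.array([[3.0, 2.0, 0.5]]) + 0.1 * rng.normal(size=(500, 3))
W, sigma2 = ppca(X, q=1)
print(W.ravel().round(2), round(float(sigma2), 3))
```

Learning q per neuron, as the model above does, amounts to choosing how many leading eigenvalues stand clearly above the noise floor sigma^2.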

  12. Probabilistic structural analysis methods and applications

    NASA Technical Reports Server (NTRS)

    Cruse, T. A.; Wu, Y.-T.; Dias, B.; Rajagopal, K. R.

    1988-01-01

    An advanced algorithm for simulating the probabilistic distribution of structural responses due to statistical uncertainties in loads, geometry, material properties, and boundary conditions is reported. The method effectively combines an advanced algorithm for calculating probability levels for multivariate problems (fast probability integration) together with a general-purpose finite-element code for stress, vibration, and buckling analysis. Application is made to a space propulsion system turbine blade for which the geometry and material properties are treated as random variables.
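The underlying idea, propagating random loads, geometry, and material scatter through a structural response function to get a failure probability, can be shown with plain Monte Carlo (a hedged toy, not the fast probability integration algorithm; the cantilever-stress stand-in for the finite-element model and all numbers are hypothetical):

```python
import random

# Hedged toy version of probabilistic structural analysis: treat load and
# geometry as random variables, push them through a response function,
# and estimate the probability that stress exceeds an assumed allowable.
def max_stress(load, thickness, width, length=0.1):
    """Bending stress at the root of a cantilever: 6*F*L / (b*t^2)."""
    return 6 * load * length / (width * thickness ** 2)

random.seed(42)
strength = 3.5e9   # Pa, assumed allowable stress (hypothetical)
n, fails = 10000, 0
for _ in range(n):
    F = random.gauss(2000.0, 300.0)   # load, N
    t = random.gauss(0.005, 0.0002)   # thickness, m
    b = random.gauss(0.02, 0.0005)    # width, m
    if max_stress(F, t, b) > strength:
        fails += 1
print(f"estimated P(failure) ~ {fails / n:.4f}")
```

Fast probability integration reaches the same tail probabilities with far fewer response evaluations, which is what makes coupling it to a finite-element code practical.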

  13. Extravehicular Activity Probabilistic Risk Assessment Overview for Thermal Protection System Repair on the Hubble Space Telescope Servicing Mission

    NASA Technical Reports Server (NTRS)

    Bigler, Mark; Canga, Michael A.; Duncan, Gary

    2010-01-01

    The Shuttle Program initiated an Extravehicular Activity (EVA) Probabilistic Risk Assessment (PRA) to assess the risks associated with performing a Shuttle Thermal Protection System (TPS) repair during the Space Transportation System (STS)-125 Hubble repair mission as part of risk trades between TPS repair and crew rescue.

  14. Early Treatment in Shock

    DTIC Science & Technology

    2011-06-01

    jasminoides Ellis). It is commonly sold in the United States as an herbal supplement. In terms of crocetin's fundamental mode of action in host ... supplementation can be beneficial in ameliorating the tissue damage produced following experimental hemorrhagic shock. In the present investigation ... experimental rat model of hemorrhagic shock. Our studies were designed to test two hypotheses. First, L-arginine supplementation during resuscitation will

  15. Shock Properties of Kimberlite

    NASA Astrophysics Data System (ADS)

    Willmott, G. R.; Proud, W. G.; Field, J. E.

    2004-07-01

    Plate impact experiments have been performed on kimberlite, the igneous, diamond-bearing matrix rock. Longitudinal and lateral stresses were measured in the uniaxial strain regime using manganin stress gauges. The shock Hugoniot of the kimberlite has been characterized at axial stresses between 1 and 9 GPa. The kimberlite has a low impedance response when compared with similar data for other geological materials. The data indicate that the rock behaves inelastically above shock stresses of 1 GPa.

  16. Fluid therapy in shock.

    PubMed

    Mandell, D C; King, L G

    1998-05-01

    The goal of treatment for all types of shock is the improvement of tissue perfusion and oxygenation. The mainstay of therapy for hypovolemic and septic shock is the expansion of the intravascular volume by fluid administration, including crystalloids, colloids, and blood products. Frequent physical examinations and monitoring enable the clinician to determine the adequacy of tissue oxygenation and thus the success of the fluid therapy.

  17. "Smart" Electromechanical Shock Absorber

    NASA Technical Reports Server (NTRS)

    Stokes, Lebarian; Glenn, Dean C.; Carroll, Monty B.

    1989-01-01

    Shock-absorbing apparatus includes electromechanical actuator and digital feedback control circuitry rather than springs and hydraulic damping as in conventional shock absorbers. Device not subject to leakage and requires little or no maintenance. Attenuator parameters adjusted in response to sensory feedback and predictive algorithms to obtain desired damping characteristic. Device programmed to decelerate slowly approaching vehicle or other large object according to prescribed damping characteristic.

  18. Catecholamines in shock.

    PubMed

    Alho, A; Jäättelä, A; Lahdensuu, M; Rokkanen, P; Avikainen, V; Karaharju, E; Tervo, T; Lepistö, P

    1977-06-01

    The role of endogenous catecholamines in various clinical shock and stress states is reviewed; the effects, especially on the peripheral circulation, of catecholamine secretion are the same independent of the cause. Risks of using sympathomimetic agents in the treatment of shock are evaluated. A prolonged noradrenaline activity is to be expected in surgical stress states, e.g. multiple injuries, fat embolism syndrome, burns and infections; therapeutic approaches to minimize the sympathoadrenal activity are outlined.

  19. Asteroid Risk Assessment: A Probabilistic Approach.

    PubMed

    Reinhardt, Jason C; Chen, Xi; Liu, Wenhao; Manchev, Petar; Paté-Cornell, M Elisabeth

    2016-02-01

    Following the 2013 Chelyabinsk event, the risks posed by asteroids attracted renewed interest from both the scientific and policy-making communities. It reminded the world that impacts from near-Earth objects (NEOs), while rare, have the potential to cause great damage to cities and populations. Point estimates of the risk (such as mean numbers of casualties) have been proposed, but because of the low-probability, high-consequence nature of asteroid impacts, these averages provide limited actionable information. While more work is needed to further refine its input distributions (e.g., NEO diameters), the probabilistic model presented in this article allows a more complete evaluation of the risk of NEO impacts because the results are distributions that cover the range of potential casualties. This model is based on a modularized simulation that uses probabilistic inputs to estimate probabilistic risk metrics, including those of rare asteroid impacts. Illustrative results of this analysis are presented for a period of 100 years. As part of this demonstration, we assess the effectiveness of civil defense measures in mitigating the risk of human casualties. We find that they are likely to be beneficial but not a panacea. We also compute the probability (but not the consequences) of an impact with global effects ("cataclysm"). We conclude that there is a continued need for NEO observation, and for analyses of the feasibility and risk-reduction effectiveness of space missions designed to deflect or destroy asteroids that threaten the Earth.
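The modular-simulation idea can be sketched in miniature: sample impact events over a century and accumulate a casualty distribution instead of a single mean. The rate and severity parameters below are hypothetical stand-ins, not the article's calibrated inputs:

```python
import random

# Hedged, heavily simplified sketch of probabilistic NEO risk assessment:
# event counts per century are Poisson-like, severities are heavy-tailed
# (most impacts hit ocean or unpopulated land), and repeated simulation
# yields a distribution of outcomes rather than a point estimate.
random.seed(3)

def simulate_century():
    casualties = 0
    # Hypothetical: ~2 damaging airburst-class impacts per century.
    n_events = sum(1 for _ in range(200) if random.random() < 0.01)
    for _ in range(n_events):
        severity = random.paretovariate(1.5) - 1.0  # heavy right tail
        casualties += int(1000 * severity)
    return casualties

runs = sorted(simulate_century() for _ in range(5000))
mean = sum(runs) / len(runs)
print(f"mean {mean:.0f}, median {runs[2500]}, 99th pct {runs[4950]}")
```

Note how the mean sits well above the median: exactly the property that makes point estimates of low-probability, high-consequence risk misleading.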

  20. Probabilistic Graph Layout for Uncertain Network Visualization.

    PubMed

    Schulz, Christoph; Nocaj, Arlind; Goertler, Jochen; Deussen, Oliver; Brandes, Ulrik; Weiskopf, Daniel

    2017-01-01

    We present a novel uncertain network visualization technique based on node-link diagrams. Nodes expand spatially in our probabilistic graph layout, depending on the underlying probability distributions of edges. The visualization is created by computing a two-dimensional graph embedding that combines samples from the probabilistic graph. A Monte Carlo process is used to decompose a probabilistic graph into its possible instances, which are then processed by our graph layout technique. Splatting and edge bundling are used to visualize point clouds and network topology. The results provide insights into probability distributions for the entire network, not only for individual nodes and edges. We validate our approach using three data sets that represent a wide range of network types: synthetic data, protein-protein interactions from the STRING database, and travel times extracted from Google Maps. Our approach reveals general limitations of the force-directed layout and allows the user to recognize that some nodes of the graph are at a specific position just by chance.
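The first stage described above, Monte Carlo decomposition of a probabilistic graph into concrete instances, is straightforward to sketch. The 4-node graph with edge-existence probabilities is hypothetical:

```python
import random

# Hedged sketch: sample deterministic graph instances from a probabilistic
# graph whose edges carry existence probabilities; a layout algorithm
# would then embed each instance and combine the embeddings.
random.seed(11)
edges = {("A", "B"): 0.9, ("B", "C"): 0.5, ("C", "D"): 0.2, ("A", "D"): 0.7}

def sample_instance(edges):
    """One possible deterministic graph drawn from the edge probabilities."""
    return [e for e, p in edges.items() if random.random() < p]

samples = [sample_instance(edges) for _ in range(2000)]
freq = {e: sum(e in s for s in samples) / len(samples) for e in edges}
for e, p in edges.items():
    print(e, round(freq[e], 2), "~", p)
```

Each sampled instance is an ordinary graph, so any standard layout can process it; the spatial spread of a node across instances is what the splatting step visualizes.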