Evaluating average and atypical response in radiation effects simulations
NASA Astrophysics Data System (ADS)
Weller, R. A.; Sternberg, A. L.; Massengill, L. W.; Schrimpf, R. D.; Fleetwood, D. M.
2003-12-01
We examine the limits of performing single-event simulations using pre-averaged radiation events. Geant4 simulations show that, for future devices, current methods must be supplemented with ensemble averaging of device-level responses to physically realistic radiation events. Initial Monte Carlo simulations have generated a significant number of extremal events in local energy deposition. These simulations strongly suggest that proton strikes of sufficient energy, even those that initiate purely electronic interactions, can produce a device response capable, in principle, of causing single event upset or microdose damage in highly scaled devices.
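The gap between a pre-averaged event and an ensemble of realistic events can be illustrated with a toy Monte Carlo. This is a sketch with invented numbers, not a Geant4 result: a rare, heavy-tailed deposition channel dominates the extremes while barely moving the mean.

```python
import random

def deposit_energy(rng):
    """Toy per-event energy deposition (keV) in a sensitive volume.

    Most events deposit little energy, but rare nuclear-reaction-like
    events deposit far more: a heavy-tailed mixture with illustrative
    numbers only, not calibrated to any real device."""
    if rng.random() < 0.01:          # rare extremal event
        return rng.uniform(100.0, 1000.0)
    return rng.uniform(0.1, 5.0)     # typical ionization event

def ensemble_stats(n_events, seed=1):
    """Return (mean, max) deposition over an ensemble of events."""
    rng = random.Random(seed)
    samples = [deposit_energy(rng) for _ in range(n_events)]
    return sum(samples) / n_events, max(samples)

mean_dep, max_dep = ensemble_stats(100_000)
# The maximum deposition exceeds the mean by orders of magnitude, so a
# single pre-averaged event misses the upset-capable extremes entirely.
```

A rate analysis driven only by `mean_dep` would predict no upsets even though individual events deposit hundreds of keV.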
Anthology of the Development of Radiation Transport Tools as Applied to Single Event Effects
NASA Astrophysics Data System (ADS)
Reed, R. A.; Weller, R. A.; Akkerman, A.; Barak, J.; Culpepper, W.; Duzellier, S.; Foster, C.; Gaillardin, M.; Hubert, G.; Jordan, T.; Jun, I.; Koontz, S.; Lei, F.; McNulty, P.; Mendenhall, M. H.; Murat, M.; Nieminen, P.; O'Neill, P.; Raine, M.; Reddell, B.; Saigné, F.; Santin, G.; Sihver, L.; Tang, H. H. K.; Truscott, P. R.; Wrobel, F.
2013-06-01
This anthology contains contributions from eleven different groups, each developing and/or applying Monte Carlo-based radiation transport tools to simulate a variety of effects that result from energy transferred to a semiconductor material by a single particle event. The topics span from basic mechanisms for single-particle induced failures to applied tasks like developing websites to predict on-orbit single event failure rates using Monte Carlo radiation transport tools.
Effects of cosmic rays on single event upsets
NASA Technical Reports Server (NTRS)
Venable, D. D.; Zajic, V.; Lowe, C. W.; Olidapupo, A.; Fogarty, T. N.
1989-01-01
Assistance was provided to the Brookhaven Single Event Upset (SEU) Test Facility. Computer codes were developed for fragmentation and secondary radiation affecting Very Large Scale Integration (VLSI) circuits in space. A computer-controlled CV (HP4192) test was developed for Terman analysis. Also developed were high-speed parametric tests that are independent of operator judgment, and a charge pumping technique for measurement of D(sub it)(E). X-ray secondary effects and parametric degradation as a function of dose rate were simulated, and static RAMs with various resistor filters were simulated in SPICE.
Black, Dolores Archuleta; Robinson, William H.; Wilcox, Ian Zachary; ...
2015-08-07
Single event effects (SEE) are a reliability concern for modern microelectronics. Bit corruptions can be caused by single event upsets (SEUs) in the storage cells or by sampling single event transients (SETs) from a logic path. Accurate prediction of soft error susceptibility from SETs therefore requires good models to convert collected charge into compact descriptions of the current injection process. This paper describes a simple, yet effective, method to model the current waveform resulting from a charge collection event for SET circuit simulations. The model uses two double-exponential current sources in parallel, and the results illustrate why a conventional model based on one double-exponential source can be incomplete. Furthermore, a small set of logic cells with varying input conditions, drive strength, and output loading are simulated to extract the parameters for the dual double-exponential current sources. As a result, the parameters are based upon both the node capacitance and the restoring current (i.e., drive strength) of the logic cell.
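A minimal sketch of the waveform model described above, with placeholder time constants and peak currents (the paper's extracted parameters are not reproduced here):

```python
import math

def double_exp(t, i_peak, tau_rise, tau_fall):
    """Single double-exponential current pulse (A); t in seconds."""
    if t < 0.0:
        return 0.0
    return i_peak * (math.exp(-t / tau_fall) - math.exp(-t / tau_rise))

def set_current(t):
    """Dual double-exponential SET current: a fast 'prompt' pulse plus a
    slower 'hold' component injected in parallel.  All parameter values
    are illustrative placeholders, not values from the paper."""
    prompt = double_exp(t, 1.2e-3, 5e-12, 50e-12)    # fast collection
    hold = double_exp(t, 0.2e-3, 20e-12, 500e-12)    # slow collection
    return prompt + hold

# Sample the composite waveform on a 1 ps grid out to 2 ns.
waveform = [set_current(k * 1e-12) for k in range(2000)]
```

In a SPICE netlist the two terms would simply be two independent current sources attached to the struck node, which is what allows the fit to capture both the prompt peak and the long recovery tail.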
NASA Astrophysics Data System (ADS)
Yoshida, Eiji; Tashima, Hideaki; Yamaya, Taiga
2014-11-01
In a conventional PET scanner, coincidence events are measured within a limited energy window to detect photoelectric events and reject Compton scatter occurring in the patient; however, Compton scatter events occurring in the detector crystals are rejected as well. Scatter within the patient produces scatter coincidences, but inter-crystal scattering (ICS) events carry useful information for determining the activity distribution. Some researchers have reported the feasibility of PET scanners based on a Compton camera that traces ICS in the detector, but these scanners require expensive semiconductor detectors to achieve high energy resolution. In an Anger-type block detector, the interaction positions of single photons that interact with multiple detectors can be obtained, yielding information as complete as that from photoelectric events in a single detector. ICS events within a single detector have been used to form coincidences, but single photons interacting with multiple detectors have not. In this work, we evaluated the sensitivity improvement obtained by applying Compton kinematics in several types of DOI-PET scanners. The proposed method improves sensitivity by using coincidence events built from single photons interacting with multiple detectors, in which the first interaction (FI) is identified. FI estimation accuracy can be improved by checking FI validity against the correlation between the Compton scatter angles calculated along the coincidence line of response. We simulated an animal PET scanner consisting of 42 detectors, with each detector block consisting of three types of scintillator crystals (LSO, GSO, and GAGG). After the simulation, coincidence events were tallied for several depth-of-interaction (DOI) resolutions. From the simulation results, we concluded that the proposed method improves sensitivity considerably when the effective atomic number of the scintillator is low. We also showed that FI estimation accuracy improves as DOI resolution increases.
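The scatter-angle check mentioned above rests on standard Compton kinematics. A sketch, assuming 511 keV annihilation photons and ideal energy measurement (detector geometry and resolution effects omitted):

```python
import math

M_E = 511.0  # electron rest energy, keV
E0 = 511.0   # annihilation photon energy, keV

def scatter_angle_deg(e_deposited):
    """Compton scatter angle implied by the energy deposited in the
    first interaction of a 511 keV photon (keV in, degrees out),
    from 1/E' - 1/E0 = (1 - cos(theta)) / (m_e c^2)."""
    e_scattered = E0 - e_deposited
    if e_scattered <= 0.0:
        raise ValueError("full absorption: no scattered photon")
    cos_theta = 1.0 - M_E * (1.0 / e_scattered - 1.0 / E0)
    if not -1.0 <= cos_theta <= 1.0:
        raise ValueError("kinematically forbidden deposition")
    return math.degrees(math.acos(cos_theta))
```

Comparing the angle implied by the deposited energy with the angle implied by the interaction geometry along the line of response gives the FI validity test: an inconsistent pair suggests the wrong crystal was tagged as the first interaction.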
Discrete-Event Simulation Unmasks the Quantum Cheshire Cat
NASA Astrophysics Data System (ADS)
Michielsen, Kristel; Lippert, Thomas; Raedt, Hans De
2017-05-01
It is shown that discrete-event simulation accurately reproduces the experimental data of a single-neutron interferometry experiment [T. Denkmayr et al., Nat. Commun. 5, 4492 (2014)] and provides a logically consistent, paradox-free, cause-and-effect explanation of the quantum Cheshire cat effect without invoking the notion that the neutron and its magnetic moment separate. Describing the experimental neutron data using weak-measurement theory is shown to be useless for unravelling the quantum Cheshire cat effect.
Laser Scanner Tests For Single-Event Upsets
NASA Technical Reports Server (NTRS)
Kim, Quiesup; Soli, George A.; Schwartz, Harvey R.
1992-01-01
Microelectronic advanced laser scanner (MEALS) is opto/electro/mechanical apparatus for nondestructive testing of integrated memory circuits, logic circuits, and other microelectronic devices. Multipurpose diagnostic system used to determine ultrafast time response, leakage, latchup, and electrical overstress. Used to simulate some of effects of heavy ions accelerated to high energies to determine susceptibility of digital device to single-event upsets.
NASA Astrophysics Data System (ADS)
Li, Jiaqiang; Choutko, Vitaly; Xiao, Liyi
2018-03-01
Based on the collection of error data from the Alpha Magnetic Spectrometer (AMS) Digital Signal Processors (DSPs), on-orbit Single Event Upsets (SEUs) of the DSP program memory are analyzed. The daily error distribution and the time intervals between errors are calculated to evaluate the reliability of the system. The particle density distribution of the International Space Station (ISS) orbit is presented, and the effects of the South Atlantic Anomaly (SAA) and the geomagnetic poles are analyzed. The impact of solar events on the DSP program memory is assessed by combining data analysis with Monte Carlo (MC) simulation. From the analysis and simulation results, it is concluded that the area corresponding to the SAA is the main source of errors on the ISS orbit. Solar events can also cause errors in DSP program memory, but the effect depends on the on-orbit particle density.
Single event test methodology for integrated optoelectronics
NASA Technical Reports Server (NTRS)
Label, Kenneth A.; Cooley, James A.; Stassinopoulos, E. G.; Marshall, Paul; Crabtree, Christina
1993-01-01
A single event upset (SEU), defined as a transient or glitch on the output of a device, and its applicability to integrated optoelectronics are discussed in the context of spacecraft design and the need for more than a bit-error-rate viewpoint for testing and analysis. A methodology for testing integrated optoelectronic receivers and transmitters for SEUs is presented, focusing on the actual test requirements and system schemes needed for integrated optoelectronic devices. The two main causes of single event effects in the space environment, protons and galactic cosmic rays, are considered along with ground test facilities for simulating the space environment.
Single-Event Effects in High-Frequency Linear Amplifiers: Experiment and Analysis
NASA Astrophysics Data System (ADS)
Zeinolabedinzadeh, Saeed; Ying, Hanbin; Fleetwood, Zachary E.; Roche, Nicolas J.-H.; Khachatrian, Ani; McMorrow, Dale; Buchner, Stephen P.; Warner, Jeffrey H.; Paki-Amouzou, Pauline; Cressler, John D.
2017-01-01
The single-event transient (SET) response of two different silicon-germanium (SiGe) X-band (8-12 GHz) low noise amplifier (LNA) topologies is fully investigated in this paper. The two LNAs were designed and implemented in a 130 nm SiGe HBT BiCMOS process technology. Two-photon absorption (TPA) laser pulses were utilized to induce transients within various devices in these LNAs. Impulse response theory is identified as a useful tool for predicting the settling behavior of the LNAs subjected to heavy ion strikes. Comprehensive device- and circuit-level modeling and simulations were performed to accurately simulate the behavior of the circuits under ion strikes, and the simulations agree well with the TPA measurements. The simulation, modeling, and analysis presented in this paper can be applied to other circuit topologies for SET modeling and prediction.
Thermomechanical Stresses Analysis of a Single Event Burnout Process
NASA Astrophysics Data System (ADS)
Tais, Carlos E.; Romero, Eduardo; Demarco, Gustavo L.
2009-06-01
This work analyzes the thermal and mechanical effects arising in a power Diffusion Metal Oxide Semiconductor (DMOS) device during a Single Event Burnout (SEB) process. For studying these effects we propose a more detailed simulation structure than those previously used by other authors, solving the mathematical models by means of the Finite Element Method. We use a cylindrical heat generation region with powers of 5 W, 10 W, 50 W and 100 W to emulate the thermal phenomena occurring during SEB processes, avoiding the complexity of the mathematical treatment of the ion-semiconductor interaction.
Hierarchical CAD Tools for Radiation Hardened Mixed Signal Electronic Circuits
2005-01-28
From the report's list of figures: Schematic of Analog and Digital Components; Dose Rate Syntax; Single Event Effects (SEE) Syntax; Harmony-AMS simulation of a Digital Phase Locked Loop; SEE results from DPLL Simulation; Published results used for validation. Combining the analog and digital elements onto a single chip has several advantages, but also creates unique challenges.
Optimization of Single-Sided Charge-Sharing Strip Detectors
NASA Technical Reports Server (NTRS)
Hamel, L.A.; Benoit, M.; Donmez, B.; Macri, J. R.; McConnell, M. L.; Ryan, J. M.; Narita, T.
2006-01-01
Simulations of the charge sharing properties of single-sided CZT strip detectors with small anode pads are presented. The effects of initial event size, carrier repulsion, diffusion, drift, trapping and detrapping are considered. These simulations indicate that such a detector with a 150 μm pitch will provide good charge sharing between neighboring pads. This is supported by a comparison of simulations and measurements for a similar detector with a coarser pitch of 225 μm that could not provide sufficient sharing. The performance of such a detector used as a gamma-ray imager is discussed.
Single Event Effects mitigation with TMRG tool
NASA Astrophysics Data System (ADS)
Kulis, S.
2017-01-01
Single Event Effects (SEE) are a major concern for integrated circuits exposed to radiation. Several techniques have been proposed to protect circuits against radiation-induced upsets; among them, the Triple Modular Redundancy (TMR) technique is one of the most popular. The purpose of the Triple Modular Redundancy Generator (TMRG) tool is to automate the process of triplicating digital circuits, freeing the designer from introducing the TMR code manually at the implementation stage. It helps ensure that triplicated logic is maintained through the design process. Finally, the tool streamlines the process of introducing SEEs into gate-level simulations for final verification.
Stochastic summation of empirical Green's functions
Wennerberg, Leif
1990-01-01
Two simple strategies are presented that use random delay times for repeatedly summing the record of a relatively small earthquake to simulate the effects of a larger earthquake. The simulations do not assume any fault plane geometry or rupture dynamics, but rely only on the ω−2 spectral model of an earthquake source and elementary notions of source complexity. The strategies simulate ground motions for all frequencies within the bandwidth of the record of the event used as a summand. The first strategy, which introduces the basic ideas, is a single-stage procedure that consists of simply adding many small events with random time delays. The probability distribution for delays has the property that its amplitude spectrum is determined by the ratio of ω−2 spectra, and its phase spectrum is identically zero. A simple expression is given for the computation of this zero-phase scaling distribution. The moment rate function resulting from the single-stage simulation is quite simple and hence is probably not realistic for high-frequency (>1 Hz) ground motion of events larger than ML ∼ 4.5 to 5. The second strategy is a two-stage summation that simulates source complexity with a few random subevent delays determined using the zero-phase scaling distribution, and then clusters energy around these delays to get an ω−2 spectrum for the sum. Thus, the two-stage strategy allows simulations of complex events of any size for which the ω−2 spectral model applies. Interestingly, a single-stage simulation with too few ω−2 records to get a good fit to an ω−2 large-event target spectrum yields a record whose spectral asymptotes are consistent with the ω−2 model, but that includes a region in its spectrum, between the corner frequencies of the larger and smaller events, reasonably approximated by a power-law trend.
This spectral feature has also been discussed as reflecting the process of partial stress release (Brune, 1970), an asperity failure (Boatwright, 1984), or the breakdown of ω−2 scaling due to rupture significantly longer than the width of the seismogenic zone (Joyner, 1984).
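The frequency-dependent coherence that underlies these summation strategies can be sketched numerically. This toy uses uniform random delays rather than the paper's zero-phase scaling distribution, and a unit small-event spectrum:

```python
import cmath
import math
import random

def summed_spectrum(small_spec, delays, freqs):
    """Amplitude spectrum of a sum of delayed copies of one record.

    Delaying a record by t multiplies its spectrum by exp(-i*2*pi*f*t),
    so the copies add coherently where 2*pi*f*t stays small and
    incoherently (random-walk-like) at high frequency."""
    out = []
    for f in freqs:
        s = sum(cmath.exp(-2j * math.pi * f * t) for t in delays)
        out.append(abs(small_spec * s))
    return out

rng = random.Random(7)
n = 400
delays = [rng.uniform(0.0, 2.0) for _ in range(n)]   # seconds, toy choice
low_f, high_f = summed_spectrum(1.0, delays, [0.01, 50.0])
# low_f grows like n (coherent sum); high_f only like sqrt(n).
```

This is exactly why the delay distribution's amplitude spectrum must be shaped as the ratio of the large- and small-event ω−2 spectra: it controls how much coherence survives at each frequency.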
Experimental and simulation studies of neutron-induced single-event burnout in SiC power diodes
NASA Astrophysics Data System (ADS)
Shoji, Tomoyuki; Nishida, Shuichi; Hamada, Kimimori; Tadano, Hiroshi
2014-01-01
Neutron-induced single-event burnouts (SEBs) of silicon carbide (SiC) power diodes have been investigated by white neutron irradiation experiments and transient device simulations. It was confirmed that a rapid increase in lattice temperature leads to formation of crown-shaped aluminum and cracks inside the device, owing to expansion stress, when the maximum lattice temperature reaches the sublimation temperature. SEB device simulation indicated that the peak lattice temperature is located in the vicinity of the n-/n+ interface and the anode contact, and that these positions correspond to a hammock-like electric field distribution caused by the space charge effect. Moreover, the locations of the simulated peak lattice temperature agree closely with the positions of the observed destruction traces. Furthermore, using a thermal diffusion equation, it was theoretically demonstrated that the temperature-rise period of a SiC power device is two orders of magnitude shorter than that of a Si power device.
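The two-orders-of-magnitude claim is consistent with a back-of-envelope diffusion-time estimate, τ = L²/α with thermal diffusivity α = k/(ρc). The sketch below uses approximate textbook material properties and the roughly 10× thinner drift region a SiC device needs for the same blocking voltage; the numbers are illustrative, not the paper's:

```python
def diffusion_time(length_m, conductivity, density, specific_heat):
    """Characteristic heat-diffusion time tau = L^2 / alpha, where
    alpha = k / (rho * c) is the thermal diffusivity (m^2/s)."""
    alpha = conductivity / (density * specific_heat)
    return length_m ** 2 / alpha

# Approximate room-temperature properties (W/m/K, kg/m^3, J/kg/K) and
# representative drift-region thicknesses: SiC's ~10x higher critical
# field permits a ~10x thinner drift layer at the same blocking voltage.
tau_si = diffusion_time(100e-6, 150.0, 2330.0, 700.0)   # Si, 100 um
tau_sic = diffusion_time(10e-6, 370.0, 3210.0, 690.0)   # SiC, 10 um
ratio = tau_si / tau_sic   # roughly two orders of magnitude
```

The geometric factor (L²) dominates: material diffusivities differ only by a factor of a few, but the thinner SiC structure heats and equilibrates far faster.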
NASA Technical Reports Server (NTRS)
Tasca, D. M.
1981-01-01
Single event upset phenomena are discussed, taking into account cosmic ray induced errors in IIL microprocessors and logic devices, single event upsets in NMOS microprocessors, a prediction model for bipolar RAMs in a high energy ion/proton environment, the search for neutron-induced hard errors in VLSI structures, soft errors due to protons in the radiation belt, and the use of an ion microbeam to study single event upsets in microcircuits. Basic mechanisms in materials and devices are examined, giving attention to gamma-induced noise in CCDs, the annealing of MOS capacitors, an analysis of photobleaching techniques for the radiation hardening of fiber optic data links, a hardened field insulator, the simulation of radiation damage in solids, and the manufacturing of radiation resistant optical fibers. Energy deposition and dosimetry are considered along with SGEMP/IEMP, radiation effects in devices, space radiation effects and spacecraft charging, EMP/SREMP, and aspects of fabrication, testing, and hardness assurance.
1986-09-30
From the report's tables: I. SA32-40 Single Event Upset Test, 1140-MeV Krypton, 9/18/84; II. CRUP Simulation. The cosmic ray interaction analyses described in the remainder of this report were calculated using the CRUP computer code modified for funneling. The CRUP code requires, as inputs, the size of a depletion region specified as a rectangular parallelepiped with dimensions a x b x c, and the effective funnel
Physical Processes and Applications of the Monte Carlo Radiative Energy Deposition (MRED) Code
NASA Astrophysics Data System (ADS)
Reed, Robert A.; Weller, Robert A.; Mendenhall, Marcus H.; Fleetwood, Daniel M.; Warren, Kevin M.; Sierawski, Brian D.; King, Michael P.; Schrimpf, Ronald D.; Auden, Elizabeth C.
2015-08-01
MRED is a Python-language scriptable computer application that simulates radiation transport. It is the computational engine for the on-line tool CRÈME-MC. MRED is based on C++ code from Geant4 with additional Fortran components to simulate electron transport and nuclear reactions with high precision. We provide a detailed description of the structure of MRED and the implementation of the simulation of physical processes used to simulate radiation effects in electronic devices and circuits. Extensive discussion and references are provided that illustrate the validation of models used to implement specific simulations of relevant physical processes. Several applications of MRED are summarized that demonstrate its ability to predict and describe basic physical phenomena associated with irradiation of electronic circuits and devices. These include effects from single particle radiation (including both direct ionization and indirect ionization effects), dose enhancement effects, and displacement damage effects. MRED simulations have also helped to identify new single event upset mechanisms not previously observed by experiment, but since confirmed, including upsets due to muons and energetic electrons.
Zhang, Wei; Zhao, Fei; Hoffmann, Ary A.; Ma, Chun-Sen
2013-01-01
Extremely hot events (usually involving a few hours at extreme high temperatures in summer) are expected to increase in frequency in temperate regions under global warming. The impact of these events is generally overlooked in insect population prediction, since they are unlikely to cause widespread mortality; however, reproduction may still be affected. In this study, we examined such stress effects in the diamondback moth, Plutella xylostella. We simulated a single extreme hot day (maximum of 40°C lasting for 3, 4 or 5 h), as increasingly experienced under field conditions. This event had no detrimental effects on immediate mortality, copulation duration, mating success, longevity or lifetime fecundity, but stressed females produced 21% fewer hatched eggs (after 3 or 4 h of exposure) because of a decline in the number and hatching success of eggs laid on the first two days. These negative effects on reproduction were no longer evident in the following days. Male heat exposure led to a similar but smaller effect on fertile egg production, and exposure extended the pre-mating period in both sexes. Our results indicate that a single hot day can have detrimental effects on reproduction, particularly through maternal effects on egg hatching, and thereby influence the population dynamics of diamondback moth. PMID:24116081
Statistical Properties of SEE Rate Calculation in the Limits of Large and Small Event Counts
NASA Technical Reports Server (NTRS)
Ladbury, Ray
2007-01-01
This viewgraph presentation reviews the statistical properties of Single Event Effects (SEE) rate calculations. The goal of an SEE rate calculation is to bound the SEE rate; the question is by how much. The presentation covers: (1) understanding errors on SEE cross sections, (2) methodology: maximum likelihood and confidence contours, (3) tests with simulated data, and (4) applications.
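Bounding a rate from a small event count is a Poisson confidence-limit problem. A self-contained sketch (not the presentation's own code) that recovers the familiar "rule of three" for zero observed events:

```python
import math

def poisson_cdf(n, lam):
    """P(X <= n) for X ~ Poisson(lam)."""
    return math.exp(-lam) * sum(lam ** k / math.factorial(k)
                                for k in range(n + 1))

def upper_limit(n_observed, cl=0.95, search_max=1000.0):
    """One-sided upper confidence limit on the Poisson mean given
    n_observed events, found by bisection on the CDF (which decreases
    monotonically in the mean)."""
    alpha = 1.0 - cl
    lo, hi = 0.0, search_max
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if poisson_cdf(n_observed, mid) > alpha:
            lo = mid            # mean still too small to be excluded
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Zero observed upsets still implies a nonzero rate bound: about
# 3.0 mean events at 95% confidence (the "rule of three").
limit_zero = upper_limit(0)
```

Dividing such a limit by the accumulated fluence (or mission time) converts the event-count bound into a cross-section or rate bound.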
Genetic consequences of sequential founder events by an island-colonizing bird.
Clegg, Sonya M; Degnan, Sandie M; Kikkawa, Jiro; Moritz, Craig; Estoup, Arnaud; Owens, Ian P F
2002-06-11
The importance of founder events in promoting evolutionary changes on islands has been a subject of long-running controversy. Resolution of this debate has been hindered by a lack of empirical evidence from naturally founded island populations. Here we undertake a genetic analysis of a series of historically documented, natural colonization events by the silvereye species-complex (Zosterops lateralis), a group used to illustrate the process of island colonization in the original founder effect model. Our results indicate that single founder events do not affect levels of heterozygosity or allelic diversity, nor do they result in immediate genetic differentiation between populations. Instead, four to five successive founder events are required before indices of diversity and divergence approach that seen in evolutionarily old forms. A Bayesian analysis based on computer simulation allows inferences to be made on the number of effective founders and indicates that founder effects are weak because island populations are established from relatively large flocks. Indeed, statistical support for a founder event model was not significantly higher than for a gradual-drift model for all recently colonized islands. Taken together, these results suggest that single colonization events in this species complex are rarely accompanied by severe founder effects, and multiple founder events and/or long-term genetic drift have been of greater consequence for neutral genetic diversity.
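The expected erosion of heterozygosity under sequential founder events follows the classic single-generation bottleneck recursion H' = H(1 − 1/(2N)). A sketch with invented numbers showing why large founding flocks make a single event weak:

```python
def heterozygosity_after_founders(h0, founders_per_event, n_events):
    """Expected heterozygosity after a series of founder events, each
    modeled as one generation at effective size N via the standard
    drift recursion H' = H * (1 - 1/(2N)).  Toy model: ignores growth
    after founding and ongoing drift between colonizations."""
    h = h0
    for _ in range(n_events):
        h *= 1.0 - 1.0 / (2.0 * founders_per_event)
    return h

# With large founding flocks (say N = 50 effective founders, an
# invented figure), even five successive colonizations barely erode
# diversity, while one event is almost undetectable.
h_one = heterozygosity_after_founders(0.60, 50, 1)
h_five = heterozygosity_after_founders(0.60, 50, 5)
```

The qualitative pattern matches the abstract: detectable reductions require several stacked founder events, not a single colonization.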
Radiation-Induced Transient Effects in Near Infrared Focal Plane Arrays
NASA Technical Reports Server (NTRS)
Reed, Robert A.; Pickel, J.; Marshall, P.; Waczynski, A.; McMurray, R.; Gee, G.; Polidan, E.; Johnson, S.; McKeivey, M.; Ennico, K.;
2004-01-01
This viewgraph presentation describes a test to simulate the transient effects of cosmic ray impacts on near infrared focal plane arrays. The objectives of the test are to: 1) characterize proton single events as a function of energy and angle of incidence; 2) measure charge spread (crosstalk) to adjacent pixels; and 3) assess transient recovery time.
Interpreting Space-Mission LET Requirements for SEGR in Power MOSFETs
NASA Technical Reports Server (NTRS)
Lauenstein, J. M.; Ladbury, R. L.; Batchelor, D. A.; Goldsman, N.; Kim, H. S.; Phan, A. M.
2010-01-01
A Technology Computer Aided Design (TCAD) simulation-based method is developed to evaluate whether derating of high-energy heavy-ion accelerator test data bounds the risk of single-event gate rupture (SEGR) from much higher-energy on-orbit ions for a mission linear energy transfer (LET) requirement. It is shown that a typical derating factor of 0.75 applied to a single-event effect (SEE) response curve defined by high-energy accelerator SEGR test data provides reasonable on-orbit hardness assurance, although for one high-voltage power MOSFET it did not bound the risk of failure.
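The derating step itself is simple arithmetic applied to each point of the measured response curve; a minimal sketch with invented voltage values (only the 0.75 factor comes from the abstract):

```python
def derated_safe_vds(vds_fail_at_let, derating=0.75):
    """Hardness-assurance derating: allow only a fraction of the
    drain-source voltage at which SEGR was observed in accelerator
    testing at the mission LET.  The 200 V failure point below is a
    hypothetical example, not a measured device value."""
    return derating * vds_fail_at_let

safe_vds = derated_safe_vds(200.0)   # part observed to fail at 200 V
```

The paper's contribution is checking, via TCAD, whether this fixed margin still bounds failure when on-orbit ion energies far exceed what accelerators can deliver.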
NASA Technical Reports Server (NTRS)
Perez, Reinaldo J.
2011-01-01
Single event transients (SETs) in analog and digital electronics, caused by highly energetic nuclear particles in space, can disrupt the functionality and performance of electronics in space vehicles, either temporarily or sometimes permanently. This work first provides some insights into the modeling of SETs in electronic circuits for use in SPICE-like simulators. It then presents methodologies, one of which was developed by this author, for the assessment of SETs at different levels of integration in electronics, from the circuit level to the subsystem level.
NASA Technical Reports Server (NTRS)
Stassinopoulos, E. G.; Brucker, G. J.
1992-01-01
This paper addresses the issues involved in radiation testing of devices and subsystems to obtain the data that are required to predict the performance and survivability of satellite systems for extended missions in space. The problems associated with space environmental simulations, or the lack thereof, in experiments intended to produce information to describe the degradation and behavior of parts and systems are discussed. Several types of radiation effects in semiconductor components are presented, as for example: ionization dose effects, heavy ion and proton induced Single Event Upsets (SEUs), and Single Event Transient Upsets (SETUs). Examples and illustrations of data relating to these ground testing issues are provided. The primary objective of this presentation is to alert the reader to the shortcomings, pitfalls, variabilities, and uncertainties in acquiring information to logically design electronic subsystems for use in satellites or space stations with long mission lifetimes, and to point out the weaknesses and deficiencies in the methods and procedures by which that information is obtained.
NASA Technical Reports Server (NTRS)
Howe, Christina L.; Weller, Robert A.; Reed, Robert A.; Sierawski, Brian D.; Marshall, Paul W.; Marshall, Cheryl J.; Mendenhall, Marcus H.; Schrimpf, Ronald D.
2007-01-01
The proton-induced charge deposition in a well characterized silicon P-i-N focal plane array is analyzed with Monte Carlo based simulations. These simulations include all physical processes, together with pile-up, to accurately describe the experimental data. Simulation results reveal important high-energy events not easily detected through experiment due to low statistics. The effects of each physical mechanism on the device response are shown for a single proton energy as well as a full proton space flux.
Discretely Integrated Condition Event (DICE) Simulation for Pharmacoeconomics.
Caro, J Jaime
2016-07-01
Several decision-analytic modeling techniques are in use for pharmacoeconomic analyses. Discretely integrated condition event (DICE) simulation is proposed as a unifying approach that has been deliberately designed to meet the modeling requirements in a straightforward, transparent way, without forcing assumptions (e.g., only one transition per cycle) or unnecessary complexity. At the core of DICE are conditions that represent aspects that persist over time. They have levels that can change, and many may coexist. Events reflect instantaneous occurrences that may modify some conditions or the timing of other events. The conditions are discretely integrated with events by updating their levels at those times. Profiles of determinant values allow for differences among patients in the predictors of the disease course. Any number of valuations (e.g., utility, cost, willingness-to-pay) of conditions and events can be applied concurrently in a single run. A DICE model is conveniently specified in a series of tables that follow a consistent format, and the simulation can be implemented fully in MS Excel, facilitating review and validation. DICE incorporates both state-transition (Markov) models and non-resource-constrained discrete event simulation in a single formulation; it can be executed as a cohort model or a microsimulation, and either deterministically or stochastically.
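A minimal DICE-style loop, with conditions as a dictionary of levels and events as scheduled handler calls, can be sketched as follows (names and numbers are invented for illustration, not taken from the paper):

```python
def run_dice(horizon, conditions, event_times, handlers):
    """Advance to each scheduled event in time order; the handler
    updates condition levels and may reschedule future events."""
    while event_times:
        name, t = min(event_times.items(), key=lambda kv: kv[1])
        if t > horizon:
            break
        del event_times[name]
        handlers[name](t, conditions, event_times)
    return conditions

def progression(t, cond, sched):
    """Hypothetical disease-progression event: raise severity, accrue
    cost valued at the new level, and reschedule until severity 3."""
    cond["severity"] += 1
    cond["cost"] += 1000.0 * cond["severity"]
    if cond["severity"] < 3:
        sched["progression"] = t + 1.0

state = run_dice(
    horizon=10.0,
    conditions={"severity": 0, "cost": 0.0},
    event_times={"progression": 1.0},
    handlers={"progression": progression},
)
```

Each row of a DICE specification table maps naturally onto one handler: which conditions it reads, which levels it updates, and which event times it resets.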
SRAM Based Re-programmable FPGA for Space Applications
NASA Technical Reports Server (NTRS)
Wang, J. J.; Sun, J. S.; Cronquist, B. E.; McCollum, J. L.; Speers, T. M.; Plants, W. C.; Katz, R. B.
1999-01-01
An SRAM (static random access memory)-based reprogrammable FPGA (field programmable gate array) is investigated for space applications. A new commercial prototype, named the RS family, was used as an example for the investigation. The device is fabricated in a 0.25 micrometer CMOS technology. Its architecture is reviewed to provide a better understanding of the impact of single event upset (SEU) on the device during operation. The SEU effect of the different memories available on the device is evaluated. Heavy ion test data and SPICE simulations are used together to extract the threshold LET (linear energy transfer). Combined with the saturation cross-section measurement from the layout, a rate prediction is done for each memory type. SEU in the configuration SRAM is identified as the dominant failure mode and is discussed in detail. The single event transient error in combinational logic is also investigated and simulated by SPICE. SEU mitigation by hardening the memories and employing EDAC (error detection and correction) at the device level is presented. For the configuration SRAM (CSRAM) cell, the trade-off between resistor de-coupling and redundancy hardening techniques is investigated, with interesting results. Preliminary heavy ion test data show no sign of SEL (single event latch-up). With regard to ionizing radiation effects, the measured increase in static leakage current (static I(sub CC)) indicates a device tolerance of approximately 50 krad(Si).
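Threshold LET and saturation cross section are typically combined through a Weibull fit of cross section versus LET before making a rate prediction. A hedged sketch of that standard fit, with invented parameters rather than the paper's measurements:

```python
import math

def weibull_cross_section(let, l0, w, s, sigma_sat):
    """Weibull form commonly used for SEU cross section vs LET:
    zero below the threshold LET l0, rising to sigma_sat (cm^2).
    l0, w, s, sigma_sat here are illustrative fit parameters."""
    if let <= l0:
        return 0.0
    return sigma_sat * (1.0 - math.exp(-((let - l0) / w) ** s))

# Cross section at sub-threshold, near-threshold, and saturating LETs
# (LET in MeV*cm^2/mg; all numbers hypothetical):
sigma = [weibull_cross_section(l, l0=2.0, w=10.0, s=1.5, sigma_sat=1e-5)
         for l in (1.0, 5.0, 40.0)]
```

Folding such a curve against the mission's LET spectrum (as tools like CRÈME do) then yields the per-memory-type upset rate.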
Charge deposition model for investigating SE-microdose effect in trench power MOSFETs
NASA Astrophysics Data System (ADS)
Xin, Wan; Weisong, Zhou; Daoguang, Liu; Hanliang, Bo; Jun, Xu
2015-05-01
It was demonstrated that heavy ions can induce large current-voltage (I-V) characteristic shifts in commercial trench power MOSFETs, termed the single event microdose effect (SE-microdose effect). A model is presented to describe this effect. The model calculates the charge deposition by a single heavy ion hitting the oxide and the subsequent charge transport under an electric field. Holes deposited at the SiO2/Si interface by a Xe ion are calculated using this model. The calculated results were then used in Sentaurus TCAD software to simulate a trench power MOSFET's I-V curve shift after a Xe ion strike. The simulation results are consistent with the related experimental data. Finally, several factors that affect the SE-microdose effect in trench power MOSFETs are investigated using this model.
Butterworth, A; Ferrari, A; Tsoulou, E; Vlachoudis, V; Wijnands, T
2005-01-01
Monte Carlo simulations have been performed to estimate the radiation damage induced by high-energy hadrons in the digital electronics of the RF low-level systems in the LHC cavities. High-energy hadrons are generated when the proton beams interact with the residual gas. The contributions from various elements (vacuum chambers, cryogenic cavities, wideband pickups and cryomodule beam tubes) have been considered individually, with each contribution depending on the gas composition and density. The probability of displacement damage and single event effects (mainly single event upsets) is derived for the LHC start-up conditions.
Li, Zhongwu; Huang, Jinquan; Zeng, Guangming; Nie, Xiaodong; Ma, Wenming; Yu, Wei; Guo, Wang; Zhang, Jiachao
2013-01-01
The effects of water erosion (both long-term historical erosion and single erosion events) on soil properties and productivity in different farming systems were investigated. A typical sloping cropland with homogeneous soil properties was established in 2009 and then protected from all external disturbances except natural water erosion. In 2012, this cropland was divided into three equally sized blocks, and three treatments combining simulated rainfall intensity and farming method were applied: (1) high rainfall intensity (1.5-1.7 mm min⁻¹) with no-tillage operation; (2) low rainfall intensity (0.5-0.7 mm min⁻¹) with no-tillage operation; and (3) low rainfall intensity with tillage operation. Each block was divided into five equally sized subplots along the slope to quantitatively characterize the three-year effects of historical erosion. Redundancy analysis showed that long-term historical erosion accounted for most of the variation in soil productivity in the no-tillage and low-rainfall-intensity systems. Simulated rainfall intensity did not have a significant effect on soil productivity in the no-tillage systems. By contrast, different farming operations induced a statistically significant difference in soil productivity at the same single-event erosion intensity. Soil organic carbon (SOC) was the major limiting variable influencing soil productivity, and most of the variation in productivity explained by long-term historical erosion was shared with SOC. SOC, total nitrogen, and total phosphorus emerged as regressors of soil productivity under the tillage operation. In general, this study provides strong evidence that a single erosion event, although not dominant relative to long-term historical erosion, can also impose significant constraints on soil productivity when combined with tillage operation.
Our study demonstrated that effective management of the organic carbon pool should be the preferred option for maintaining soil productivity in the subtropical red-soil hilly region. PMID:24147090
Sanders, Duncan A; Swift, Michael R; Bowley, R M; King, P J
2004-11-12
We present event-driven simulation results for single and multiple intruders in a vertically vibrated granular bed. Under our vibratory conditions, the mean vertical position of a single intruder is governed primarily by a buoyancy-like effect. Multiple intruders also exhibit buoyancy-governed behavior; however, multiple neutrally buoyant intruders cluster spontaneously and undergo horizontal segregation. These effects can be understood by considering the dynamics of two neutrally buoyant intruders. We have measured an attractive force between such intruders, which has a range of five intruder diameters, and we provide a mechanistic explanation for the origin of this force.
Jung, Seungwon; Cha, Misun; Park, Jiyong; Jeong, Namjo; Kim, Gunn; Park, Changwon; Ihm, Jisoon; Lee, Junghoon
2010-08-18
It is known that single-stranded DNA wraps around a single-walled carbon nanotube (SWNT) by π-stacking. In this paper it is demonstrated that such DNA is dissociated from the SWNT by Watson-Crick base-pairing with a complementary sequence. Measurement of field effect transistor characteristics indicates a shift in the electrical properties as a result of this "unwrapping" event. We further confirm the suggested process through Raman spectroscopy and gel electrophoresis. The experimental results are interpreted in terms of atomistic mechanisms using molecular dynamics simulations and binding energy analyses.
PDSOI and Radiation Effects: An Overview
NASA Technical Reports Server (NTRS)
Forgione, Joshua B.
2005-01-01
Bulk silicon substrates are a common characteristic of nearly all commercial Complementary Metal-Oxide-Semiconductor (CMOS) integrated circuits. These devices operate well on Earth but fare less well in the space environment. An alternative to bulk CMOS is Silicon-On-Insulator (SOI), in which a dielectric isolates the device layer from the substrate. SOI behavior in the space environment has certain inherent advantages over bulk, a primary factor in its long-standing appeal to space-flight IC designers. The discussion investigates the behavior of the Partially-Depleted SOI (PDSOI) device with respect to some of the more common space radiation effects: Total Ionizing Dose (TID), Single-Event Upsets (SEUs), and Single-Event Latchup (SEL). Test and simulation results from the literature, together with comparisons to bulk and epitaxial devices, reinforce the PDSOI radiation characteristics presented.
Effects of electronic excitation on cascade dynamics in nickel–iron and nickel–palladium systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zarkadoula, Eva; Samolyuk, German; Weber, William J.
2017-06-10
Using molecular dynamics simulations and the two-temperature model, we compare the surviving damage from single-ion irradiation events in nickel-based alloys for cascades simulated with and without the effects of electronic excitations. We find that including the electronic effects impacts both the amount of resulting damage and the production of isolated defects. Irradiation of nickel-palladium systems results in larger numbers of defects than in nickel-iron systems, with similar numbers of isolated defects. We additionally investigate the mass effect on the two-temperature model in molecular dynamics simulations of cascades.
Bambini, Deborah; Emery, Matthew; de Voest, Margaret; Meny, Lisa; Shoemaker, Michael J.
2016-01-01
There are significant limitations among the few prior studies that have examined the development and implementation of interprofessional education (IPE) experiences accommodating a high volume of students from several disciplines and different institutions. The present study addressed these gaps by seeking to determine the extent to which a single large, inter-institutional IPE simulation event improves student perceptions of the importance and relevance of IPE and of simulation as a learning modality, whether students' perceptions differ among disciplines, and whether the results are reproducible. A total of 290 medical, nursing, pharmacy, and physical therapy students participated in one of two large, inter-institutional IPE simulation events. Measurements included student perceptions of the simulation experience using the Attitude Towards Teamwork in Training Undergoing Designed Educational Simulation (ATTITUDES) Questionnaire and open-ended questions related to teamwork and communication. Results demonstrated a statistically significant improvement across all ATTITUDES subscales, while time management, role confusion, collaboration, and mutual support emerged as significant themes. The present study indicates that a single IPE simulation event can reproducibly result in significant and educationally meaningful improvements in student perceptions of teamwork, IPE, and simulation as a learning modality. PMID:28970407
Assessing manure management strategies through small-plot research and whole-farm modeling
Garcia, A.M.; Veith, T.L.; Kleinman, P.J.A.; Rotz, C.A.; Saporito, L.S.
2008-01-01
Plot-scale experimentation can provide valuable insight into the effects of manure management practices on phosphorus (P) runoff, but whole-farm evaluation is needed for a complete assessment of potential trade-offs. Artificially applied rainfall experiments on small field plots, together with event-based and long-term simulation modeling, were used to compare P loss in runoff for two dairy manure application methods (surface application with and without incorporation by tillage) on contrasting Pennsylvania soils previously under no-till management. Results of single-event rainfall experiments indicated that average dissolved reactive P losses in runoff from manured plots decreased by up to 90% with manure incorporation, while total P losses did not change significantly. Longer-term whole-farm simulation modeling indicated that average dissolved reactive P losses would decrease by 8% with manure incorporation, while total P losses would increase by 77% due to greater erosion from fields previously under no-till. The differences between the two methods of inference point to the need for caution in extrapolating research findings. Single-event rainfall experiments conducted shortly after manure application capture incidental transfers of dissolved P from manure to runoff, resulting in greater losses of dissolved reactive P; however, this transfer diminishes with time. Over the annual time frame simulated by whole-farm modeling, erosion processes become more important to runoff P losses. The results of this study highlight the need to consider the potential for increased erosion and total P losses caused by soil disturbance during incorporation, and they emphasize the ability of modeling to estimate management practice effectiveness at larger scales when experimental data are not available.
NASA Astrophysics Data System (ADS)
Shao, W.; Bogaard, T.; Bakker, M.; Berti, M.; Savenije, H. H. G.
2016-12-01
The fast pore water pressure response to rain events is an important triggering factor for slope instability. The fast pressure response may be caused by preferential flow that bypasses the soil matrix. Currently, most hydro-mechanical models simulate pore water pressure using a single-permeability model, which cannot quantify the effects of preferential flow on pressure propagation and landslide triggering. Previous studies showed that a model based on the linear-diffusion equation can simulate the fast pressure propagation in near-saturated landslides such as the Rocca Pitigliana landslide. In such a model, however, the diffusion coefficient depends on the degree of saturation, which makes the model difficult to use for prediction. In this study, the influence of preferential flow on pressure propagation and slope stability is investigated with a 1D dual-permeability model coupled with an infinite-slope stability approach. The dual-permeability model uses two modified Darcy-Richards equations to simultaneously simulate matrix flow and preferential flow in hillslopes. The simulated pressure head is used in an infinite-slope stability analysis to identify the influence of preferential flow on the fast pressure response and landslide triggering. The dual-permeability model simulates the height and arrival time of the pressure peak reasonably well. Its performance is as good as or better than that of the linear-diffusion model, even though the dual-permeability model is calibrated on only two single-pulse rain events, while the linear-diffusion model is calibrated for each rain event separately. In conclusion, the 1D dual-permeability model is a promising tool for landslides under similar conditions.
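The infinite-slope stability step can be sketched with the standard limit-equilibrium factor-of-safety formula, into which the simulated pore pressure enters directly. The soil parameters in the example are hypothetical, not from the paper.

```python
import math

def factor_of_safety(c_eff, phi_deg, gamma, z, beta_deg, u):
    """Infinite-slope factor of safety for a planar slip surface at
    depth z (standard limit-equilibrium form):

        FS = (c' + (gamma*z*cos^2(beta) - u) * tan(phi'))
             / (gamma*z*sin(beta)*cos(beta))

    c_eff: effective cohesion (kPa), phi_deg: friction angle (deg),
    gamma: soil unit weight (kN/m^3), z: slip-surface depth (m),
    beta_deg: slope angle (deg), u: pore water pressure (kPa)."""
    beta = math.radians(beta_deg)
    phi = math.radians(phi_deg)
    normal_eff = gamma * z * math.cos(beta) ** 2 - u
    return (c_eff + normal_eff * math.tan(phi)) / (
        gamma * z * math.sin(beta) * math.cos(beta))

# Hypothetical parameters: a 20 kPa pressure pulse reaching the slip
# surface drives the factor of safety below unity.
fs_dry = factor_of_safety(5.0, 30.0, 19.0, 2.0, 35.0, 0.0)
fs_wet = factor_of_safety(5.0, 30.0, 19.0, 2.0, 35.0, 20.0)
```

This is why the timing and height of the simulated pressure peak matter: the model's predicted failure time is the moment the pressure pulse pushes FS below 1.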
Song, Rui; Kosorok, Michael R.; Cai, Jianwen
2009-01-01
Summary: Recurrent events data are frequently encountered in clinical trials. This article develops robust covariate-adjusted log-rank statistics for recurrent events data with arbitrary numbers of events under independent censoring, together with the corresponding sample size formula. The proposed log-rank tests are robust with respect to different data-generating processes and are adjusted for predictive covariates. The statistic reduces to the Kong and Slud (1997, Biometrika 84, 847-862) setting in the case of a single event. The sample size formula is derived from the asymptotic normality of the covariate-adjusted log-rank statistics under certain local alternatives and a working model for baseline covariates in the recurrent event context. When the effect size is small and the baseline covariates do not contain significant information about event times, the formula reduces to the same form as that of Schoenfeld (1983, Biometrics 39, 499-503) for the cases of a single event or independent event times within a subject. We carry out simulations to study the control of type I error and to compare the power of several methods in finite samples. The proposed sample size formula is illustrated using data from an rhDNase study. PMID:18162107
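The Schoenfeld (1983) form to which the sample size formula reduces can be sketched directly. The parameter choices in the example are illustrative, not taken from the rhDNase study.

```python
import math
from statistics import NormalDist

def schoenfeld_events(hazard_ratio, alpha=0.05, power=0.8, p_treat=0.5):
    """Schoenfeld (1983) approximation to the total number of events
    needed for a two-sided log-rank test:

        d = (z_{1-alpha/2} + z_{power})**2 / (p0 * p1 * ln(HR)**2)

    where p0 and p1 are the allocation proportions of the two arms."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    p1 = p_treat
    p0 = 1 - p_treat
    return (z_a + z_b) ** 2 / (p0 * p1 * math.log(hazard_ratio) ** 2)

# e.g. detecting a hazard ratio of 2 with 80% power at alpha = 0.05,
# equal allocation: about 66 events are required.
d = schoenfeld_events(2.0)
```

The covariate-adjusted formula of the article generalizes this expression; informative baseline covariates reduce the required number of events relative to this unadjusted count.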
Convolutional neural networks applied to neutrino events in a liquid argon time projection chamber
NASA Astrophysics Data System (ADS)
Acciarri, R.; Adams, C.; An, R.; Asaadi, J.; Auger, M.; Bagby, L.; Baller, B.; Barr, G.; Bass, M.; Bay, F.; Bishai, M.; Blake, A.; Bolton, T.; Bugel, L.; Camilleri, L.; Caratelli, D.; Carls, B.; Castillo Fernandez, R.; Cavanna, F.; Chen, H.; Church, E.; Cianci, D.; Collin, G. H.; Conrad, J. M.; Convery, M.; Crespo-Anadón, J. I.; Del Tutto, M.; Devitt, D.; Dytman, S.; Eberly, B.; Ereditato, A.; Escudero Sanchez, L.; Esquivel, J.; Fleming, B. T.; Foreman, W.; Furmanski, A. P.; Garvey, G. T.; Genty, V.; Goeldi, D.; Gollapinni, S.; Graf, N.; Gramellini, E.; Greenlee, H.; Grosso, R.; Guenette, R.; Hackenburg, A.; Hamilton, P.; Hen, O.; Hewes, J.; Hill, C.; Ho, J.; Horton-Smith, G.; James, C.; de Vries, J. Jan; Jen, C.-M.; Jiang, L.; Johnson, R. A.; Jones, B. J. P.; Joshi, J.; Jostlein, H.; Kaleko, D.; Karagiorgi, G.; Ketchum, W.; Kirby, B.; Kirby, M.; Kobilarcik, T.; Kreslo, I.; Laube, A.; Li, Y.; Lister, A.; Littlejohn, B. R.; Lockwitz, S.; Lorca, D.; Louis, W. C.; Luethi, M.; Lundberg, B.; Luo, X.; Marchionni, A.; Mariani, C.; Marshall, J.; Martinez Caicedo, D. A.; Meddage, V.; Miceli, T.; Mills, G. B.; Moon, J.; Mooney, M.; Moore, C. D.; Mousseau, J.; Murrells, R.; Naples, D.; Nienaber, P.; Nowak, J.; Palamara, O.; Paolone, V.; Papavassiliou, V.; Pate, S. F.; Pavlovic, Z.; Porzio, D.; Pulliam, G.; Qian, X.; Raaf, J. L.; Rafique, A.; Rochester, L.; von Rohr, C. Rudolf; Russell, B.; Schmitz, D. W.; Schukraft, A.; Seligman, W.; Shaevitz, M. H.; Sinclair, J.; Snider, E. L.; Soderberg, M.; Söldner-Rembold, S.; Soleti, S. R.; Spentzouris, P.; Spitz, J.; St. John, J.; Strauss, T.; Szelc, A. M.; Tagg, N.; Terao, K.; Thomson, M.; Toups, M.; Tsai, Y.-T.; Tufanli, S.; Usher, T.; Van de Water, R. G.; Viren, B.; Weber, M.; Weston, J.; Wickremasinghe, D. A.; Wolbers, S.; Wongjirad, T.; Woodruff, K.; Yang, T.; Zeller, G. P.; Zennamo, J.; Zhang, C.
2017-03-01
We present several studies of convolutional neural networks applied to data coming from the MicroBooNE detector, a liquid argon time projection chamber (LArTPC). The algorithms studied include the classification of single particle images, the localization of single particle and neutrino interactions in an image, and the detection of a simulated neutrino event overlaid with cosmic ray backgrounds taken from real detector data. These studies demonstrate the potential of convolutional neural networks for particle identification or event detection on simulated neutrino interactions. We also address technical issues that arise when applying this technique to data from a large LArTPC at or near ground level.
Single-Event Transient Response of Comparator Pre-Amplifiers in a Complementary SiGe Technology
NASA Astrophysics Data System (ADS)
Ildefonso, Adrian; Lourenco, Nelson E.; Fleetwood, Zachary E.; Wachter, Mason T.; Tzintzarov, George N.; Cardoso, Adilson S.; Roche, Nicolas J.-H.; Khachatrian, Ani; McMorrow, Dale; Buchner, Stephen P.; Warner, Jeffrey H.; Paki, Pauline; Kaynak, Mehmet; Tillack, Bernd; Cressler, John D.
2017-01-01
The single-event transient (SET) response of the pre-amplification stage of two latched comparators designed using either npn or pnp silicon-germanium heterojunction bipolar transistors (SiGe HBTs) is investigated via two-photon absorption (TPA) carrier injection and mixed-mode TCAD simulations. Experimental data and TCAD simulations showed an improved SET response for the pnp comparator circuit. 2-D raster scans revealed that the devices in the pnp circuit exhibit a reduction in sensitive area of up to 80% compared to their npn counterparts. In addition, by sweeping the input voltage, the sensitive operating region with respect to SETs was determined. By establishing a figure-of-merit, relating the transient peaks and input voltage polarities, the pnp device was determined to have a 21.4% improved response with respect to input voltage. This study has shown that using pnp devices is an effective way to mitigate SETs, and could enable further radiation-hardening-by-design techniques.
Medium-energy heavy-ion single-event-burnout imaging of power MOSFETs
NASA Astrophysics Data System (ADS)
Musseau, O.; Torres, A.; Campbell, A. B.; Knudson, A. R.; Buchner, S.; Fischer, B.; Schlogl, M.; Briand, P.
1999-12-01
We present the first experimental determination of the SEB-sensitive area in a power MOSFET irradiated with a high-LET heavy-ion microbeam. We used a spectroscopy technique to perform coincident measurements of the charge collected in both the source and drain junctions, together with a nondestructive current-limitation technique. The resulting charge collection images are related to the physical structure of the individual cells. These experimental data reveal the complex three-dimensional behavior of a real structure, which cannot easily be simulated using available tools. As the drain voltage is increased, the onset of burnout is reached, characterized by a sudden change in the charge collection image. "Hot spots" are observed where the collected charge reaches its maximum value. These spots, due to burnout-triggering events, correspond to areas where the silicon is degraded through thermal effects along a single ion track. This direct observation of SEB-sensitive areas has applications both for device hardening, by modifying the doping profiles or the layout of the cells, and for code calibration and device simulation.
NASA Technical Reports Server (NTRS)
Koontz, Steve; Atwell, William; Reddell, Brandon; Rojdev, Kristina
2010-01-01
Analysis of both satellite and surface neutron monitor data demonstrates that the widely utilized Exponential model of solar particle event (SPE) proton kinetic energy spectra can seriously underestimate SPE proton flux, especially at the highest kinetic energies. The more recently developed Band model produces better agreement with neutron monitor data for ground level events (GLEs) and is believed to be considerably more accurate at high kinetic energies. Here, we report the results of modeling and simulation studies in which the radiation transport code FLUKA (FLUktuierende KAskade) is used to determine the changes in total ionizing dose (TID) and single-event effect (SEE) environments behind aluminum, polyethylene, carbon, and titanium shielding masses when the assumed form (i.e., Band or Exponential) of the SPE kinetic energy spectrum is changed. The FLUKA simulations are fully three-dimensional, with an isotropic particle flux incident on a concentric spherical-shell shielding mass and detector structure. Effects are reported both for energetic primary protons penetrating the shield mass and for secondary particle showers caused by energetic primary protons colliding with shielding mass nuclei. Our results, in agreement with previous studies, show that use of the Exponential form of the event spectrum can significantly underestimate the TID and SEE environments behind shielding.
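The spectral forms being compared can be sketched as follows. In the SEP literature the Band form is a double power law in rigidity with an exponential rollover joined smoothly to a pure power law; the parameter values below are hypothetical, chosen only to illustrate how an exponential fit matched at low rigidity falls far below a Band-type spectrum at high rigidity.

```python
import math

def exponential_fluence(R, J0, R0):
    """Exponential-in-rigidity spectrum: J(>R) = J0 * exp(-R / R0)."""
    return J0 * math.exp(-R / R0)

def band_fluence(R, J0, gamma1, gamma2, R0):
    """Band-type double power law in rigidity: an exponential rollover
    at low rigidity joined continuously to a pure power law above the
    break rigidity Rb = (gamma2 - gamma1) * R0."""
    Rb = (gamma2 - gamma1) * R0
    if R <= Rb:
        return J0 * R ** (-gamma1) * math.exp(-R / R0)
    return J0 * R ** (-gamma2) * Rb ** (gamma2 - gamma1) * math.exp(gamma1 - gamma2)

# Hypothetical parameters (not fitted to any real event): at 2 GV the
# Band form retains orders of magnitude more fluence than the
# exponential, which is why the shielded TID/SEE environments differ.
j_band = band_fluence(2.0, 1e9, 0.5, 5.0, 0.1)
j_exp = exponential_fluence(2.0, 1e9, 0.1)
```

The high-rigidity excess of the Band form is exactly the population of protons that penetrates thick shielding and drives the secondary showers discussed in the abstract.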
Yu, Pengtao; Wang, Yanhui; Coles, Neil; Xiong, Wei; Xu, Lihong
2015-01-01
The "Grain for Green Project" is a country-wide ecological program to convert marginal cropland to forest, implemented in China since 2002. To quantify the influence of this significant vegetation change, the Guansihe Hydrological (GSH) Model, a validated physically based distributed hydrological model, was applied to simulate runoff responses to land use change in the Guansihe watershed, which is located in the upper reaches of the Yangtze River basin in southwestern China and has an area of only 21.1 km². Runoff responses to two single rainfall events, of 90 mm and 206 mm respectively, were simulated for 16 scenarios of cropland-to-forest conversion. The model simulations indicated that the total runoff generated after conversion to forest depended strongly on whether the land was initially used for dry cropland without standing water in the fields or for constructed (walled) paddy fields. The simulated total runoff generated from the two rainfall events displayed limited variation for the conversion of dry cropland to forest, but decreased strongly after paddy fields were converted to forest. The effect of paddy terraces on runoff generation depended on the rainfall characteristics and the antecedent moisture (or saturation) conditions in the fields. The reduction in simulated runoff generated from intense rainfall events suggests that afforestation and terracing may be effective in managing runoff and have the potential to mitigate flooding in southwestern China. PMID:26192181
Scaling and Single Event Effects (SEE) Sensitivity
NASA Technical Reports Server (NTRS)
Oldham, Timothy R.
2003-01-01
This paper begins by discussing the potential for scaling down transistors and other components to fit more of them on chips in order to increase computer processing speed, and it addresses the technical challenges to further scaling. Components have been scaled down enough that single particles can have an effect, known as a Single Event Effect (SEE). The paper explores the relationship between scaling and the following SEEs: Single Event Upsets (SEU) in DRAMs and SRAMs, latch-up, snap-back, Single Event Burnout (SEB), Single Event Gate Rupture (SEGR), and ion-induced soft breakdown (SBD).
NASA Technical Reports Server (NTRS)
Carr, Mary-Elena
1998-01-01
A size-based ecosystem model was modified to include periodic upwelling events and used to evaluate the effect of episodic nutrient supply on the standing stock, carbon uptake, and carbon flow into mesozooplankton grazing and sinking flux in a coastal upwelling regime. Two ecosystem configurations were compared: a single food chain made up of net phytoplankton and mesozooplankton (one autotroph and one heterotroph, A1H1), and three interconnected food chains plus bacteria (three autotrophs and four heterotrophs, A3H4). The carbon pathways in the A1H1 simulations were under stronger physical control than those of the A3H4 runs, where the small size classes are not affected by frequent upwelling events. In the more complex food web simulations, the microbial pathway determines the total carbon uptake and grazing rates, and regenerated nitrogen accounts for more than half of the total primary production for periods of 20 days or longer between events. By contrast, new production, export of carbon through sinking, and mesozooplankton grazing are more important in the A1H1 simulations. In the A3H4 simulations, the turnover time scale of the autotroph biomass increases as the period between upwelling events increases, because of the larger contribution of slow-growing net phytoplankton. The upwelling period was characterized for three upwelling sites from the alongshore wind speed measured by the NASA Scatterometer (NSCAT), and the corresponding model output was compared with literature data. This validation exercise for three upwelling sites and a downstream embayment suggests that standing stock, carbon uptake, and size fractionation were best supported by the A3H4 simulations, while the simulated sinking fluxes are not distinguishable between the two configurations.
Comparison of hybrid and pure Monte Carlo shower generators on an event by event basis
NASA Astrophysics Data System (ADS)
Allen, J.; Drescher, H.-J.; Farrar, G.
SENECA is a hybrid air shower simulation, written by H. Drescher, that utilizes both Monte Carlo simulation and cascade equations. By using the cascade equations only in the high-energy portion of the shower, where they are extremely accurate, SENECA gains the speed advantage of the cascade equations yet still produces complete, three-dimensional particle distributions at ground level. We present a comparison, on an event-by-event basis, of SENECA and CORSIKA, a well-trusted Monte Carlo simulation. By using the same first interaction in both SENECA and CORSIKA, the effect of the cascade equations can be studied within a single shower rather than averaged over many showers. Our study shows that, for showers produced in this manner, SENECA agrees with CORSIKA to very high accuracy in the densities, energies, and timing information of individual species of ground-level particles, for both iron and proton primaries with energies between 1 EeV and 100 EeV. Used properly, SENECA produces ground particle distributions virtually indistinguishable from those of CORSIKA in a fraction of the time; for example, for a shower induced by a 40 EeV proton simulated with 10^-6 thinning, SENECA is 10 times faster than CORSIKA.
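Why an analytic treatment works so well at high energies can be seen from the classic Heitler toy model of a cascade, in which shower development is essentially deterministic. This is an illustrative toy model only; SENECA solves full hadronic/electromagnetic cascade equations, not this model.

```python
import math

def heitler_shower(e0_ev, e_crit_ev=85e6, rad_len=37.0):
    """Heitler toy model of an electromagnetic cascade: the particle
    number doubles every splitting length d = X0 * ln 2 until the
    energy per particle drops below the critical energy E_c (~85 MeV
    in air). Returns (particles at shower maximum, depth of maximum
    in g/cm^2). Illustrative only, not SENECA's cascade equations."""
    n_gen = math.log2(e0_ev / e_crit_ev)   # number of doubling generations
    n_max = e0_ev / e_crit_ev              # particles at shower maximum
    x_max = rad_len * math.log(2) * n_gen  # depth of shower maximum
    return n_max, x_max

# A 1 EeV shower in this toy model: ~1e10 particles at maximum,
# reached after roughly 33 generations.
n_max, x_max = heitler_shower(1e18)
```

Because the number of particles at high energy is so large, statistical fluctuations average out, which is precisely the regime where cascade equations replace Monte Carlo sampling at a fraction of the cost.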
Classifier for gravitational-wave inspiral signals in nonideal single-detector data
NASA Astrophysics Data System (ADS)
Kapadia, S. J.; Dent, T.; Dal Canton, T.
2017-11-01
We describe a multivariate classifier for candidate events in a templated search for gravitational-wave (GW) inspiral signals from neutron-star-black-hole (NS-BH) binaries, in data from ground-based detectors where sensitivity is limited by non-Gaussian noise transients. The standard signal-to-noise ratio (SNR) and chi-squared test for inspiral searches use only properties of a single matched filter at the time of an event; instead, we propose a classifier using features derived from a bank of inspiral templates around the time of each event, and also from a search using approximate sine-Gaussian templates. The classifier thus extracts additional information from strain data to discriminate inspiral signals from noise transients. We evaluate a random forest classifier on a set of single-detector events obtained from realistic simulated advanced LIGO data, using simulated NS-BH signals added to the data. The new classifier detects a factor of 1.5-2 more signals at low false positive rates as compared to the standard "reweighted SNR" statistic, and does not require the chi-squared test to be computed. Conversely, if only the SNR and chi-squared values of single-detector events are available, random forest classification performs nearly identically to the reweighted SNR.
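A minimal stand-in for the random forest classification step can be sketched with scikit-learn. The two features and all their values below are synthetic assumptions: "signal" candidates are given higher matched-filter SNR and lower reduced chi-squared than "noise transients", whereas the paper's classifier uses a much richer feature set derived from the template bank and sine-Gaussian search.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic two-feature candidates: column 0 = SNR, column 1 = chi^2.
rng = np.random.default_rng(0)
n = 1000
signals = np.column_stack([rng.normal(9.0, 1.5, n),    # higher SNR
                           rng.normal(1.0, 0.3, n)])   # lower chi^2
noise = np.column_stack([rng.normal(7.0, 1.5, n),
                         rng.normal(2.5, 0.8, n)])
X = np.vstack([signals, noise])
y = np.concatenate([np.ones(n), np.zeros(n)])  # 1 = signal, 0 = noise

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_tr, y_tr)
accuracy = clf.score(X_te, y_te)
```

In a real search one would rank candidates by `clf.predict_proba` and evaluate detections at fixed false positive rate, as the paper does, rather than by raw accuracy.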
The Ongoing and Open-Ended Simulation
ERIC Educational Resources Information Center
Cohen, Alexander
2016-01-01
This case study explores a novel form of classroom simulation that differs from published examples in two important respects. First, it is ongoing. While most simulations represent a single learning episode embedded within a course, the ongoing simulation is a continuous set of interrelated events and decisions that accompany learning throughout…
Evaluation of PET Imaging Resolution Using 350 μm Pixelated CZT as a VP-PET Insert Detector
NASA Astrophysics Data System (ADS)
Yin, Yongzhi; Chen, Ximeng; Li, Chongzheng; Wu, Heyu; Komarov, Sergey; Guo, Qingzhen; Krawczynski, Henric; Meng, Ling-Jian; Tai, Yuan-Chuan
2014-02-01
A cadmium-zinc-telluride (CZT) detector with 350 μm pitch pixels was studied for high-resolution positron emission tomography (PET) imaging applications. The PET imaging system was based on coincidence detection between a CZT detector and a lutetium oxyorthosilicate (LSO)-based Inveon PET detector in a virtual-pinhole PET geometry. The LSO detector is a 20 × 20 array with 1.6 mm pitch and 10 mm thickness. The CZT detector uses a 20 × 20 × 5 mm substrate with 350 μm pitch pixelated anodes and a coplanar cathode. A NEMA NU4 Na-22 point source, 250 μm in diameter, was imaged with this system. Experiments show that the image resolution for single-pixel photopeak events was 590 μm FWHM, while the image resolution for double-pixel photopeak events was 640 μm FWHM. Including double-pixel full-energy events increased the sensitivity of the imaging system. To validate the imaging experiment, we conducted a Monte Carlo (MC) simulation of the same PET system in GATE (Geant4 Application for Tomographic Emission), defining the LSO detectors as a scanner ring and the 350 μm pixelated CZT detectors as an insert ring. The GATE-simulated coincidence data were sorted into an insert-scanner sinogram and reconstructed. The image resolution of the MC-simulated data (which did not account for positron range or acolinearity effects) was 460 μm FWHM for single-pixel events. The image resolutions of the experimental data, the MC-simulated data, and the theoretical calculation are all close to 500 μm FWHM when the proposed 350 μm pixelated CZT detector is used as a PET insert. An interpolation algorithm for charge-sharing events was also investigated; the PET image reconstructed using this algorithm shows improved resolution compared with the image reconstructed without it.
Variability of simulants used in recreating stab events.
Carr, D J; Wainwright, A
2011-07-15
Forensic investigators commonly use simulants/backing materials on which to mount fabrics and/or garments when recreating damage due to stab events. Such work may be conducted in support of an investigation to connect a particular knife to a stabbing event by comparing the severance morphology obtained in the laboratory to that observed in the incident. There does not appear to have been a comparison of the effect of simulant type on the morphology of severances in fabrics and simulants, nor of the variability of simulants. This work investigates three simulants (pork, gelatine, expanded polystyrene (EPS)) and two knife blades (carving, bread), and how severances in the simulants and in an apparel fabric typically used to manufacture T-shirts (single jersey) were affected by (i) simulant type and (ii) blade type. Severances were formed using a laboratory impact apparatus to ensure a consistent impact velocity, and hence impact energy, independently of the other variables. The impact velocity was chosen so that the force measured was similar to that measured in human performance trials. Force-time and energy-time curves were analysed and severance morphology (y, z directions) investigated. Simulant type and knife type significantly affected the critical forensic measurement of severance length (y direction) in the fabric and 'skin' (Tuftane). The use of EPS resulted in the lowest variability in the data; furthermore, the severances recorded in both the fabric and the Tuftane more accurately reflected the dimensions of the impacting knives. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
LaBel, Kenneth A.; OBryan, Martha V.; Chen, Dakai; Campola, Michael J.; Casey, Megan C.; Pellish, Jonathan A.; Lauenstein, Jean-Marie; Wilcox, Edward P.; Topper, Alyson D.; Ladbury, Raymond L.;
2014-01-01
We present results and analysis investigating the susceptibility of a variety of candidate spacecraft electronics to proton- and heavy-ion-induced single event effects (SEE), proton-induced displacement damage (DD), and total ionizing dose (TID). Introduction: This paper is a summary of test results. NASA spacecraft are subjected to a harsh space environment that includes exposure to various types of ionizing radiation. The performance of electronic devices in a space radiation environment is often limited by their susceptibility to single event effects (SEE), total ionizing dose (TID), and displacement damage (DD). Ground-based testing is used to evaluate candidate spacecraft electronics to determine the risk to spaceflight applications. Interpreting the results of radiation testing of complex devices is quite difficult. Given the rapidly changing nature of technology, radiation test data are most often application-specific, and an adequate understanding of the test conditions is critical. Studies discussed herein were undertaken to establish the application-specific sensitivities of candidate spacecraft and emerging electronic devices to single-event upset (SEU), single-event latchup (SEL), single-event gate rupture (SEGR), single-event burnout (SEB), single-event transient (SET), TID, enhanced low dose rate sensitivity (ELDRS), and DD effects.
Microdose Induced Drain Leakage Effects in Power Trench MOSFETs: Experiment and Modeling
NASA Astrophysics Data System (ADS)
Zebrev, Gennady I.; Vatuev, Alexander S.; Useinov, Rustem G.; Emeliyanov, Vladimir V.; Anashin, Vasily S.; Gorbunov, Maxim S.; Turin, Valentin O.; Yesenkov, Kirill A.
2014-08-01
We study experimentally and theoretically the microdose-induced drain-source leakage current in trench power MOSFETs under irradiation with high-LET heavy ions. We found experimentally that the cumulative increase of the leakage current occurs through stochastic spikes, each corresponding to the strike of a single heavy ion in the MOSFET gate oxide. We simulate this effect with a proposed analytic model that describes (including via Monte Carlo methods) both the deterministic (cumulative dose) and stochastic (single event) aspects of the problem. Based on this model, a survival probability assessment for the high-LET heavy-ion space environment is proposed.
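The stochastic-spike picture described here lends itself to a simple Monte Carlo sketch: leakage accumulates through random increments, one per ion strike. The hit probability, the exponential spike law, and all parameter values below are illustrative assumptions, not the authors' model.

```python
import random

def simulate_leakage(n_ions, p_hit, spike_mean, seed=1):
    """Monte Carlo sketch: drain leakage grows by a random spike each time
    an incident ion strikes the gate oxide (compound-Poisson-like growth)."""
    rng = random.Random(seed)
    leakage = 0.0
    history = []
    for _ in range(n_ions):
        if rng.random() < p_hit:                          # ion hits the oxide
            leakage += rng.expovariate(1.0 / spike_mean)  # random spike size
        history.append(leakage)                           # cumulative leakage
    return history
```

Averaging many such runs recovers the deterministic (cumulative dose) trend, while a single run shows the stochastic staircase of individual strikes.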
Application of RADSAFE to Model Single Event Upset Response of a 0.25 micron CMOS SRAM
NASA Technical Reports Server (NTRS)
Warren, Kevin M.; Weller, Robert A.; Sierawski, Brian; Reed, Robert A.; Mendenhall, Marcus H.; Schrimpf, Ronald D.; Massengill, Lloyd; Porter, Mark; Wilkerson, Jeff; LaBel, Kenneth A.;
2006-01-01
The RADSAFE simulation framework is described and applied to model Single Event Upsets (SEU) in a 0.25 micron CMOS 4Mbit Static Random Access Memory (SRAM). For this circuit, the RADSAFE approach produces trends similar to those expected from classical models, but more closely represents the physical mechanisms responsible for SEU in the SRAM circuit.
Single Event Effect Testing of the Micron MT46V128M8
NASA Technical Reports Server (NTRS)
Stansberry, Scott; Campola, Michael; Wilcox, Ted; Seidleck, Christina; Phan, Anthony
2017-01-01
The Micron MT46V128M8 was tested for single event effects (SEE) at the Texas A&M University Cyclotron Facility (TAMU) in June of 2017. Testing revealed a sensitivity to device hang-ups classified as single event functional interrupts (SEFI) and possible soft data errors classified as single event upsets (SEU).
Dual Interlocked Logic for Single-Event Transient Mitigation
2017-03-01
SPICE simulation and fault-injection analysis. Exemplar SPICE simulations have been performed in a 32 nm partially-depleted silicon-on-insulator...in this work. The model has been validated at the 32 nm SOI technology node with extensive heavy-ion data [7]. For the SPICE simulations, three
NASA Astrophysics Data System (ADS)
Benjamin, J.; Rosser, N. J.; Dunning, S.; Hardy, R. J.; Karim, K.; Szczucinski, W.; Norman, E. C.; Strzelecki, M.; Drewniak, M.
2014-12-01
Risk assessments of the threat posed by rock avalanches rely upon numerical modelling of potential run-out and spreading, and are contingent upon a thorough understanding of the flow dynamics inferred from deposits left by previous events. Few records exist of multiple rock avalanches with boundary conditions sufficiently consistent to develop a set of more generalised rules for behaviour across events. A unique cluster of 20 large (3 × 10⁶ - 94 × 10⁶ m³) rock avalanche deposits along the Vaigat Strait, West Greenland, offers a rare opportunity to model a large sample of adjacent events sourced from a stretch of coastal mountains of relatively uniform geology and structure. Our simulations of these events were performed using VolcFlow, a geophysical mass flow code developed to simulate volcanic debris avalanches. Rheological calibration of the model was performed using a well-constrained event at Paatuut (AD 2000). The best-fit simulation assumes a constant retarding stress with a collisional stress coefficient (T0 = 250 kPa, ξ = 0.01), and simulates run-out to within ±0.3% of that observed. Despite being widely used to simulate rock avalanche propagation, other models that assume either a Coulomb frictional or a Voellmy rheology failed to reproduce the observed event characteristics and deposit distribution at Paatuut. We applied this calibration to 19 other events, simulating rock avalanche motion across 3D terrain of varying levels of complexity. Our findings illustrate both the utility and the sensitivity of modelling a single rock avalanche as a function of rheology, and test the validity of applying the same parameters elsewhere, even under similar boundary conditions. VolcFlow can plausibly account for the observed morphology of a series of deposits emplaced by events of different types, although its performance is sensitive to a range of topographic and geometric factors.
These exercises show encouraging results in the model's ability to simulate a series of events using a single set of parameters obtained by back-analysis of the Paatuut event alone. The results also hold important implications for our process understanding of rock avalanches in confined fjord settings, where correctly modelling material flux at the point of entry into the water is critical in tsunami generation.
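As a back-of-the-envelope illustration of what a constant retarding stress implies for run-out, consider a 1-D sliding-block caricature (this is not VolcFlow, and the density, thickness, and velocity values are illustrative assumptions): a block of thickness h and density ρ decelerating on flat ground under a basal stress T0 stops after x = v₀²ρh / (2 T0).

```python
def runout_distance(v0, t0, rho, h):
    """Run-out of a sliding mass on flat ground under a constant retarding
    basal stress t0 (1-D block sketch, not the full VolcFlow model).

    v0  : velocity entering the flat [m/s]
    t0  : constant retarding stress [Pa]
    rho : bulk density [kg/m^3]
    h   : flow thickness [m]
    """
    decel = t0 / (rho * h)          # deceleration from the basal stress
    return v0 * v0 / (2.0 * decel)  # stopping distance
```

The sketch makes the key sensitivity explicit: halving T0 doubles the predicted run-out, which is why the rheological calibration at Paatuut matters so much for the other 19 events.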
Effects of space radiation on electronic microcircuits
NASA Technical Reports Server (NTRS)
Kolasinski, W. A.
1989-01-01
The single event effects or phenomena (SEP), which so far have been observed as events falling into one or another of the following classes: single event upset (SEU), single event latchup (SEL), and single event burnout (SEB), are examined. Single event upset is defined as a lasting, but reversible, change in the state of a multistable (usually bistable) electronic circuit such as a flip-flop or latch. In a computer memory, SEUs manifest themselves as unexplained bit flips. Since latchup is in general caused by a single event of short duration, the 'single event' part of the SEL term is superfluous. Nevertheless, it is customarily used to differentiate latchup due to a single heavy charged particle striking a sensitive cell from more ordinary kinds of latchup. Single event burnout (SEB) usually refers to total instantaneous failure of a power FET when struck by a single particle, with the device shorting out the power supply. Unforeseen failures of these kinds can be catastrophic to a space mission, and the possibilities are discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Acciarri, R.; Adams, C.; An, R.
Here, we present several studies of convolutional neural networks applied to data coming from the MicroBooNE detector, a liquid argon time projection chamber (LArTPC). The algorithms studied include the classification of single particle images, the localization of single particle and neutrino interactions in an image, and the detection of a simulated neutrino event overlaid with cosmic ray backgrounds taken from real detector data. These studies demonstrate the potential of convolutional neural networks for particle identification or event detection on simulated neutrino interactions. Lastly, we also address technical issues that arise when applying this technique to data from a large LArTPC at or near ground level.
NASA Technical Reports Server (NTRS)
Zoutendyk, J. A.; Smith, L. S.; Soli, G. A.; Thieberger, P.; Wegner, H. E.
1985-01-01
Single-Event Upset (SEU) response of a bipolar low-power Schottky-diode-clamped TTL static RAM has been observed using Br ions in the 100-240 MeV energy range and O ions in the 20-100 MeV range. These data complete the experimental verification of circuit-simulation SEU modeling for this device. The threshold for onset of SEU has been observed by the variation of energy, ion species and angle of incidence. The results obtained from the computer circuit-simulation modeling and experimental model verification demonstrate a viable methodology for modeling SEU in bipolar integrated circuits.
Gatidis, Sergios; Würslin, Christian; Seith, Ferdinand; Schäfer, Jürgen F; la Fougère, Christian; Nikolaou, Konstantin; Schwenzer, Nina F; Schmidt, Holger
2016-01-01
Optimization of tracer dose regimes in positron emission tomography (PET) imaging is a trade-off between diagnostic image quality and radiation exposure. The challenge lies in defining minimal tracer doses that still result in sufficient diagnostic image quality. In order to find such minimal doses, it would be useful to simulate tracer dose reduction, as this would make it possible to study the effects of tracer dose reduction on image quality in single patients without repeated injections of different amounts of tracer. The aim of our study was to introduce and validate a method for simulation of low-dose PET images enabling direct comparison of different tracer doses in single patients and under constant influencing factors. (18)F-fluoride PET data were acquired on a combined PET/magnetic resonance imaging (MRI) scanner. PET data were stored together with the temporal information of the occurrence of single events (list-mode format). A predefined proportion of PET events were then randomly deleted, resulting in undersampled PET data. These data sets were subsequently reconstructed, resulting in simulated low-dose PET images (retrospective undersampling of list-mode data). This approach was validated in phantom experiments by visual inspection and by comparison of the PET quality metrics contrast recovery coefficient (CRC), background variability (BV), and signal-to-noise ratio (SNR) between measured and simulated PET images for different activity concentrations. In addition, reduced-dose PET images of a clinical (18)F-FDG PET dataset were simulated using the proposed approach. (18)F-PET image quality degraded with decreasing activity concentrations, with comparable visual image characteristics in measured and in corresponding simulated PET images. This result was confirmed by quantification of image quality metrics. CRC, SNR, and BV showed concordant behavior with decreasing activity concentrations for measured and for corresponding simulated PET images.
Simulation of dose-reduced datasets based on clinical (18)F-FDG PET data demonstrated the clinical applicability of the proposed method. Simulation of PET tracer dose reduction is thus possible with retrospective undersampling of list-mode data. The resulting simulated low-dose images have characteristics equivalent to those of PET images actually measured at lower doses and can be used to derive optimal tracer dose regimes.
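Retrospective undersampling of list-mode data amounts to random thinning of the recorded events. The sketch below shows the core idea; the event-tuple format and function name are illustrative, not the authors' implementation.

```python
import random

def undersample_listmode(events, dose_fraction, seed=0):
    """Simulate a reduced tracer dose by randomly keeping a fraction of
    recorded list-mode events.

    events        : list of (timestamp, detector_pair) tuples (illustrative)
    dose_fraction : fraction of the original dose to simulate, 0 < f <= 1
    """
    rng = random.Random(seed)
    # Each event survives independently with probability dose_fraction,
    # so the retained count scales (on average) with the simulated dose.
    return [e for e in events if rng.random() < dose_fraction]
```

Reconstructing the thinned data set then yields an image whose noise properties mimic those of a genuinely lower injected dose, which is what the phantom validation checks.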
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhong, Shi; Qian, Yun; Zhao, Chun
Convection-resolving ensemble simulations using the WRF-Chem model coupled with a single-layer Urban Canopy Model (UCM) are conducted to investigate the individual and combined impacts of land use and anthropogenic pollutant emissions from urbanization on a heavy rainfall event in the Greater Beijing Metropolitan Area (GBMA) in China. The simulation with the urbanization effect included generally captures the spatial pattern and temporal variation of the rainfall event. An improvement in the simulated precipitation is found in the experiment including aerosol effects on both clouds and radiation. The expanded urban land cover and increased aerosols have opposite effects on precipitation processes, with the latter playing the more dominant role, leading to suppressed convection and rainfall over the upstream (northwest) area and enhanced convection and more precipitation in the downstream (southeast) region of the GBMA. In addition, the influence of the aerosol indirect effect is found to overwhelm that of the direct effect on precipitation in this rainfall event. Increased aerosols induce more cloud droplets of smaller size, which favors evaporative cooling, reduces updrafts, and suppresses convection over the upstream (northwest) region in the early stage of the rainfall event. As the rainfall system propagates southeastward, more latent heat is released due to the freezing of the larger number of smaller cloud drops that are lofted above the freezing level, which is responsible for the increased updraft strength and convective invigoration over the downstream (southeast) area.
Improving Aircraft Refueling Procedures at Naval Air Station Oceana
2012-06-01
Station (NAS) Oceana, VA, using aircraft waiting time for fuel as a measure of performance. We develop a computer-assisted discrete-event simulation to...server queue, with general interarrival and service time distributions. gpm: gallons per minute; JDK: Java development kit; M/M/1: single-server queue.
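A single-server queue with general interarrival and service times, of the kind this snippet references, can be simulated with the Lindley recursion for successive waiting times. This is a generic illustration, not the thesis' refueling model; the distribution callables are placeholders.

```python
import random

def simulate_queue_waits(n, interarrival, service, seed=0):
    """Waiting times of n customers in a single-server FIFO queue via the
    Lindley recursion: W[i+1] = max(0, W[i] + S[i] - A[i+1]).

    interarrival, service: callables taking a random.Random and returning a
    sampled time, so any (general) distribution can be plugged in.
    """
    rng = random.Random(seed)
    w = 0.0
    waits = [0.0]  # the first customer never waits
    for _ in range(n - 1):
        w = max(0.0, w + service(rng) - interarrival(rng))
        waits.append(w)
    return waits
```

With exponential callables this reproduces an M/M/1 queue; deterministic callables make the congested and uncongested regimes easy to check by hand.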
Single-event burnout hardening of planar power MOSFET with partially widened trench source
NASA Astrophysics Data System (ADS)
Lu, Jiang; Liu, Hainan; Cai, Xiaowu; Luo, Jiajun; Li, Bo; Li, Binhong; Wang, Lixin; Han, Zhengsheng
2018-03-01
We present a single-event burnout (SEB) hardened planar power MOSFET with partially widened trench sources, studied by three-dimensional (3D) numerical simulation. The advantage of the proposed structure is that the operation of the parasitic bipolar transistor inherent in the power MOSFET is suppressed effectively due to the elimination of the most sensitive region (the P-well region below the N+ source). The simulation result shows that the proposed structure can enhance the SEB survivability significantly. The critical value of linear energy transfer (LET), which indicates the maximum deposited energy on the device without SEB behavior, increases from 0.06 to 0.7 pC/μm. The SEB threshold voltage increases to 120 V, which is 80% of the rated breakdown voltage. Meanwhile, the main parameter characteristics of the proposed structure remain similar to those of the conventional planar structure. Therefore, this structure offers a potential optimization path to planar power MOSFETs with high SEB survivability for space and atmospheric applications. Project supported by the National Natural Science Foundation of China (Nos. 61404161, 61404068, 61404169).
Medium-energy heavy-ion single-event-burnout imaging of power MOSFETs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Musseau, O.; Torres, A.; Campbell, A.B.
The authors present the first experimental determination of the SEB-sensitive area in a power MOSFET irradiated with a high-LET heavy-ion microbeam. They used a spectroscopy technique to perform coincident measurements of the charge collected in both the source and drain junctions, together with a non-destructive technique (current limitation). The resulting charge collection images are related to the physical structure of the individual cells. These experimental data reveal the complex 3-dimensional behavior of a real structure, which cannot easily be simulated using available tools. As the drain voltage is increased, the onset of burnout is reached, characterized by a sudden change in the charge collection image. Hot spots are observed where the collected charge reaches its maximum value. These spots, due to burnout-triggering events, correspond to areas where the silicon is degraded through thermal effects along a single ion track. This direct observation of SEB-sensitive areas has applications either for device hardening, by modifying the doping profiles or the layout of the cells, or for code calibration and device simulation.
Studies Of Single-Event-Upset Models
NASA Technical Reports Server (NTRS)
Zoutendyk, J. A.; Smith, L. S.; Soli, G. A.
1988-01-01
Report presents latest in series of investigations of "soft" bit errors known as single-event upsets (SEU). In this investigation, SEU response of low-power, Schottky-diode-clamped, transistor/transistor-logic (TTL) static random-access memory (RAM) observed during irradiation by Br and O ions in ranges of 100 to 240 and 20 to 100 MeV, respectively. Experimental data complete verification of computer model used to simulate SEU in this circuit.
Single-Event Effect Testing of the Linear Technology LTC6103HMS8#PBF Current Sense Amplifier
NASA Technical Reports Server (NTRS)
Yau, Ka-Yen; Campola, Michael J.; Wilcox, Edward
2016-01-01
The LTC6103HMS8#PBF (henceforth abbreviated as LTC6103) current sense amplifier from Linear Technology was tested for both destructive and non-destructive single-event effects (SEE) using the heavy-ion cyclotron accelerator beam at the Lawrence Berkeley National Laboratory (LBNL) Berkeley Accelerator Space Effects (BASE) facility. During testing, the input voltages and output currents were monitored to detect single-event latchup (SEL) and single-event transients (SETs).
Multithreaded Stochastic PDES for Reactions and Diffusions in Neurons.
Lin, Zhongwei; Tropper, Carl; Mcdougal, Robert A; Patoary, Mohammand Nazrul Ishlam; Lytton, William W; Yao, Yiping; Hines, Michael L
2017-07-01
Cells exhibit stochastic behavior when the number of molecules is small. Hence a stochastic reaction-diffusion simulator capable of working at scale can provide a more accurate view of molecular dynamics within the cell. This paper describes a parallel discrete event simulator, Neuron Time Warp-Multi Thread (NTW-MT), developed for the simulation of reaction-diffusion models of neurons. To the best of our knowledge, this is the first parallel discrete event simulator oriented towards stochastic simulation of chemical reactions in a neuron. The simulator was developed as part of the NEURON project. NTW-MT is an optimistic, thread-based simulator that attempts to capitalize on the multi-core architectures used in high-performance machines. It makes use of a multi-level queue for the pending event set and a single roll-back message in place of individual anti-messages to disperse contention and decrease the overhead of processing rollbacks. Global Virtual Time is computed asynchronously both within and among processes to eliminate the overhead of synchronizing threads. Memory usage is managed to avoid locking and unlocking when allocating and de-allocating memory and to maximize cache locality. We verified our simulator on a calcium buffer model. We examined its performance on a calcium wave model, comparing it to the performance of a process-based optimistic simulator and a threaded simulator that uses a single priority queue for each thread. Our multi-threaded simulator is shown to achieve performance superior to both. Finally, we demonstrated the scalability of our simulator on a larger CICR model and a more detailed CICR model.
Single event effect testing of the Intel 80386 family and the 80486 microprocessor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moran, A.; LaBel, K.; Gates, M.
The authors present single event effect test results for the Intel 80386 microprocessor, the 80387 coprocessor, the 82380 peripheral device, and the 80486 microprocessor. Both single event upset and latchup conditions were monitored.
Sahoo, Avimanyu; Xu, Hao; Jagannathan, Sarangapani
2016-01-01
This paper presents a novel adaptive neural network (NN) control of single-input and single-output uncertain nonlinear discrete-time systems under event sampled NN inputs. In this control scheme, the feedback signals are transmitted, and the NN weights are tuned in an aperiodic manner at the event sampled instants. After reviewing the NN approximation property with event sampled inputs, an adaptive state estimator (SE), consisting of linearly parameterized NNs, is utilized to approximate the unknown system dynamics in an event sampled context. The SE is viewed as a model and its approximated dynamics and the state vector, during any two events, are utilized for the event-triggered controller design. An adaptive event-trigger condition is derived by using both the estimated NN weights and a dead-zone operator to determine the event sampling instants. This condition both facilitates the NN approximation and reduces the transmission of feedback signals. The ultimate boundedness of both the NN weight estimation error and the system state vector is demonstrated through the Lyapunov approach. As expected, during an initial online learning phase, events are observed more frequently. Over time with the convergence of the NN weights, the inter-event times increase, thereby lowering the number of triggered events. These claims are illustrated through the simulation results.
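The event-trigger idea described here (transmit the state only when it has drifted far enough from the last transmitted value) can be sketched in a few lines. The trigger form, gains, and dynamics below are illustrative assumptions, not the paper's adaptive NN-based condition.

```python
import numpy as np

def event_triggered_run(x0, f, controller, sigma, steps):
    """Event-sampled feedback sketch: the controller sees the state only at
    event instants, chosen when the gap ||x - x_last|| exceeds a
    state-dependent threshold sigma * ||x||."""
    x = np.asarray(x0, dtype=float)
    x_last = x.copy()      # last transmitted (event-sampled) state
    n_events = 0
    for _ in range(steps):
        gap = np.linalg.norm(x - x_last)
        if gap > sigma * np.linalg.norm(x):  # event-trigger condition
            x_last = x.copy()                # transmit state to controller
            n_events += 1
        u = controller(x_last)               # control uses event-sampled state
        x = f(x, u)                          # advance the plant one step
    return x, n_events
```

Counting `n_events` against `steps` shows the saving over periodic sampling: feedback is transmitted only when the trigger fires, not at every step.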
Compendium of Current Single Event Effects for Candidate Spacecraft Electronics for NASA
NASA Technical Reports Server (NTRS)
O'Bryan, Martha V.; Label, Kenneth A.; Chen, Dakai; Campola, Michael J.; Casey, Megan C.; Lauenstein, Jean-Marie; Pellish, Jonathan A.; Ladbury, Raymond L.; Berg, Melanie D.
2015-01-01
NASA spacecraft are subjected to a harsh space environment that includes exposure to various types of ionizing radiation. The performance of electronic devices in a space radiation environment is often limited by their susceptibility to single event effects (SEE). Ground-based testing is used to evaluate candidate spacecraft electronics to determine the risk to spaceflight applications. Interpreting the results of radiation testing of complex devices is quite difficult, and an adequate understanding of the test conditions is critical. Studies discussed herein were undertaken to establish the application-specific sensitivities of candidate spacecraft and emerging electronic devices to single-event upset (SEU), single-event latchup (SEL), single-event gate rupture (SEGR), single-event burnout (SEB), and single-event transient (SET). For total ionizing dose (TID) and displacement damage dose (DDD) results, see a companion paper submitted to the 2015 Institute of Electrical and Electronics Engineers (IEEE) Nuclear and Space Radiation Effects Conference (NSREC) Radiation Effects Data Workshop (REDW) entitled "Compendium of Current Total Ionizing Dose and Displacement Damage for Candidate Spacecraft Electronics for NASA" by M. Campola, et al.
NASA Technical Reports Server (NTRS)
Wang, Shuguang; Sobel, Adam H.; Fridlind, Ann; Feng, Zhe; Comstock, Jennifer M.; Minnis, Patrick; Nordeen, Michele L.
2015-01-01
The recently completed CINDY/DYNAMO field campaign observed two Madden-Julian oscillation (MJO) events in the equatorial Indian Ocean from October to December 2011. Prior work has indicated that the moist static energy anomalies in these events grew and were sustained to a significant extent by radiative feedbacks. We present here a study of radiative fluxes and clouds in a set of cloud-resolving simulations of these MJO events. The simulations are driven by the large-scale forcing data set derived from the DYNAMO northern sounding array observations, and carried out in a doubly periodic domain using the Weather Research and Forecasting (WRF) model. Simulated cloud properties and radiative fluxes are compared to those derived from the S-PolKa radar and satellite observations. To accommodate the uncertainty in simulated cloud microphysics, a number of single-moment (1M) and double-moment (2M) microphysical schemes in the WRF model are tested. The 1M schemes tend to underestimate radiative flux anomalies in the active phases of the MJO events, while the 2M schemes perform better, but can overestimate radiative flux anomalies. All the tested microphysics schemes exhibit biases in the shapes of the histograms of radiative fluxes and radar reflectivity. Histograms of radiative fluxes and brightness temperature indicate that radiative biases are not evenly distributed; the most significant bias occurs in rainy areas with OLR less than 150 W/m² in the 2M schemes. Analysis of simulated radar reflectivities indicates that this radiative flux uncertainty is closely related to the simulated stratiform cloud coverage. Single-moment schemes underestimate stratiform cloudiness by a factor of 2, whereas the 2M schemes simulate much more stratiform cloud.
Modeling the effect of pathogenic mutations on the conformational landscape of protein kinases.
Saladino, Giorgio; Gervasio, Francesco Luigi
2016-04-01
Most proteins assume different conformations to perform their cellular functions. This conformational dynamics is physiologically regulated by binding events and post-translational modifications, but can also be affected by pathogenic mutations. Atomistic molecular dynamics simulations complemented by enhanced sampling approaches are increasingly used to probe the effect of mutations on the conformational dynamics and on the underlying conformational free energy landscape of proteins. In this short review we discuss recent successful examples of simulations used to understand the molecular mechanism underlying the deregulation of physiological conformational dynamics due to non-synonymous single point mutations. Our examples are mostly drawn from the protein kinase family. Copyright © 2016 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Altinok, Ozgur
A sample of charged-current single pion production events for the semi-exclusive channel νμ + CH → μ⁻ + π⁰ + nucleon(s) has been obtained using neutrino exposures of the MINERvA detector. Differential cross sections for muon momentum, muon production angle, pion momentum, pion production angle, and four-momentum transfer squared Q² are reported and are compared to a GENIE-based simulation. The cross section versus neutrino energy is also reported. The effects of pion final-state interactions on these cross sections are investigated. The effect of baryon resonance suppression at low Q² is examined, and an event re-weight used by two previous experiments is shown to improve the data-versus-simulation agreement. The differential cross sections in Q² for Eν < 4.0 GeV and Eν ≥ 4.0 GeV are examined, and the shapes of these distributions are compared to those from the experiment's ν̄μ-CC(π⁰) measurement. The polarization of the pπ⁰ system is measured and compared to the simulation predictions. The hadronic invariant mass W distribution is examined for evidence of resonance content, and a search is reported for evidence of a two-particle two-hole (2p2h) contribution. All of the differential cross-section measurements of this thesis are compared with published MINERvA measurements for the νμ-CC(π⁺) and ν̄μ-CC(π⁰) processes.
Tactile Instrument for Aviation
2000-07-30
response times using 8 tactor locations was repeated with a dual memory/tracking task or an air combat simulation to evaluate the effectiveness of the...Global Positioning/Inertial Navigation System technologies into a single system for evaluation in a UH-60 helicopter. A 10-event test operation was...evaluation of the following technology areas needs to be pursued: • Integration of tactile instruments with helmet mounted displays and 3D audio displays
Simulation of Rate-Related (Dead-Time) Losses In Passive Neutron Multiplicity Counting Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Evans, L.G.; Norman, P.I.; Leadbeater, T.W.
Passive Neutron Multiplicity Counting (PNMC) based on Multiplicity Shift Register (MSR) electronics (a form of time correlation analysis) is a widely used non-destructive assay technique for quantifying spontaneously fissile materials such as Pu. At high event rates, dead-time losses perturb the count rates, with the Singles, Doubles and Triples being increasingly affected. Without correction these perturbations are a major source of inaccuracy in the measured count rates and assay values derived from them. This paper presents the simulation of dead-time losses and investigates the effect of applying different dead-time models on the observed MSR data. Monte Carlo methods have been used to simulate neutron pulse trains for a variety of source intensities and with ideal detection geometry, providing an event by event record of the time distribution of neutron captures within the detection system. The action of the MSR electronics was modelled in software to analyse these pulse trains. Stored pulse trains were perturbed in software to apply the effects of dead-time according to the chosen physical process; for example, the ideal paralysable (extending) and non-paralysable models with an arbitrary dead-time parameter. Results of the simulations demonstrate the change in the observed MSR data when the system dead-time parameter is varied. In addition, the paralysable and non-paralysable models of dead-time are compared. These results form part of a larger study to evaluate existing dead-time corrections and to extend their application to correlated sources. (authors)
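The two ideal dead-time models compared in the paper can be applied directly to a stored pulse train (a sorted list of event times). The sketch below is a minimal illustration of that perturbation step; the dead-time value and event times are arbitrary examples.

```python
def apply_dead_time(times, tau, paralysable):
    """Apply a dead-time tau to a sorted list of event times.

    Non-paralysable: events arriving within tau of the last *recorded*
    event are lost. Paralysable (extending): every arrival, recorded or
    not, restarts the dead period, so a busy train can stay dead longer.
    """
    recorded = []
    dead_until = float("-inf")
    for t in times:
        if t >= dead_until:
            recorded.append(t)
            dead_until = t + tau
        elif paralysable:
            dead_until = t + tau  # a lost event still extends the dead period
    return recorded
```

For the same pulse train, the paralysable model never records more events than the non-paralysable one, which is the qualitative difference the simulated MSR data expose.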
The use of dwell time cross-correlation functions to study single-ion channel gating kinetics.
Ball, F G; Kerry, C J; Ramsey, R L; Sansom, M S; Usherwood, P N
1988-01-01
The derivation of cross-correlation functions from single-channel dwell (open and closed) times is described. Simulation of single-channel data for simple gating models, alongside theoretical treatment, is used to demonstrate the relationship of cross-correlation functions to underlying gating mechanisms. It is shown that time irreversibility of gating kinetics may be revealed in cross-correlation functions. Application of cross-correlation function analysis to data derived from the locust muscle glutamate receptor-channel provides evidence for multiple gateway states and time reversibility of gating. A model for the gating of this channel is used to show the effect of omission of brief channel events on cross-correlation functions. PMID:2462924
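The dwell-time cross-correlation function described above can be sketched numerically. The following toy gating model (a sticky hidden mode that lengthens both open and closed dwells; an illustrative assumption, not the locust receptor-channel scheme) shows how correlated gating produces a positive cross-correlation that decays with lag:

```python
import numpy as np

def dwell_cross_corr(open_dwells, closed_dwells, max_lag):
    """r(k): correlation of open dwell i with closed dwell i+k. For a single
    gateway state r(k) is flat near zero; structure in r(k) indicates
    multiple gateway states / correlated gating."""
    n = min(len(open_dwells), len(closed_dwells))
    o, c = np.asarray(open_dwells)[:n], np.asarray(closed_dwells)[:n]
    return np.array([np.corrcoef(o[:n - k], c[k:])[0, 1]
                     for k in range(max_lag + 1)])

# Toy model: a slowly switching hidden mode sets the mean of both the open
# and closed dwell distributions, coupling successive dwell times.
rng = np.random.default_rng(0)
n = 20_000
mode = np.zeros(n, dtype=int)
for i in range(1, n):
    mode[i] = mode[i - 1] if rng.random() < 0.98 else 1 - mode[i - 1]
opens = rng.exponential(np.where(mode == 0, 1.0, 5.0))
closeds = rng.exponential(np.where(mode == 0, 1.0, 5.0))
r = dwell_cross_corr(opens, closeds, max_lag=50)
```

For a simple two-state channel (one open, one closed state) the same function returns values scattered around zero at all lags, which is the discriminating signature the analysis exploits.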
Chirase, N K; Purdy, C W; Avampato, J M
2004-04-01
Dust is an environmental stressor and can become extensive in agricultural production systems. Thirty-six female Spanish goats (average BW 21.1 kg, SEM = 1.31; age = 4 mo) were randomly assigned to simulated dust events or no dust, with or without tilmicosin phosphate treatment, in a 2 x 2 factorial arrangement of treatments to determine effects on performance, rectal temperature, and leukocyte changes. All goats were fed a standard growing diet (13.6% CP) consisting of 37% roughage and 63% concentrate (DM basis). Feed intake was measured daily, and BW (unshrunk) was measured individually every 7 d. The tilmicosin-treated group received tilmicosin phosphate (10 mg/kg BW s.c.) before starting the study. Goats exposed to dust were enclosed as a group inside a canvas tent for 4 h each day, and ground feedyard manure dust (mean particle size 100 microm) was aerosolized inside the tent to simulate a dust event. There was a single dust event (Phase I) followed by rectal temperature measurement and heparinized blood collection for complete cell counts at 0 (pretrial), 4, 12, 20, 44, 68, and 210 h after dust exposure. This was followed by 21 d of chronic dust events (Phase II). The sampling procedures for Phase II were exactly the same as in Phase I, except that samples were obtained daily at 0 (before dust application), 4, 8, and 12 h after each dust event. Dust treatment had no effect (P > 0.05) on feed intake or ADG, but the gain:feed (G:F) ratio was lower (P < 0.05) in the control goats than in the dust-exposed group. Tilmicosin phosphate-treated goats had a higher (P < 0.05) G:F ratio than untreated goats. Dust exposure increased (P < 0.002), but tilmicosin treatment decreased (P < 0.05), rectal temperature at 4 and 8 h. Dust exposure increased (P < 0.02) blood lymphocyte counts compared with controls. These results suggest that simulated dust events altered rectal temperature and leukocyte counts of goats.
Resources for Radiation Test Data
NASA Technical Reports Server (NTRS)
O'Bryan, Martha V.; Casey, Megan C.; Lauenstein, Jean-Marie; LaBel, Ken
2016-01-01
The performance of electronic devices in a space radiation environment is often limited by susceptibility to single-event effects (SEE), total ionizing dose (TID), and displacement damage (DD). Interpreting the results of SEE, TID, and DD testing of complex devices is quite difficult given the rapidly changing nature of both technology and the related radiation issues. Radiation testing is performed to establish the sensitivities of candidate spacecraft electronics to single-event upset (SEU), single-event latchup (SEL), single-event gate rupture (SEGR), single-event burnout (SEB), single-event transients (SETs), TID, and DD effects. Knowing where to search for these test results is a valuable resource for the aerospace engineer or spacecraft design engineer. This poster is intended to be a resource tool for finding radiation test data.
Triggering Mechanism for Neutron Induced Single-Event Burnout in Power Devices
NASA Astrophysics Data System (ADS)
Shoji, Tomoyuki; Nishida, Shuichi; Hamada, Kimimori
2013-04-01
Cosmic ray neutrons can trigger catastrophic failures in power devices. It has been reported that parasitic transistor action causes single-event burnout (SEB) in power metal-oxide-semiconductor field-effect transistors (MOSFETs) and insulated gate bipolar transistors (IGBTs). However, power diodes do not have an inherent parasitic transistor. In this paper, we describe, for the first time, the mechanism triggering SEB in power diodes using transient device simulation. Initially, electron-hole pairs generated by incident recoil ions create a transient current, which increases the electron density in the vicinity of the n-/n+ boundary. The space charge effect of the carriers leads to an increase in the strength of the electric field at the n-/n+ boundary. Finally, the onset of impact ionization at the n-/n+ boundary can trigger SEB. Furthermore, this failure is closely related to diode secondary breakdown. It was clarified that impact ionization at the n-/n+ boundary is the key point of the mechanism triggering SEB in power devices.
Reliability Assessment of GaN Power Switches
2015-04-17
Possibilities for single event burnout testing were examined as well. Device simulation under the conditions of some of the testing was performed on...reverse-bias (HTRB) and single event effects (SEE) tests. 8. Refine test structures, circuits, and procedures, and, if possible, develop
Single event burnout sensitivity of embedded field effect transistors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Koga, R.; Crain, S.H.; Crawford, K.B.
Observations of single event burnout (SEB) in embedded field effect transistors are reported. Both SEB and other single event effects are presented for several pulse width modulation and high frequency devices. A microscope was employed to locate and investigate the damaged areas. A model of the damage mechanism based on the results so obtained is described.
Single event burnout sensitivity of embedded field effect transistors
NASA Astrophysics Data System (ADS)
Koga, R.; Crain, S. H.; Crawford, K. B.; Yu, P.; Gordon, M. J.
1999-12-01
Observations of single event burnout (SEB) in embedded field effect transistors are reported. Both SEB and other single event effects are presented for several pulse width modulation and high frequency devices. A microscope was employed to locate and investigate the damaged areas. A model of the damage mechanism based on the results so obtained is described.
Application of the WEPS and SWEEP models to non-agricultural disturbed lands.
Tatarko, J; van Donk, S J; Ascough, J C; Walker, D G
2016-12-01
Wind erosion not only affects agricultural productivity but also soil, air, and water quality. Dust, and specifically particulate matter ≤10 μm (PM-10), has adverse effects on respiratory health and also reduces visibility along roadways, resulting in auto accidents. The Wind Erosion Prediction System (WEPS) was developed by the USDA-Agricultural Research Service to simulate wind erosion and provide for conservation planning on cultivated agricultural lands. A companion product, known as the Single-Event Wind Erosion Evaluation Program (SWEEP), has also been developed, which consists of the stand-alone WEPS erosion submodel combined with a graphical interface to simulate soil loss from single (i.e., daily) wind storm events. In addition to agricultural lands, wind-driven dust emissions also occur from other anthropogenic sources such as construction sites, mined and reclaimed areas, landfills, and other disturbed lands. Although developed for agricultural fields, WEPS and SWEEP are useful tools for simulating erosion by wind on non-agricultural lands where typical agricultural practices are not employed. On disturbed lands, WEPS can be applied for simulating long-term (i.e., multi-year) erosion control strategies. SWEEP, on the other hand, was developed specifically for disturbed lands and can simulate potential soil loss for site- and date-specific planned surface conditions and control practices. This paper presents novel applications of WEPS and SWEEP for developing erosion control strategies on non-agricultural disturbed lands. Erosion control planning with WEPS and SWEEP using water and other dust suppressants, wind barriers, straw mulch, re-vegetation, and other management practices is demonstrated herein through the use of comparative simulation scenarios.
The scenarios confirm the efficacy of the WEPS and SWEEP models as valuable tools for supporting the design of erosion control plans for disturbed lands that are not only cost-effective but also incorporate a science-based approach to risk assessment.
STORM WATER MANAGEMENT MODEL USER'S MANUAL VERSION 5.0
The EPA Storm Water Management Model (SWMM) is a dynamic rainfall-runoff simulation model used for single event or long-term (continuous) simulation of runoff quantity and quality from primarily urban areas. SWMM was first developed in 1971 and has undergone several major upgrade...
Storm Water Management Model Reference Manual Volume I, Hydrology
SWMM is a dynamic rainfall-runoff simulation model used for single event or long-term (continuous) simulation of runoff quantity and quality from primarily urban areas. The runoff component of SWMM operates on a collection of subcatchment areas that receive precipitation and gene...
Storm Water Management Model Reference Manual Volume II – Hydraulics
SWMM is a dynamic rainfall-runoff simulation model used for single event or long-term (continuous) simulation of runoff quantity and quality from primarily urban areas. The runoff component of SWMM operates on a collection of subcatchment areas that receive precipitation and gene...
NASA Astrophysics Data System (ADS)
Batailly, Alain; Agrapart, Quentin; Millecamps, Antoine; Brunel, Jean-François
2016-08-01
This contribution addresses a comparison between an experimental rotor/stator interaction case initiated by structural contacts and numerical predictions made with an in-house numerical strategy. Contrary to previous studies carried out within the low-pressure compressor of an aircraft engine, this interaction is found to be non-divergent: high amplitudes of vibration are experimentally observed and numerically predicted over a short period of time. An in-depth analysis of experimental data first allows for a precise characterization of the interaction as a rubbing event involving the first torsional mode of a single blade. Numerical results are in good agreement with experimental observations: the critical angular speed, the wear patterns on the casing, and the blade dynamics are accurately predicted. Throughout the article, the in-house numerical strategy is also compared with another numerical strategy that may be found in the literature for the simulation of rubbing events: key differences are underlined with respect to the prediction of non-linear interaction phenomena.
Schroeder, Indra
2015-01-01
Abstract A main ingredient for the understanding of structure/function correlates of ion channels is the quantitative description of single-channel gating and conductance. However, a wealth of information provided from fast current fluctuations beyond the temporal resolution of the recording system is often ignored, even though it is close to the time window accessible to molecular dynamics simulations. This kind of current fluctuations provide a special technical challenge, because individual opening/closing or blocking/unblocking events cannot be resolved, and the resulting averaging over undetected events decreases the single-channel current. Here, I briefly summarize the history of fast-current fluctuation analysis and focus on the so-called “beta distributions.” This tool exploits characteristics of current fluctuation-induced excess noise on the current amplitude histograms to reconstruct the true single-channel current and kinetic parameters. A guideline for the analysis and recent applications demonstrate that a construction of theoretical beta distributions by Markov Model simulations offers maximum flexibility as compared to analytical solutions. PMID:26368656
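The central effect exploited by beta-distribution analysis can be sketched numerically: when two-state current fluctuations are faster than the recording bandwidth, filtering collapses the trace toward the open-probability-weighted mean, reducing the apparent single-channel current and leaving broadened "excess noise" in the amplitude histogram. A minimal numpy sketch (the flip probability, noise level, and moving-average bandwidth model are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(7)
n, p_flip = 200_000, 0.2          # flip prob per sample -> ~5-sample dwells
s = np.empty(n)
s[0] = 1.0
for t in range(1, n):             # fast two-state (open/blocked) gating
    s[t] = s[t - 1] if rng.random() > p_flip else 1.0 - s[t - 1]
current = s + 0.02 * rng.standard_normal(n)   # true open level = 1.0

# Limited recording bandwidth modeled as a 50-sample moving average:
# individual opening/blocking events are no longer resolved.
w = 50
filtered = np.convolve(current, np.ones(w) / w, mode="valid")
# The filtered trace collapses toward the mean level (~0.5 for 50% open
# probability): the apparent single-channel amplitude is reduced, and the
# amplitude histogram shows the broadened, beta-like excess noise that the
# analysis uses to reconstruct the true current and kinetics.
```

Fitting theoretical beta distributions (e.g., from Markov-model simulations, as the review recommends) to that broadened histogram recovers the unfiltered amplitude and rate constants.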
NASA Technical Reports Server (NTRS)
Lauenstein, J.-M.; Casey, M. C.; Campola, M. A.; Phan, A. M.; Wilcox, E. P.; Topper, A. D.; Ladbury, R. L.
2017-01-01
This study was undertaken to determine the single event effect susceptibility of the commercial Vishay 60-V TrenchFET power MOSFET. Heavy-ion testing was conducted at the Texas A&M University Cyclotron Single Event Effects Test Facility (TAMU) and the Lawrence Berkeley National Laboratory BASE Cyclotron Facility (LBNL). In addition, initial 200-MeV proton testing was conducted at the Massachusetts General Hospital (MGH) Francis H. Burr Proton Beam Therapy Center. Testing was performed to evaluate this device for single-event effects from lower-LET, lighter ions relevant to higher-risk-tolerant space missions.
NASA Technical Reports Server (NTRS)
Perez, Christopher E.; Berg, Melanie D.; Friendlich, Mark R.
2011-01-01
The motivation for this work is to: (1) accurately characterize digital signal processor (DSP) core single-event effect (SEE) behavior; (2) test DSP cores across a large frequency range and across various input conditions; (3) isolate SEE analysis to the DSP cores alone; (4) interpret SEE analysis in terms of single-event upsets (SEUs) and single-event transients (SETs); and (5) provide flight missions with accurate estimates of DSP core error rates and error signatures.
Kernel PLS Estimation of Single-trial Event-related Potentials
NASA Technical Reports Server (NTRS)
Rosipal, Roman; Trejo, Leonard J.
2004-01-01
Nonlinear kernel partial least squares (KPLS) regression is a novel smoothing approach to nonparametric regression curve fitting. We have developed a KPLS approach to the estimation of single-trial event-related potentials (ERPs). For improved accuracy of estimation, we also developed a local KPLS method for situations in which there exists prior knowledge about the approximate latency of individual ERP components. To assess the utility of the KPLS approach, we compared non-local KPLS and local KPLS smoothing with other nonparametric signal processing and smoothing methods. In particular, we examined wavelet denoising, smoothing splines, and localized smoothing splines. We applied these methods to the estimation of simulated mixtures of human ERPs and ongoing electroencephalogram (EEG) activity using a dipole simulator (BESA). In this scenario we considered ongoing EEG to represent spatially and temporally correlated noise added to the ERPs. This simulation provided a reasonable but simplified model of real-world ERP measurements. For estimation of the simulated single-trial ERPs, local KPLS provided a level of accuracy that was comparable with or better than the other methods. We also applied the local KPLS method to the estimation of human ERPs recorded in an experiment on cognitive fatigue. For these data, the local KPLS method provided a clear improvement in visualization of single-trial ERPs as well as their averages. The local KPLS method may serve as a new alternative to the estimation of single-trial ERPs and improvement of ERP averages.
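The single-trial estimation problem above can be illustrated with one of the simpler nonparametric smoothers in the comparison family. This sketch (not KPLS itself; the synthetic ERP shape, noise model, and kernel bandwidth are illustrative assumptions) denoises a simulated single-trial epoch with a Gaussian-kernel smoother and checks the error against the known ground truth:

```python
import numpy as np

rng = np.random.default_rng(3)
t = np.linspace(0.0, 1.0, 500)                  # one 1 s epoch, 500 samples
erp = 5.0 * np.exp(-((t - 0.3) / 0.05) ** 2)    # synthetic ERP component
trial = erp + rng.standard_normal(t.size)       # ongoing-EEG-like noise

# Gaussian-kernel smoother: a stand-in for the nonparametric estimators
# compared in the study (bandwidth fixed by hand, not cross-validated).
taps = np.arange(-25, 26)
win = np.exp(-0.5 * (taps / 8.0) ** 2)
win /= win.sum()
estimate = np.convolve(trial, win, mode="same")

rmse_raw = np.sqrt(np.mean((trial - erp) ** 2))   # error of the raw trial
rmse_est = np.sqrt(np.mean((estimate - erp) ** 2))  # error after smoothing
```

Local KPLS plays the same role here but additionally exploits prior knowledge of component latency, which is why it can outperform fixed-bandwidth smoothers of this kind.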
NASA Technical Reports Server (NTRS)
Berg, Melanie; LaBel, Kenneth; Campola, Michael; Xapsos, Michael
2017-01-01
We are investigating the application of classical reliability performance metrics combined with standard single event upset (SEU) analysis data. We expect to relate SEU behavior to system performance requirements. Our proposed methodology will provide better prediction of SEU responses in harsh radiation environments with confidence metrics. Keywords: single event upset (SEU), single event effect (SEE), field programmable gate array devices (FPGAs).
USMC Inventory Control Using Optimization Modeling and Discrete Event Simulation
2016-09-01
Approved for public release; distribution is unlimited. ...optimization and discrete-event simulation. This construct can potentially provide an effective means of improving order management decisions. However
Assessment of Critical Events Corridors through Multivariate Cascading Outages Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Makarov, Yuri V.; Samaan, Nader A.; Diao, Ruisheng
2011-10-17
Massive blackouts of electrical power systems in North America over the past decade have focused increasing attention upon ways to identify and simulate network events that may potentially lead to widespread network collapse. This paper summarizes a method to simulate power-system vulnerability to cascading failures for a supplied set of initiating events, termed extreme events. The implemented simulation method is currently confined to simulating steady-state power-system response to a set of extreme events. The outlined method of simulation is meant to augment and provide new insight into bulk power transmission network planning, which at present remains mainly confined to maintaining power system security for single and double component outages under a number of projected future network operating conditions. Although one of the aims of this paper is to demonstrate the feasibility of simulating network vulnerability to cascading outages, a more important goal has been to determine vulnerable parts of the network that may potentially be strengthened in practice so as to mitigate system susceptibility to cascading failures. This paper proposes a systematic approach to analyze extreme events and identify vulnerable system elements that may be contributing to cascading outages. The hypothesis of critical events corridors is proposed to represent repeating sequential outages that can occur in the system for multiple initiating events. The new concept helps to identify system reinforcements that planners could engineer in order to 'break' the critical events sequences and therefore lessen the likelihood of cascading outages. This hypothesis has been successfully validated with a California power system model.
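The cascading-outage logic can be illustrated with a deliberately simplified load-redistribution model (uniform load shedding, no network topology or power flow; far cruder than the steady-state simulation used in the paper, and the loads and capacities below are invented):

```python
def cascade(loads, capacity, initiating):
    """Toy cascading-outage model: when lines fail, their load is shed
    equally onto the surviving lines; any line pushed past its capacity
    fails at the next step. Returns the ordered outage sequence, i.e. the
    kind of repeating sequence a 'critical events corridor' is built from."""
    loads = dict(loads)
    sequence = [sorted(initiating)]
    while True:
        shed = sum(loads[i] for i in sequence[-1])
        for i in sequence[-1]:
            del loads[i]
        if not loads:
            return sequence
        extra = shed / len(loads)
        for i in loads:
            loads[i] += extra
        tripped = sorted(i for i, load in loads.items() if load > capacity)
        if not tripped:
            return sequence
        sequence.append(tripped)

# One heavily loaded initiating line drags the remaining corridor down:
print(cascade({i: 90.0 if i == 0 else 80.0 for i in range(5)}, 100.0, [0]))
# → [[0], [1, 2, 3, 4]]
# A lighter initial loading absorbs the same initiating event:
print(cascade({i: 80.0 for i in range(5)}, 100.0, [0]))
# → [[0]]
```

Running such a model over many initiating events and collecting the outage sequences is the flavor of analysis behind identifying repeated corridors, though the paper's method rests on realistic steady-state power-flow solutions rather than uniform shedding.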
Topological events in single molecules of E. coli DNA confined in nanochannels
Reifenberger, Jeffrey G.; Dorfman, Kevin D.; Cao, Han
2015-01-01
We present experimental data concerning potential topological events such as folds, internal backfolds, and/or knots within long molecules of double-stranded DNA when they are stretched by confinement in a nanochannel. Genomic DNA from E. coli was labeled near the ‘GCTCTTC’ sequence with a fluorescently labeled dUTP analog and stained with the DNA intercalator YOYO. Individual long molecules of DNA were then linearized and imaged using methods based on the NanoChannel Array technology (Irys® System) available from BioNano Genomics. Data were collected on 189,153 molecules of length greater than 50 kilobases. A custom code was developed to search for abnormal intensity spikes in the YOYO backbone profile along the length of individual molecules. By correlating the YOYO intensity spikes with the aligned barcode pattern to the reference, we were able to correlate the bright intensity regions of YOYO with abnormal stretching in the molecule, which suggests these events were either a knot or a region of internal backfolding within the DNA. We interpret the results of our experiments involving molecules exceeding 50 kilobases in the context of existing simulation data for relatively short DNA, typically several kilobases. The frequency of these events is lower than the predictions from simulations, while the size of the events is larger than simulation predictions and often exceeds the molecular weight of the simulated molecules. We also identified DNA molecules that exhibit large, single folds as they enter the nanochannels. Overall, topological events occur at a low frequency (~7% of all molecules) and pose an easily surmountable obstacle for the practice of genome mapping in nanochannels. PMID:25991508
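The spike-search step can be sketched as a robust z-score test on the backbone intensity profile. This is a minimal stand-in for the custom code described (the synthetic profile, spike amplitude, and threshold are illustrative assumptions):

```python
import numpy as np

def find_spikes(profile, z_thresh=5.0):
    """Flag positions whose intensity is anomalously bright relative to a
    robust (median/MAD) estimate of the backbone staining level."""
    med = np.median(profile)
    mad = np.median(np.abs(profile - med)) * 1.4826  # ~sigma if Gaussian
    return np.flatnonzero((profile - med) / mad > z_thresh)

# Synthetic YOYO backbone: uniform staining plus noise, with one folded or
# knotted region (indices 700-704) carrying extra local intensity.
rng = np.random.default_rng(5)
profile = 100.0 + rng.standard_normal(2000)
profile[700:705] += 60.0
spikes = find_spikes(profile)
```

In the actual pipeline, flagged spikes are then cross-checked against the aligned barcode pattern for abnormal local stretching before being counted as a backfold or knot.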
THE STORM WATER MANAGEMENT MODEL (SWMM) AND RELATED WATERSHED TOOLS DEVELOPMENT
The Storm Water Management Model (SWMM) is a dynamic rainfall-runoff simulation model used for single event or long-term (continuous) simulation of runoff quantity and quality from primarily urban areas. It is the only publicly available model capable of performing a comprehensiv...
Storm Water Management Model Reference Manual Volume III – Water Quality
SWMM is a dynamic rainfall-runoff simulation model used for single event or long-term (continuous) simulation of runoff quantity and quality from primarily urban areas. The runoff component of SWMM operates on a collection of subcatchment areas that receive precipitation and gene...
Simulation Study of Single-Event Burnout in Power Trench ACCUFETs
NASA Astrophysics Data System (ADS)
Yu, Cheng-Hao; Wang, Ying; Fei, Xin-Xing; Cao, Fei
2016-10-01
This paper presents 2-D numerical simulation results of single-event burnout (SEB) in a power trench accumulation-mode field effect transistor (ACCUFET) for the first time. In this device, a p+ base region is used to deplete the n- base region to achieve a low leakage current density, and the blocking voltage is supported by the n- drift region. We find that the depth of the p+ base region determines both the leakage current density and the SEB performance; as a result, there is a tradeoff between the two characteristics. The 60 V hardened power ACCUFET shown in this paper demonstrates much better SEB performance than the standard UMOSFET without sacrificing current handling capability. In the hardened structure, an n buffer layer is added between the epitaxial layer and the substrate. As a result, the safe operating area (SOA) of the 60 V, 80 V and 100 V hardened ACCUFETs discussed in this paper can reach the breakdown voltage when the buffer layer exceeds a certain thickness, enabling safe operation throughout the entire LET range.
Estevez, Claudio; Kailas, Aravind
2012-01-01
Millimeter-wave technology shows high potential for future wireless personal area networks, reaching over 1 Gbps transmissions using simple modulation techniques. Current specifications consider dividing the spectrum into effortlessly separable spectrum ranges. These low requirements open a research area in time and space multiplexing techniques for millimeter-waves. In this work, a process-stacking multiplexing access algorithm is designed for single-channel operation. The concept is intuitive, but its implementation is not trivial. The key to stacking single-channel events is to operate while simultaneously obtaining and handling a-posteriori time-frame information of scheduled events. This information is used to shift a global time pointer that the wireless access point manages and uses to synchronize all serviced nodes. The performance of the proposed multiplexing access technique is lower bounded by the performance of legacy TDMA and can significantly improve the effective throughput. The work is validated by simulation results.
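The throughput advantage of stacking over fixed-slot TDMA can be illustrated with a toy span calculation (the slot size, guard time, and event durations below are hypothetical parameters, not taken from the paper):

```python
def legacy_tdma_span(durations, slot):
    """Legacy TDMA: every event is granted a full fixed slot, so the unused
    slot remainder is idle air time."""
    return slot * len(durations)

def stacked_span(durations, guard=0.0):
    """Process stacking: the access point's global time pointer advances by
    each event's actual duration (known a posteriori) plus a guard time, so
    the next event starts as soon as the channel is actually free."""
    pointer = 0.0
    for d in durations:
        pointer += d + guard
    return pointer

# Three events of 0.3, 0.5 and 0.9 ms in 1 ms slots vs. stacked scheduling:
durations = [0.3, 0.5, 0.9]
print(legacy_tdma_span(durations, slot=1.0))   # → 3.0
print(stacked_span(durations, guard=0.05))     # ~1.85
```

This matches the stated lower bound: with events exactly filling their slots the stacked span degenerates to the TDMA span, and it improves whenever events finish early.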
NASA Astrophysics Data System (ADS)
von Trentini, F.; Schmid, F. J.; Braun, M.; Brisette, F.; Frigon, A.; Leduc, M.; Martel, J. L.; Willkofer, F.; Wood, R. R.; Ludwig, R.
2017-12-01
Meteorological extreme events seem to be becoming more frequent in the present and future, and a separation of natural climate variability from a clear climate change effect on these extreme events is attracting increasing interest. Since there is only one realisation of historical events, observational data cannot provide the very long time series needed for a robust statistical analysis of natural variability. A new single model large ensemble (SMLE), developed for the ClimEx project (Climate change and hydrological extreme events - risks and perspectives for water management in Bavaria and Québec), is designed to overcome this lack of data by downscaling 50 members of the CanESM2 (RCP 8.5) with the Canadian CRCM5 regional model (using the EURO-CORDEX grid specifications) for time series spanning 1950-2099 each, resulting in 7500 years of simulated climate. This allows for a better probabilistic analysis of rare and extreme events than any preceding dataset. Besides seasonal sums, several extreme indicators such as R95pTOT, RX5day and others are calculated for the ClimEx ensemble and several EURO-CORDEX runs. This enables us to investigate the interaction between natural variability (as it appears in the CanESM2-CRCM5 members) and the climate change signal of those members for past, present and future conditions. Adding the EURO-CORDEX results to this, we can also assess the role of internal model variability (or natural variability) in climate change simulations. A first comparison shows similar magnitudes of variability of climate change signals between the ClimEx large ensemble and the CORDEX runs for some indicators, while for most indicators the spread of the SMLE is smaller than the spread of the different CORDEX models.
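The extreme indicators named above are simple reductions of daily precipitation series. Minimal sketches following the standard climdex definitions (the reference 95th percentile is passed in precomputed; wet-day screening of the base period is omitted for brevity):

```python
import numpy as np

def rx5day(precip):
    """RX5day: maximum consecutive 5-day precipitation total (mm)."""
    return np.convolve(precip, np.ones(5), mode="valid").max()

def r95ptot(precip, p95):
    """R95pTOT: total precipitation on days exceeding the reference 95th
    percentile of wet days (p95 computed from a base period beforehand)."""
    return precip[precip > p95].sum()

# Ten days of synthetic precipitation (mm/day)
daily = np.array([0, 2, 0, 10, 20, 30, 5, 0, 1, 0], float)
print(rx5day(daily))          # → 65.0  (the 10+20+30+5+0 window)
print(r95ptot(daily, 25.0))   # → 30.0  (only the 30 mm day exceeds p95)
```

Applied member-by-member across the 50-member SMLE, such indicator series are what allow the natural-variability spread to be separated from the forced change signal.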
Single Event Effect Testing of the Analog Devices ADV212
NASA Technical Reports Server (NTRS)
Wilcox, Ted; Campola, Michael; Kadari, Madhu; Nadendla, Seshagiri R.
2017-01-01
The Analog Devices ADV212 was initially tested for single event effects (SEE) at the Texas A&M University Cyclotron Facility (TAMU) in July of 2013. Testing revealed a sensitivity to device hang-ups classified as single event functional interrupts (SEFI), soft data errors classified as single event upsets (SEU), and, of particular concern, single event latch-ups (SEL). All error types occurred so frequently as to make accurate measurements of the exposure time, and thus total particle fluence, challenging. To mitigate some of the risk posed by single event latch-ups, circuitry was added to the electrical design to detect a high-current event and automatically cycle power and reboot the device. An additional heavy-ion test was scheduled to validate the operation of the recovery circuitry and the continuing functionality of the ADV212 after a substantial number of latch-up events. As a secondary goal, more precise data would be gathered by an improved test method, described in this test report.
Andrews, M. T.; Rising, M. E.; Meierbachtol, K.; ...
2018-06-15
When multiple neutrons are emitted in a fission event, they are correlated in both energy and their relative angle, which may impact the design of safeguards equipment and other instrumentation for non-proliferation applications. The most recent release of MCNP 6.2 contains the capability to simulate correlated fission neutrons using the event generators CGMF and FREYA. These radiation transport simulations will be post-processed by the detector response code, DRiFT, and compared directly to correlated fission measurements. DRiFT has been previously compared to single detector measurements; its capabilities have been recently expanded with correlated fission simulations in mind. Finally, this paper details updates to DRiFT specific to correlated fission measurements, including tracking the source particle energy of all detector events (and non-events), expanded output formats, and digitizer waveform generation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Andrews, M. T.; Rising, M. E.; Meierbachtol, K.
When multiple neutrons are emitted in a fission event, they are correlated in both energy and their relative angle, which may impact the design of safeguards equipment and other instrumentation for non-proliferation applications. The most recent release of MCNP 6.2 contains the capability to simulate correlated fission neutrons using the event generators CGMF and FREYA. These radiation transport simulations will be post-processed by the detector response code, DRiFT, and compared directly to correlated fission measurements. DRiFT has been previously compared to single detector measurements; its capabilities have been recently expanded with correlated fission simulations in mind. Finally, this paper details updates to DRiFT specific to correlated fission measurements, including tracking the source particle energy of all detector events (and non-events), expanded output formats, and digitizer waveform generation.
Can tokamaks PFC survive a single event of any plasma instabilities?
NASA Astrophysics Data System (ADS)
Hassanein, A.; Sizyuk, V.; Miloshevsky, G.; Sizyuk, T.
2013-07-01
Plasma instability events such as disruptions, edge-localized modes (ELMs), runaway electrons (REs), and vertical displacement events (VDEs) continue to be among the most serious and limiting factors for a successful tokamak reactor concept. The plasma-facing components (PFCs), e.g., the wall, divertor, and limiter surfaces of a tokamak, as well as coolant structure materials, are subjected to intense particle and heat loads and must maintain a clean and stable surface environment between them and the core/edge plasma. Typical ITER transient event parameters are used for assessing the damage from these four different instability events. HEIGHTS simulations showed that a single event of a disruption, giant ELM, VDE, or RE can cause significant surface erosion (melting and vaporization) damage to PFCs, nearby components, and/or structural materials, including (for VDEs and REs) melting and possible burnout of coolant tubes that could result in shutdown of the reactor for extended repair time.
Knowledge-based simulation for aerospace systems
NASA Technical Reports Server (NTRS)
Will, Ralph W.; Sliwa, Nancy E.; Harrison, F. Wallace, Jr.
1988-01-01
Knowledge-based techniques, which offer many features that are desirable in the simulation and development of aerospace vehicle operations, exhibit many similarities to traditional simulation packages. The eventual solution of these systems' current symbolic processing/numeric processing interface problem will lead to continuous and discrete-event simulation capabilities in a single language, such as TS-PROLOG. Qualitative, totally-symbolic simulation methods are noted to possess several intrinsic characteristics that are especially revelatory of the system being simulated, and capable of ensuring that all possible behaviors are considered.
Frequency Dependence of Single-Event Upset in Highly Advanced PowerPC Microprocessors
NASA Technical Reports Server (NTRS)
Irom, Farokh; Farmanesh, Farhad; White, Mark; Kouba, Coy K.
2006-01-01
Single-event upset effects from heavy ions were measured for a Motorola silicon-on-insulator (SOI) microprocessor with 90 nm feature size at three frequencies: 500, 1066, and 1600 MHz. The frequency dependence of single-event upsets is discussed. The results of our studies suggest that single-event upsets in registers and the D-Cache tend to increase with frequency. This may have important implications for the overall single-event upset trend as technology moves toward higher frequencies.
The Effect of Dust on the Martian Polar Vortices
NASA Technical Reports Server (NTRS)
Guzewich, Scott D.; Toigo, A. D.; Waugh, D. W.
2016-01-01
The influence of atmospheric dust on the dynamics and stability of the martian polar vortices is examined, through analysis of Mars Climate Sounder observations and MarsWRF general circulation model simulations. We show that regional and global dust storms produce transient vortex warming events that partially or fully disrupt the northern winter polar vortex for brief periods. Increased atmospheric dust heating alters the Hadley circulation and shifts the downwelling branch of the circulation poleward, leading to a disruption of the polar vortex for a period of days to weeks. Through our simulations, we find this effect is dependent on the atmospheric heating rate, which can be changed by increasing the amount of dust in the atmosphere or by altering the dust optical properties (e.g., single scattering albedo). Despite this, our simulations show that some level of atmospheric dust is necessary to produce a distinct northern hemisphere winter polar vortex.
The effect of dust on the martian polar vortices
NASA Astrophysics Data System (ADS)
Guzewich, Scott D.; Toigo, A. D.; Waugh, D. W.
2016-11-01
The influence of atmospheric dust on the dynamics and stability of the martian polar vortices is examined, through analysis of Mars Climate Sounder observations and MarsWRF general circulation model simulations. We show that regional and global dust storms produce "transient vortex warming" events that partially or fully disrupt the northern winter polar vortex for brief periods. Increased atmospheric dust heating alters the Hadley circulation and shifts the downwelling branch of the circulation poleward, leading to a disruption of the polar vortex for a period of days to weeks. Through our simulations, we find this effect is dependent on the atmospheric heating rate, which can be changed by increasing the amount of dust in the atmosphere or by altering the dust optical properties (e.g., single scattering albedo). Despite this, our simulations show that some level of atmospheric dust is necessary to produce a distinct northern hemisphere winter polar vortex.
Method for distributed agent-based non-expert simulation of manufacturing process behavior
Ivezic, Nenad; Potok, Thomas E.
2004-11-30
A method for distributed, agent-based, non-expert simulation of manufacturing process behavior on a single-processor computer comprises the steps of: object modeling a manufacturing technique having a plurality of processes; associating a distributed agent with each process; and programming each agent to respond to discrete events corresponding to the manufacturing technique, wherein each discrete event triggers a programmed response. The method can further comprise the step of transmitting the discrete events to each agent in a message loop. In addition, the programming step comprises the step of conditioning each agent to respond to a discrete event selected from the group consisting of a clock-tick message, a resources-received message, and a request-for-output-production message.
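The agent-and-message-loop pattern described above can be sketched in a few lines. This is an illustrative reading of the claim, not the patented implementation; the event names, class, and function below are all hypothetical.

```python
# Illustrative sketch (not the patented implementation): agents model
# manufacturing processes and respond to discrete events delivered in a loop.
CLOCK_TICK, RESOURCES_RECEIVED, OUTPUT_REQUEST = "tick", "resources", "request"

class ProcessAgent:
    def __init__(self, name):
        self.name = name
        self.inventory = 0
        self.produced = 0

    def handle(self, event):
        # Each discrete event triggers a programmed response.
        if event == CLOCK_TICK:
            pass                       # advance internal state once per tick
        elif event == RESOURCES_RECEIVED:
            self.inventory += 1
        elif event == OUTPUT_REQUEST and self.inventory > 0:
            self.inventory -= 1
            self.produced += 1

def message_loop(agents, events):
    """Transmit each discrete event to every agent, as in the message loop."""
    for event in events:
        for agent in agents:
            agent.handle(event)

agents = [ProcessAgent("milling"), ProcessAgent("assembly")]
message_loop(agents, [RESOURCES_RECEIVED, CLOCK_TICK, OUTPUT_REQUEST])
```

Because every agent sees every event, each process here receives one unit of resources and then produces one unit of output on request.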
Morrison, Abigail; Straube, Sirko; Plesser, Hans Ekkehard; Diesmann, Markus
2007-01-01
Very large networks of spiking neurons can be simulated efficiently in parallel under the constraint that spike times are bound to an equidistant time grid. Within this scheme, the subthreshold dynamics of a wide class of integrate-and-fire-type neuron models can be integrated exactly from one grid point to the next. However, the loss in accuracy caused by restricting spike times to the grid can have undesirable consequences, which has led to interest in interpolating spike times between the grid points to retrieve an adequate representation of network dynamics. We demonstrate that the exact integration scheme can be combined naturally with off-grid spike events found by interpolation. We show that by exploiting the existence of a minimal synaptic propagation delay, the need for a central event queue is removed, so that the precision of event-driven simulation on the level of single neurons is combined with the efficiency of time-driven global scheduling. Further, for neuron models with linear subthreshold dynamics, even local event queuing can be avoided, resulting in much greater efficiency on the single-neuron level. These ideas are exemplified by two implementations of a widely used neuron model. We present a measure for the efficiency of network simulations in terms of their integration error and show that for a wide range of input spike rates, the novel techniques we present are both more accurate and faster than standard techniques.
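The two ingredients of the scheme above, exact subthreshold propagation between grid points and an interpolated off-grid spike time, can be sketched minimally for a leaky integrate-and-fire neuron. This is an illustration of the idea, not the authors' implementation; the parameter values and helper names are hypothetical.

```python
import math

def lif_step(v, i_ext, dt=0.1, tau=10.0, r=1.0):
    # Exact subthreshold propagation over one grid step for a leaky
    # integrate-and-fire neuron driven by a constant current i_ext:
    # v(t+dt) = v*exp(-dt/tau) + r*i_ext*(1 - exp(-dt/tau))
    decay = math.exp(-dt / tau)
    return v * decay + r * i_ext * (1.0 - decay)

def spike_time_offset(v0, v1, theta, dt=0.1):
    # Off-grid spike time: linear interpolation of the threshold
    # crossing between the two grid points.
    return dt * (theta - v0) / (v1 - v0)

theta = 1.0
v0 = 0.9
v1 = lif_step(v0, i_ext=20.0)     # strong input forces a crossing
if v1 >= theta:
    t_spike = spike_time_offset(v0, v1, theta)   # lies strictly inside the step
```

The grid update itself is exact for linear subthreshold dynamics; only the spike time is approximated, which is what motivates interpolating between grid points.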
Stewart, James A.; Brookman, G.; Price, Patrick Michael; ...
2018-04-25
In this study, the evolution and characterization of single-isolated-ion-strikes are investigated by combining atomistic simulations with selected-area electron diffraction (SAED) patterns generated from these simulations. Five molecular dynamics simulations are performed for a single 20 keV primary knock-on atom in bulk crystalline Si. The resulting cascade damage is characterized in two complementary ways. First, the individual cascade events are conventionally quantified through the evolution of the number of defects and the atomic (volumetric) strain associated with these defect structures. These results show that (i) the radiation damage produced is consistent with the Norgett, Robinson, and Torrens model of damage production and (ii) there is a net positive volumetric strain associated with the cascade structures. Second, virtual SAED patterns are generated for the resulting cascade-damaged structures along several zone axes. The analysis of the corresponding diffraction patterns shows the SAED spots approximately doubling in size, on average, due to broadening induced by the defect structures. Furthermore, the SAED spots are observed to exhibit an average radial outward shift between 0.33% and 0.87% depending on the zone axis. Finally, this characterization approach, as utilized here, is a preliminary investigation in developing methodologies and opportunities to link experimental observations with atomistic simulations to elucidate microstructural damage states.
NASA Astrophysics Data System (ADS)
Stewart, J. A.; Brookman, G.; Price, P.; Franco, M.; Ji, W.; Hattar, K.; Dingreville, R.
2018-04-01
The evolution and characterization of single-isolated-ion-strikes are investigated by combining atomistic simulations with selected-area electron diffraction (SAED) patterns generated from these simulations. Five molecular dynamics simulations are performed for a single 20 keV primary knock-on atom in bulk crystalline Si. The resulting cascade damage is characterized in two complementary ways. First, the individual cascade events are conventionally quantified through the evolution of the number of defects and the atomic (volumetric) strain associated with these defect structures. These results show that (i) the radiation damage produced is consistent with the Norgett, Robinson, and Torrens model of damage production and (ii) there is a net positive volumetric strain associated with the cascade structures. Second, virtual SAED patterns are generated for the resulting cascade-damaged structures along several zone axes. The analysis of the corresponding diffraction patterns shows the SAED spots approximately doubling in size, on average, due to broadening induced by the defect structures. Furthermore, the SAED spots are observed to exhibit an average radial outward shift between 0.33% and 0.87% depending on the zone axis. This characterization approach, as utilized here, is a preliminary investigation in developing methodologies and opportunities to link experimental observations with atomistic simulations to elucidate microstructural damage states.
Single-Event Effect Testing of the Cree C4D40120D Commercial 1200V Silicon Carbide Schottky Diode
NASA Technical Reports Server (NTRS)
Lauenstein, J.-M.; Casey, M. C.; Wilcox, E. P.; Kim, Hak; Topper, A. D.
2014-01-01
This study was undertaken to determine the single event effect (SEE) susceptibility of the commercial silicon carbide 1200V Schottky diode manufactured by Cree, Inc. Heavy-ion testing was conducted at the Texas A&M University Cyclotron Single Event Effects Test Facility (TAMU). Its purpose was to evaluate this device as a candidate for use in the Solar-Electric Propulsion flight project.
Masbruch, Melissa D.; Rumsey, Christine; Gangopadhyay, Subhrendu; Susong, David D.; Pruitt, Tom
2016-01-01
There has been a considerable amount of research linking climatic variability to hydrologic responses in the western United States. Although much effort has been spent to assess and predict changes in surface water resources, little has been done to understand how climatic events and changes affect groundwater resources. This study focuses on characterizing and quantifying the effects of large, multiyear, quasi-decadal groundwater recharge events in the northern Utah portion of the Great Basin for the period 1960–2013. Annual groundwater level data were analyzed with climatic data to characterize climatic conditions and frequency of these large recharge events. Using observed water-level changes and multivariate analysis, five large groundwater recharge events were identified with a frequency of about 11–13 years. These events were generally characterized as having above-average annual precipitation and snow water equivalent and below-average seasonal temperatures, especially during the spring (April through June). Existing groundwater flow models for several basins within the study area were used to quantify changes in groundwater storage from these events. Simulated groundwater storage increases per basin from a single recharge event ranged from about 115 to 205 million cubic meters (Mm³). Extrapolating these amounts over the entire northern Great Basin indicates that a single large quasi-decadal recharge event could result in billions of cubic meters of groundwater storage. Understanding the role of these large quasi-decadal recharge events in replenishing aquifers and sustaining water supplies is crucial for long-term groundwater management.
Heavy Ion Irradiation Fluence Dependence for Single-Event Upsets in a NAND Flash Memory
NASA Technical Reports Server (NTRS)
Chen, Dakai; Wilcox, Edward; Ladbury, Raymond L.; Kim, Hak; Phan, Anthony; Seidleck, Christina; Label, Kenneth
2016-01-01
We investigated the single-event effect (SEE) susceptibility of the Micron 16 nm NAND flash and found that the single-event upset (SEU) cross section varied inversely with cumulative fluence. We attribute the effect to the variable upset sensitivities of the memory cells. Furthermore, the effect generally impacts only single-cell upsets; the rate of multiple-bit upsets remained relatively constant with fluence. The current test standards and procedures assume that SEUs follow a Poisson process and do not take into account the variability of the error rate with fluence. Therefore, traditional SEE testing techniques may underestimate the on-orbit event rate for a device with variable upset sensitivity.
NASA Astrophysics Data System (ADS)
Wang, Mingna; Yan, Xiaodong; Liu, Jiyuan; Zhang, Xuezhen
2013-11-01
This paper addresses the contribution of urban land use change to near-surface air temperature during the summer extreme heat events of the early twenty-first century in the Beijing-Tianjin-Hebei metropolitan area. This study uses the Weather Research and Forecasting (WRF) model with a single urban canopy model and the newest actual urban cover datasets. The results show that urban land use characteristics that have evolved over the past ~20 years in the Beijing-Tianjin-Hebei metropolitan area have had a significant impact on the extreme temperatures occurring during extreme heat events. Simulations show that new urban development has caused an intensification and expansion of the areas experiencing extreme heat waves, with an average increase in temperature of approximately 0.60 °C. This change is most obvious at night, with an increase of up to 0.95 °C, of which the total contribution of anthropogenic heat is 34%. We also simulate the effects of a geo-engineering strategy that increases the albedo of urban roofs, an effective way of reducing the urban heat island, which can reduce the urban mean temperature by approximately 0.51 °C and offset approximately 80% of the heat-wave intensification resulting from urban sprawl during the last 20 years.
Cigrand, Charles V.
2018-03-26
The U.S. Geological Survey (USGS), in cooperation with the city of West Branch and the Herbert Hoover National Historic Site of the National Park Service, assessed flood-mitigation scenarios within the West Branch Wapsinonoc Creek watershed. The scenarios are intended to demonstrate several means of decreasing peak streamflows and improving the conveyance of overbank flows from the West Branch Wapsinonoc Creek and its tributary Hoover Creek where they flow through the city and the Herbert Hoover National Historic Site located within the city. Hydrologic and hydraulic models of the watershed were constructed to assess the flood-mitigation scenarios. To accomplish this, the models used the U.S. Army Corps of Engineers Hydrologic Engineering Center-Hydrologic Modeling System (HEC–HMS) version 4.2 to simulate the amount of runoff and streamflow produced from single rain events. The Hydrologic Engineering Center-River Analysis System (HEC–RAS) version 5.0 was then used to construct an unsteady-state model that may be used for routing streamflows, mapping areas that may be inundated during floods, and simulating the effects of different measures taken to decrease the effects of floods on people and infrastructure. Both models were calibrated to three historic rainfall events that produced peak streamflows ranging between the 2-year and 10-year flood-frequency recurrence intervals at the USGS streamgage (05464942) on Hoover Creek. The historic rainfall events were calibrated by using data from two USGS streamgages along with surveyed high-water marks from one of the events. The calibrated HEC–HMS model was then used to simulate streamflows from design rainfall events of 24-hour duration ranging from a 20-percent to a 1-percent annual exceedance probability. These simulated streamflows were incorporated into the HEC–RAS model. The unsteady-state HEC–RAS model was calibrated to represent existing conditions within the watershed.
HEC–RAS model simulations with the existing conditions and streamflows from the design rainfall events were then done to serve as a baseline for evaluating flood-mitigation scenarios. After these simulations were completed, three different flood-mitigation scenarios were developed with HEC–RAS: a detention-storage scenario, a conveyance improvement scenario, and a combination of both. In the detention-storage scenario, four in-channel detention structures were placed upstream from the city of West Branch to attenuate peak streamflows. To investigate possible improvements to conveying floodwaters through the city of West Branch, a section of abandoned railroad embankment and an old truss bridge were removed in the model, because these structures were producing backwater areas during flooding events. The third scenario combines the detention and conveyance scenarios so their joint efficiency could be evaluated. The scenarios with the design rainfall events were run in the HEC–RAS model so their flood-mitigation effects could be analyzed across a wide range of flood magnitudes.
NASA Astrophysics Data System (ADS)
Moncoulon, D.; Labat, D.; Ardon, J.; Onfroy, T.; Leblois, E.; Poulard, C.; Aji, S.; Rémy, A.; Quantin, A.
2013-07-01
The analysis of flood exposure at a national scale for the French insurance market must combine the generation of a probabilistic event set of all possible, but not yet observed, flood situations with hazard and damage modeling. In this study, hazard and damage models are calibrated on a 1995-2012 historical event set, both for hazard results (river flow, flooded areas) and loss estimations. Thus, uncertainties in the deterministic estimation of a single event loss are known before simulating a probabilistic event set. To take into account at least 90% of the insured flood losses, the probabilistic event set must combine river overflow (small and large catchments) with surface runoff due to heavy rainfall on the slopes of the watershed. Indeed, internal studies of the CCR claims database have shown that approximately 45% of the insured flood losses are located inside the floodplains and 45% outside; the remaining 10% are due to sea-surge floods and groundwater rise. In this approach, two independent probabilistic methods are combined to create a single flood loss distribution: generation of fictive river flows based on the historical records of the river gauge network, and generation of fictive rain fields on small catchments, calibrated on the 1958-2010 Météo-France rain database SAFRAN. All the events in the probabilistic event sets are simulated with the deterministic model. This hazard and damage distribution is used to simulate the flood losses at the national scale for an insurance company (MACIF) and to generate flood areas associated with hazard return periods. The flood maps concern river overflow and surface water runoff. Validation of these maps is conducted by comparison with the address-located claim data on a small catchment (downstream Argens).
Network hydraulics inclusion in water quality event detection using multiple sensor stations data.
Oliker, Nurit; Ostfeld, Avi
2015-09-01
Event detection is one of the most challenging current topics in water distribution systems analysis: how regular on-line hydraulic (e.g., pressure, flow) and water quality (e.g., pH, residual chlorine, turbidity) measurements at different network locations can be efficiently utilized to detect water quality contamination events. This study describes an integrated event detection model which combines data from multiple sensor stations with network hydraulics. To date, event detection modelling has typically been limited to a single sensor station location and dataset. Single sensor station models are detached from network hydraulics insights and as a result might be significantly exposed to false positive alarms. This work is aimed at decreasing this limitation by integrating local and spatial hydraulic data understanding into an event detection model. The spatial analysis complements the local event detection effort by discovering events with lower signatures through exploration of the sensors' mutual hydraulic influences. The unique contribution of this study is in incorporating hydraulic simulation information into the overall event detection process of spatially distributed sensors. The methodology is demonstrated on two example applications using base runs and sensitivity analyses. Results show a clear advantage of the suggested model over single-sensor event detection schemes. Copyright © 2015 Elsevier Ltd. All rights reserved.
On the use of Lineal Energy Measurements to Estimate Linear Energy Transfer Spectra
NASA Technical Reports Server (NTRS)
Adams, David A.; Howell, Leonard W., Jr.; Adam, James H., Jr.
2007-01-01
This paper examines the error resulting from using a lineal energy spectrum to represent a linear energy transfer spectrum for applications in the space radiation environment. Lineal energy and linear energy transfer spectra are compared in three diverse but typical space radiation environments. Different detector geometries are also studied to determine how they affect the error. LET spectra are typically used to compute dose equivalent for radiation hazard estimation and single event effect rates to estimate radiation effects on electronics. The errors in the estimations of dose equivalent and single event rates that result from substituting lineal energy spectra for linear energy spectra are examined. It is found that this substitution has little effect on dose equivalent estimates in the quiet-time interplanetary environment, regardless of detector shape. The substitution has more of an effect when the environment is dominated by solar energetic particles or trapped radiation, but even then the errors are minor, especially if a spherical detector is used. For single event rate estimation, the effect of the substitution can be large if the threshold for the single event effect is near where the linear energy spectrum drops suddenly. It is judged that single event rate estimates made from lineal energy spectra are unreliable, and the use of lineal energy spectra for single event rate estimation should be avoided.
Kaabi, Mohamed Ghaith; Tonnelier, Arnaud; Martinez, Dominique
2011-05-01
In traditional event-driven strategies, spike timings are analytically given or calculated with arbitrary precision (up to machine precision). Exact computation is possible only for simplified neuron models, mainly the leaky integrate-and-fire model. In a recent paper, Zheng, Tonnelier, and Martinez (2009) introduced an approximate event-driven strategy, named voltage stepping, that allows the generic simulation of nonlinear spiking neurons. Promising results were achieved in the simulation of single quadratic integrate-and-fire neurons. Here, we assess the performance of voltage stepping in network simulations by considering more complex neurons (quadratic integrate-and-fire neurons with adaptation) coupled with multiple synapses. To handle the discrete nature of synaptic interactions, we recast voltage stepping in a general framework, the discrete event system specification. The efficiency of the method is assessed through simulations and comparisons with a modified time-stepping scheme of the Runge-Kutta type. We demonstrated numerically that the original order of voltage stepping is preserved when simulating connected spiking neurons, independent of the network activity and connectivity.
Lin, Chih-Hao; Kao, Chung-Yao; Huang, Chong-Ye
2015-01-01
Ambulance diversion (AD) is considered one of the possible solutions to relieve emergency department (ED) overcrowding. Study of the effectiveness of various AD strategies is a prerequisite for policy-making. Our aim is to develop a tool that quantitatively evaluates the effectiveness of various AD strategies. A simulation model and a computer simulation program were developed. Three sets of simulations were executed to evaluate AD initiating criteria, patient-blocking rules, and AD intervals, respectively. The crowdedness index, the patient waiting time for service, and the percentage of adverse patients were assessed to determine the effect of various AD policies. Simulation results suggest that, in a certain setting, the best timing for implementing AD is when the crowdedness index reaches the critical value of 1.0, an indicator that the ED is operating at its maximal capacity. The strategy of diverting all patients transported by ambulance is more effective than diverting either high-acuity patients only or low-acuity patients only. Given a total allowable AD duration, implementing AD multiple times with short intervals generally has a better effect than having a single AD with the maximal allowable duration. An input-throughput-output simulation model is proposed for simulating ED operation. The effectiveness of several AD strategies in relieving ED overcrowding was assessed via computer simulations based on this model. With appropriate parameter settings, the model can represent medical resource providers of different scales. It is also feasible to expand the simulations to evaluate the effect of AD strategies on a community basis. The results may offer insights for making effective AD policies. Copyright © 2012. Published by Elsevier B.V.
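As a toy illustration of this kind of input-throughput-output model, the sketch below implements a discrete-event queue in which ambulance arrivals are diverted whenever a crowdedness index (occupancy divided by capacity) reaches 1.0. All rates, the capacity, and the function names are hypothetical, not values from the study.

```python
import heapq
import random

def simulate_ed(capacity=10, arrival_rate=1.2, service_rate=0.12,
                horizon=500.0, divert_at=1.0, seed=1):
    """Toy discrete-event ED model: divert an arrival whenever the
    crowdedness index (occupancy / capacity) reaches `divert_at`."""
    rng = random.Random(seed)
    events = [(rng.expovariate(arrival_rate), "arrival")]   # (time, kind) heap
    occupancy = served = diverted = 0
    while events:
        t, kind = heapq.heappop(events)
        if t > horizon:
            break
        if kind == "arrival":
            # schedule the next arrival
            heapq.heappush(events, (t + rng.expovariate(arrival_rate), "arrival"))
            if occupancy / capacity >= divert_at:
                diverted += 1          # AD in effect: block this arrival
            else:
                occupancy += 1
                heapq.heappush(events, (t + rng.expovariate(service_rate), "departure"))
        else:                          # departure
            occupancy -= 1
            served += 1
    return served, diverted

served, diverted = simulate_ed()
```

Varying `divert_at`, the blocking rule, or the diversion interval in such a skeleton is the kind of experiment the study automates at much greater fidelity.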
ERIC Educational Resources Information Center
Sanocki, Thomas; Sulman, Noah
2013-01-01
Three experiments measured the efficiency of monitoring complex scenes composed of changing objects, or events. All events lasted about 4 s, but in a given block of trials, could be of a single type (single task) or of multiple types (multitask, with a total of four event types). Overall accuracy of detecting target events amid distractors was…
NASA Astrophysics Data System (ADS)
Abdullah, J.; Zaini, S. S.; Aziz, M. S. A.; Majid, T. A.; Deraman, S. N. C.; Yahya, W. N. W.
2018-04-01
Single-storey houses are classified as low-rise buildings and are vulnerable to damage under windstorm events. This study was carried out with the aim of investigating the pressure distribution and streamlines around an isolated house while considering the effect of terrain characteristics. Topographic features such as flat, depression, ridge, and valley terrain are considered in this study. The simulations were analysed with the Ansys FLUENT 14.0 software package. The results showed that the topographic characteristics influence the pressure coefficient and streamlines, especially when the house was located on ridge terrain. The findings strongly suggest that wind analysis should include all topographic features in order to establish the true wind force exerted on any structure.
Berthias, F; Feketeová, L; Abdoul-Carime, H; Calvo, F; Farizon, B; Farizon, M; Märk, T D
2018-06-22
Velocity distributions of neutral water molecules evaporated after collision induced dissociation of protonated water clusters H+(H2O)n≤10 were measured using the combined correlated ion and neutral fragment time-of-flight (COINTOF) and velocity map imaging (VMI) techniques. As observed previously, all measured velocity distributions exhibit two contributions, with a low velocity part identified by statistical molecular dynamics (SMD) simulations as events obeying the Maxwell-Boltzmann statistics and a high velocity contribution corresponding to non-ergodic events in which energy redistribution is incomplete. In contrast to earlier studies, where the evaporation of a single molecule was probed, the present study is concerned with events involving the evaporation of up to five water molecules. In particular, we discuss here in detail the cases of two and three evaporated molecules. Evaporation of several water molecules after CID can be interpreted in general as a sequential evaporation process. In addition to the SMD calculations, a Monte Carlo (MC) based simulation was developed allowing the reconstruction of the velocity distribution produced by the evaporation of m molecules from H+(H2O)n≤10 cluster ions, using the measured velocity distributions for singly evaporated molecules as the input. The observed broadening of the low-velocity part of the distributions for the evaporation of two and three molecules, as compared to the width for the evaporation of a single molecule, results from the cumulative recoil velocity of the successive ion residues as well as the intrinsically broader distributions for decreasingly smaller parent clusters. Further MC simulations were carried out assuming that a certain proportion of non-ergodic events is responsible for the first evaporation in such a sequential evaporation series, thereby allowing the entire velocity distribution to be modeled.
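The Monte Carlo reconstruction idea can be sketched as follows: sample single-evaporation velocities, accumulate the residue's recoil via momentum conservation, and compare the resulting speed spread for one versus two evaporations. A Maxwell-Boltzmann-like distribution stands in for the measured single-evaporation histogram, and the mass ratio is an illustrative value, not a parameter from the paper.

```python
import math
import random

rng = random.Random(42)

def sample_single_speed(scale=1.0):
    # Stand-in for the measured single-evaporation speed distribution
    # (Maxwell-Boltzmann-like; the real input is the measured histogram).
    return math.sqrt(sum(rng.gauss(0.0, scale) ** 2 for _ in range(3)))

def random_unit_vector():
    while True:
        v = [rng.gauss(0.0, 1.0) for _ in range(3)]
        n = math.sqrt(sum(x * x for x in v))
        if n > 1e-12:
            return [x / n for x in v]

def evaporation_speeds(n_evap, mass_ratio=0.3):
    """Lab-frame speeds of successively evaporated molecules: each molecule
    leaves with a sampled velocity relative to the residue, and the residual
    cluster ion recoils (momentum conservation), shifting later evaporations.
    mass_ratio (molecule mass / residue mass) is illustrative."""
    recoil = [0.0, 0.0, 0.0]
    speeds = []
    for _ in range(n_evap):
        u = random_unit_vector()
        s = sample_single_speed()
        v_lab = [recoil[i] + s * u[i] for i in range(3)]
        speeds.append(math.sqrt(sum(x * x for x in v_lab)))
        # residue recoils opposite to the evaporated molecule
        recoil = [recoil[i] - mass_ratio * s * u[i] for i in range(3)]
    return speeds

one = [evaporation_speeds(1)[0] for _ in range(20000)]
two = [evaporation_speeds(2)[1] for _ in range(20000)]

def std(xs):
    m = sum(xs) / len(xs)
    return math.sqrt(sum((x - m) ** 2 for x in xs) / len(xs))
```

The second-evaporation speed distribution comes out broader than the first, reproducing qualitatively the cumulative-recoil broadening the abstract describes.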
A SEU-Hard Flip-Flop for Antifuse FPGAs
NASA Technical Reports Server (NTRS)
Katz, R.; Wang, J. J.; McCollum, J.; Cronquist, B.; Chan, R.; Yu, D.; Kleyner, I.; Day, John H. (Technical Monitor)
2001-01-01
A single event upset (SEU)-hardened flip-flop has been designed and developed for antifuse Field Programmable Gate Array (FPGA) application. Design and application issues, testability, test methods, simulation, and results are discussed.
NASA Astrophysics Data System (ADS)
Palit, Sourav; Chakrabarti, Sandip Kumar; Pal, Sujay; Das, Bakul; Ray, Suman
2016-07-01
The Very Low Frequency (VLF) signal at any location on Earth's surface depends strongly on the interference of various propagation modes. The modulation effects on a VLF signal due to any terrestrial or extra-terrestrial event vary widely from one propagation path to another, depending on the interference patterns along these paths. The task of predicting or reproducing the modulation of signal amplitude or phase between any two transmitting and receiving stations is challenging. In this work we present results of modeling the VLF signal amplitudes from five different transmitters as observed at a single receiving station in India during a C9.3 class solar flare. In this model we simulate the ionization rates at lower ionospheric heights from actual flare spectra with the GEANT4 Monte Carlo simulation code and find the equilibrium ion densities with a D-region ion-chemistry model. We find the signal amplitude variation along different propagation paths with the LWPC code. Such efforts are essential for an appropriate understanding of VLF propagation in the Earth-ionosphere waveguide and to achieve the desired accuracy when using Earth's ionosphere as an efficient detector of such extra-terrestrial ionization events.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rutqvist, Jonny; Rinaldi, Antonio P.; Cappa, Frédéric
2013-07-01
We have conducted numerical simulation studies to assess the potential for injection-induced fault reactivation and notable seismic events associated with shale-gas hydraulic fracturing operations. The modeling is generally tuned towards conditions usually encountered in the Marcellus shale play in the Northeastern US at an approximate depth of 1500 m (~4,500 feet). Our modeling simulations indicate that when faults are present, micro-seismic events are possible, the magnitude of which is somewhat larger than the one associated with micro-seismic events originating from regular hydraulic fracturing because of the larger surface area that is available for rupture. The results of our simulations indicated fault rupture lengths of about 10 to 20 m, which, in rare cases can extend to over 100 m, depending on the fault permeability, the in situ stress field, and the fault strength properties. In addition to a single event rupture length of 10 to 20 m, repeated events and aseismic slip amounted to a total rupture length of 50 m, along with a shear offset displacement of less than 0.01 m. This indicates that the possibility of hydraulically induced fractures at great depth (thousands of meters) causing activation of faults and creation of a new flow path that can reach shallow groundwater resources (or even the surface) is remote. The expected low permeability of faults in producible shale is clearly a limiting factor for the possible rupture length and seismic magnitude. In fact, for a fault that is initially nearly-impermeable, the only possibility of a larger fault slip event would be opening by hydraulic fracturing; this would allow pressure to penetrate the matrix along the fault and to reduce the frictional strength over a sufficiently large fault surface patch.
However, our simulation results show that if the fault is initially impermeable, hydraulic fracturing along the fault results in numerous small micro-seismic events as the fracture propagates, effectively preventing larger events from occurring. Nevertheless, care should be taken with continuous monitoring of induced seismicity during the entire injection process to detect any runaway fracturing along faults.
Sequential parallel comparison design with binary and time-to-event outcomes.
Silverman, Rachel Kloss; Ivanova, Anastasia; Fine, Jason
2018-04-30
Sequential parallel comparison design (SPCD) has been proposed to increase the likelihood of success of clinical trials, especially trials with a potentially high placebo effect. SPCD is conducted in 2 stages. Participants are randomized between active therapy and placebo in stage 1. Then, stage 1 placebo nonresponders are rerandomized between active therapy and placebo. Data from the 2 stages are pooled to yield a single P value. We consider SPCD with binary and with time-to-event outcomes. For time-to-event outcomes, response is defined as a favorable event prior to the end of follow-up for a given stage of SPCD. We show that for these cases, the usual test statistics from stages 1 and 2 are asymptotically normal and uncorrelated under the null hypothesis, leading to a straightforward combined testing procedure. In addition, we show that the estimators of the treatment effects from the 2 stages are asymptotically normal and uncorrelated under the null and alternative hypotheses, yielding confidence interval procedures with correct coverage. Simulations and real data analysis demonstrate the utility of the binary and time-to-event SPCD. Copyright © 2018 John Wiley & Sons, Ltd.
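The combined testing procedure described above is simple to sketch: if the stage-1 and stage-2 statistics are asymptotically N(0,1) and uncorrelated under the null, any fixed weighted combination, renormalized to unit variance, is again standard normal. A minimal illustration, assuming a pre-specified design weight w (the value 0.6 below is invented, not taken from the paper):

```python
import math

def spcd_combined_z(z1, z2, w=0.6):
    """Combine stage-1 and stage-2 z-statistics of an SPCD trial.

    Under the null, z1 and z2 are asymptotically N(0,1) and uncorrelated,
    so the renormalized weighted sum below is again standard normal.
    The weight w is a design choice fixed before the trial (assumed here).
    """
    return (w * z1 + (1 - w) * z2) / math.sqrt(w**2 + (1 - w)**2)

def p_value_one_sided(z):
    # Upper-tail p-value via the standard normal survival function.
    return 0.5 * math.erfc(z / math.sqrt(2))
```

In practice the weight would be chosen at the design stage to balance the information contributed by the two stages.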
Resource Contention Management in Parallel Systems
1989-04-01
technical competence include communications, command and control, battle management, information processing, surveillance sensors, intelligence data ...two-simulation approach since they require only a single simulation run. More importantly, since they involve only observed data, they may also be...we use the original, unobservable RAC of Section 2 and handle unobservable transitions by generating artificial events, when required, using a random
NASA Astrophysics Data System (ADS)
Jin, Wang; Penington, Catherine J.; McCue, Scott W.; Simpson, Matthew J.
2016-10-01
Two-dimensional collective cell migration assays are used to study cancer and tissue repair. These assays involve combined cell migration and cell proliferation processes, both of which are modulated by cell-to-cell crowding. Previous discrete models of collective cell migration assays involve a nearest-neighbour proliferation mechanism where crowding effects are incorporated by aborting potential proliferation events if the randomly chosen target site is occupied. There are two limitations of this traditional approach: (i) it seems unreasonable to abort a potential proliferation event based on the occupancy of a single, randomly chosen target site; and, (ii) the continuum limit description of this mechanism leads to the standard logistic growth function, but some experimental evidence suggests that cells do not always proliferate logistically. Motivated by these observations, we introduce a generalised proliferation mechanism which allows non-nearest neighbour proliferation events to take place over a template of r ≥ 1 concentric rings of lattice sites. Further, the decision to abort potential proliferation events is made using a crowding function, f(C), which accounts for the density of agents within a group of sites rather than dealing with the occupancy of a single randomly chosen site. Analysing the continuum limit description of the stochastic model shows that the standard logistic source term, λ C(1-C), where λ is the proliferation rate, is generalised to a universal growth function, λ C f(C). Comparing the solution of the continuum description with averaged simulation data indicates that the continuum model performs well for many choices of f(C) and r. For nonlinear f(C), the quality of the continuum-discrete match increases with r.
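The continuum limit described above is easy to check numerically: with the crowding function f(C) = 1 - C the generalised source term λ C f(C) reduces to standard logistic growth, while other choices of f change the saturation behaviour. A minimal sketch with illustrative parameter values (forward Euler; the specific λ, C0, and nonlinear f below are assumptions, not the paper's):

```python
import math

def grow(f, lam=1.0, C0=0.05, T=10.0, dt=1e-3):
    """Integrate the continuum limit dC/dt = lam * C * f(C) by forward Euler.
    f is the crowding function; f(C) = 1 - C recovers logistic growth."""
    C = C0
    for _ in range(int(T / dt)):
        C += dt * lam * C * f(C)
    return C

# Logistic case has the closed form C(t) = 1 / (1 + (1/C0 - 1) * exp(-lam * t)).
C_num = grow(lambda C: 1.0 - C)
C_exact = 1.0 / (1.0 + (1.0 / 0.05 - 1.0) * math.exp(-10.0))

# A nonlinear crowding function, e.g. f(C) = (1 - C)**2, saturates more slowly.
C_slow = grow(lambda C: (1.0 - C) ** 2)
```

The numerical solution of the logistic case matches the closed form, while the nonlinear crowding function lags behind it, illustrating how f(C) reshapes the growth curve.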
Peng, Ting; Sun, Xiaochun; Mumm, Rita H
2014-01-01
From a breeding standpoint, multiple trait integration (MTI) is a four-step process of converting an elite variety/hybrid for value-added traits (e.g. transgenic events) using backcross breeding, ultimately regaining the performance attributes of the target hybrid along with reliable expression of the value-added traits. In the light of the overarching goal of recovering equivalent performance in the finished conversion, this study focuses on the first step of MTI, single event introgression, exploring the feasibility of marker-aided backcross conversion of a target maize hybrid for 15 transgenic events, incorporating eight events into the female hybrid parent and seven into the male parent. Single event introgression is conducted in parallel streams to convert the recurrent parent (RP) for individual events, with the primary objective of minimizing residual non-recurrent parent (NRP) germplasm, especially in the chromosomal proximity to the event (i.e. linkage drag). In keeping with a defined lower limit of 96.66 % overall RP germplasm recovery (i.e. ≤120 cM NRP germplasm given a genome size of 1,788 cM), a breeding goal for each of the 15 single event conversions was developed: <8 cM of residual NRP germplasm across the genome with ~1 cM in the 20 cM region flanking the event. Using computer simulation, we aimed to identify optimal breeding strategies for single event introgression to achieve this breeding goal, measuring efficiency in terms of number of backcross generations required, marker data points needed, and total population size across generations. Various selection schemes classified as three-stage, modified two-stage, and combined selection conducted from BC1 through BC3, BC4, or BC5 were compared. 
The breeding goal was achieved with a selection scheme involving five generations of marker-aided backcrossing, with BC1 through BC3 selected for the event of interest and minimal linkage drag at population size of 600, and BC4 and BC5 selected for the event of interest and recovery of the RP germplasm across the genome at population size of 400, with selection intensity of 0.01 for all generations. In addition, strategies for choice of donor parent to facilitate conversion efficiency and quality were evaluated. Two essential criteria for choosing an optimal donor parent for a given RP were established: introgression history showing reduction of linkage drag to ~1 cM in the 20 cM region flanking the event and genetic similarity between the RP and potential donor parents. Computer simulation demonstrated that single event conversions with <8 cM residual NRP germplasm can be accomplished by BC5 with no genetic similarity, by BC4 with 30 % genetic similarity, and by BC3 with 86 % genetic similarity using previously converted RPs as event donors. This study indicates that MTI to produce a 'quality' 15-event-stacked hybrid conversion is achievable. Furthermore, it lays the groundwork for a comprehensive approach to MTI by outlining a pathway to produce appropriate starting materials with which to proceed with event pyramiding and trait fixation before version testing.
NASA Astrophysics Data System (ADS)
Chao, Y.; Cheng, C. T.; Hsiao, Y. H.; Hsu, C. T.; Yeh, K. C.; Liu, P. L.
2017-12-01
An average of 5.3 typhoons struck Taiwan per year over the last decade. Typhoon Morakot in 2009, the most severe, caused enormous damage in Taiwan, including 677 casualties and roughly NT$110 billion (3.3 billion USD) in economic losses. Several studies have documented that typhoon frequency will decrease but typhoon intensity will increase in the western North Pacific region. High-resolution dynamical models are usually preferred for projecting extreme events, because coarse-resolution models cannot simulate intense extremes. Under that consideration, dynamically downscaled climate data were chosen to describe typhoons satisfactorily; this research used simulation data from the atmospheric general circulation model of the Meteorological Research Institute (MRI-AGCM). Because dynamical downscaling consumes massive computing power and the number of typhoons in a single model simulation is very limited, using dynamically downscaled data can introduce uncertainty into disaster risk assessment. To mitigate this problem, this research used four sea surface temperatures (SSTs) to expand the climate change scenarios under RCP 8.5. In this way, the MRI-AGCMs project 191 extreme typhoons affecting Taiwan (defined as a typhoon center entering the 300 km sea area around Taiwan) in the late 21st century. SOBEK, a two-dimensional flood simulation model, was used to assess flood risk under the four SST climate change scenarios in Tainan, Taiwan. The results show that the uncertainty of future flood risk assessment for Tainan, Taiwan in the late 21st century is significantly decreased, and that four SSTs can efficiently alleviate the problem of limited typhoon numbers in a single model simulation.
Single Event Burnout in DC-DC Converters for the LHC Experiments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Claudio H. Rivetta et al.
High voltage transistors in DC-DC converters are prone to catastrophic Single Event Burnout in the LHC radiation environment. This paper presents a systematic methodology to analyze single event effects sensitivity in converters and proposes solutions based on de-rating input voltage and output current or voltage.
Wang, Shuguang; Sobel, Adam H.; Fridlind, Ann; ...
2015-09-25
The recently completed CINDY/DYNAMO field campaign observed two Madden-Julian oscillation (MJO) events in the equatorial Indian Ocean from October to December 2011. Prior work has indicated that the moist static energy anomalies in these events grew and were sustained to a significant extent by radiative feedbacks. We present here a study of radiative fluxes and clouds in a set of cloud-resolving simulations of these MJO events. The simulations are driven by the large-scale forcing dataset derived from the DYNAMO northern sounding array observations, and carried out in a doubly-periodic domain using the Weather Research and Forecasting (WRF) model. Simulated cloud properties and radiative fluxes are compared to those derived from the S-PolKa radar and satellite observations. Furthermore, to accommodate the uncertainty in simulated cloud microphysics, a number of single-moment (1M) and double-moment (2M) microphysical schemes in the WRF model are tested.
The role of pore geometry in single nanoparticle detection
Davenport, Matthew; Healy, Ken; Pevarnik, Matthew; ...
2012-08-22
In this study, we observe single nanoparticle translocation events via resistive pulse sensing using silicon nitride pores described by a range of lengths and diameters. Pores are prepared by focused ion beam milling in 50 nm-, 100 nm-, and 500 nm-thick silicon nitride membranes with diameters fabricated to accommodate spherical silica nanoparticles with sizes chosen to mimic that of virus particles. In this manner, we are able to characterize the role of pore geometry in three key components of the detection scheme, namely, event magnitude, event duration, and event frequency. We find that the electric field created by the applied voltage and the pore's geometry is a critical factor. We develop approximations to describe this field, which are verified with computer simulations, and interactions between particles and this field. In so doing, we formulate what we believe to be the first approximation for the magnitude of ionic current blockage that explicitly addresses the invariance of access resistance of solid-state pores during particle translocation. These approximations also provide a suitable foundation for estimating the zeta potential of the particles and/or pore surface when studied in conjunction with event durations. We also verify that translocation achieved by electro-osmotic transport is an effective means of slowing translocation velocities of highly charged particles without compromising particle capture rate as compared to more traditional approaches based on electrophoretic transport.
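The interplay between channel resistance and access resistance that the authors emphasize can be illustrated with the textbook expressions for a cylindrical pore. These are standard results (Hall's access-resistance formula), not the specific approximations derived in the paper, and the electrolyte resistivity below is an assumed round number:

```python
import math

def pore_resistance(rho, L, d):
    """Open-pore resistance of a cylindrical pore of length L and diameter d
    in an electrolyte of resistivity rho: the channel term plus Hall's access
    resistance, rho/(2d) per pore mouth (two mouths combined give rho/d).
    Textbook expressions, not the paper's own approximations."""
    R_channel = 4.0 * rho * L / (math.pi * d**2)
    R_access = rho / d  # both pore mouths combined
    return R_channel + R_access

# Assumed example: ~1 M KCl (rho ~ 0.1 ohm*m), 100 nm-long, 60 nm-diameter pore.
R = pore_resistance(0.1, 100e-9, 60e-9)
```

For the thinnest membranes the channel term shrinks while the access term does not, which is why the behaviour of access resistance during particle translocation cannot be neglected.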
NASA Astrophysics Data System (ADS)
Nebashi, Ryusuke; Sakimura, Noboru; Sugibayashi, Tadahiko
2017-08-01
We evaluated the soft-error tolerance and energy consumption of an embedded computer with magnetic random access memory (MRAM) using two computer simulators. One is a central processing unit (CPU) simulator of a typical embedded computer system. We simulated the radiation-induced single-event-upset (SEU) probability in a spin-transfer-torque MRAM cell and also the failure rate of a typical embedded computer due to its main memory SEU error. The other is a delay tolerant network (DTN) system simulator. It simulates the power dissipation of wireless sensor network nodes of the system using a revised CPU simulator and a network simulator. We demonstrated that the SEU effect on the embedded computer with 1 Gbit MRAM-based working memory is less than 1 failure in time (FIT). We also demonstrated that the energy consumption of the DTN sensor node with MRAM-based working memory can be reduced to 1/11. These results indicate that MRAM-based working memory enhances the disaster tolerance of embedded computers.
NASA Astrophysics Data System (ADS)
Masbruch, M.; Rumsey, C.; Gangopadhyay, S.; Susong, D.; Pruitt, T.
2015-12-01
There has been a considerable amount of research linking climatic variability to hydrologic responses in arid and semi-arid regions such as the western United States. Although much effort has been spent to assess and predict changes in surface-water resources, little has been done to understand how climatic events and changes affect groundwater resources. This study focuses on quantifying the effects of large quasi-decadal groundwater recharge events on groundwater in the northern Utah portion of the Great Basin for the period 1960 to 2013. Groundwater-level monitoring data were analyzed with climatic data to characterize climatic conditions and frequency of these large recharge events. Using observed water-level changes and multivariate analysis, five large groundwater recharge events were identified within the study area and period, with a frequency of about 11 to 13 years. These events were generally characterized as having above-average annual precipitation and snow water equivalent and below-average seasonal temperatures, especially during the spring (April through June). Existing groundwater flow models for several basins within the study area were used to quantify changes in groundwater storage from these events. Simulated groundwater storage increases per basin from a single event ranged from about 115 Mm3 (93,000 acre-feet) to 205 Mm3 (166,000 acre-feet). Extrapolating these amounts over the entire northern Great Basin indicates that even a single large quasi-decadal recharge event could result in billions of cubic meters (millions of acre-feet) of groundwater recharge. Understanding the role of these large quasi-decadal recharge events in replenishing aquifers and sustaining water supplies is crucial for making informed water management decisions.
Scatter characterization and correction for simultaneous multiple small-animal PET imaging.
Prasad, Rameshwar; Zaidi, Habib
2014-04-01
The rapid growth and usage of small-animal positron emission tomography (PET) in molecular imaging research has led to increased demand on PET scanner time. One potential solution to increase throughput is to scan multiple rodents simultaneously. However, this is achieved at the expense of deterioration of image quality and loss of quantitative accuracy owing to enhanced effects of photon attenuation and Compton scattering. The purpose of this work is, first, to characterize the magnitude and spatial distribution of the scatter component in small-animal PET imaging when scanning single and multiple rodents simultaneously and, second, to assess the relevance and evaluate the performance of scatter correction under similar conditions. The LabPET™-8 scanner was modelled as realistically as possible using the Geant4 Application for Tomographic Emission Monte Carlo simulation platform. Monte Carlo simulations allow the separation of unscattered and scattered coincidences and as such enable detailed assessment of the scatter component and its origin. Simple shape-based and more realistic voxel-based phantoms were used to simulate single and multiple PET imaging studies. The modelled scatter component using the single-scatter simulation technique was compared to Monte Carlo simulation results. PET images were also corrected for attenuation, and the combined effect of attenuation and scatter on single and multiple small-animal PET imaging was evaluated in terms of image quality and quantitative accuracy. A good agreement was observed between calculated and Monte Carlo simulated scatter profiles for single- and multiple-subject imaging. In the LabPET™-8 scanner, the detector covering material (kovar) contributed the maximum amount of scatter events while the scatter contribution due to lead shielding is negligible. The out-of-field-of-view (FOV) scatter fraction (SF) is 1.70, 0.76, and 0.11% for lower energy thresholds of 250, 350, and 400 keV, respectively.
The increase in SF ranged between 25 and 64% when imaging multiple subjects (three to five) of different size simultaneously in comparison to imaging a single subject. The spill-over ratio (SOR) increases with the number of subjects in the FOV. Scatter correction improved the SOR for both water and air cold compartments of single and multiple imaging studies. The recovery coefficients for different body parts of the mouse whole-body and rat whole-body anatomical models were improved for multiple imaging studies following scatter correction. The magnitude and spatial distribution of the scatter component in small-animal PET imaging of single and multiple subjects simultaneously were characterized, and its impact was evaluated in different situations. Scatter correction improves PET image quality and quantitative accuracy for single rat and simultaneous multiple mice and rat imaging studies, whereas its impact is insignificant in single mouse imaging.
Single-Event Effects in Silicon and Silicon Carbide Power Devices
NASA Technical Reports Server (NTRS)
Lauenstein, Jean-Marie; Casey, Megan C.; LaBel, Kenneth A.; Topper, Alyson D.; Wilcox, Edward P.; Kim, Hak; Phan, Anthony M.
2014-01-01
NASA Electronics Parts and Packaging program-funded activities over the past year on single-event effects in silicon and silicon carbide power devices are presented, with focus on SiC device failure signatures.
Discrete-Event Simulation Models of Plasmodium falciparum Malaria
McKenzie, F. Ellis; Wong, Roger C.; Bossert, William H.
2008-01-01
We develop discrete-event simulation models using a single “timeline” variable to represent the Plasmodium falciparum lifecycle in individual hosts and vectors within interacting host and vector populations. Where they are comparable, our conclusions regarding the relative importance of vector mortality and the durations of host immunity and parasite development are congruent with those of classic differential-equation models of malaria epidemiology. However, our results also imply that in regions with intense perennial transmission, the influence of mosquito mortality on malaria prevalence in humans may be rivaled by that of the duration of host infectivity. PMID:18668185
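A discrete-event formulation of this kind can be sketched with a simple priority queue of scheduled events. The snippet below is purely illustrative: the rates, the recovery rule, and the absence of immunity and explicit vector dynamics are all invented simplifications, not the authors' timeline model:

```python
import heapq
import random

def run(n_hosts=200, bite_rate=0.3, recovery_days=100, horizon=365, seed=1):
    """Minimal discrete-event sketch: infection and recovery events on a
    shared event queue, advanced in time order. Returns end-of-run
    prevalence. All parameter values are invented, not the paper's."""
    rng = random.Random(seed)
    infected = set()
    # Seed the queue with one future infectious bite on a random host.
    events = [(rng.expovariate(bite_rate), "infect", rng.randrange(n_hosts))]
    while events:
        t, kind, host = heapq.heappop(events)
        if t > horizon:
            break
        if kind == "infect":
            if host not in infected:
                infected.add(host)
                heapq.heappush(events, (t + recovery_days, "recover", host))
            # Schedule the next infectious bite somewhere in the population.
            heapq.heappush(events, (t + rng.expovariate(bite_rate),
                                    "infect", rng.randrange(n_hosts)))
        else:
            infected.discard(host)
    return len(infected) / n_hosts

prevalence = run()
```

The appeal of the event-queue formulation, as in the paper, is that each entity only needs to know the time of its next state change rather than being re-examined at every fixed time step.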
NASA TileWorld manual (system version 2.2)
NASA Technical Reports Server (NTRS)
Philips, Andrew B.; Bresina, John L.
1991-01-01
This manual documents the commands of the NASA TileWorld simulator and provides information about how to run and extend it. The simulator, implemented in Common Lisp with Common Windows, encodes a particular range in a spectrum of domains for controllable research experiments. TileWorld consists of a two-dimensional grid of cells, a set of polygonal tiles, and a single agent which can grasp and move tiles. In addition to agent-executable actions, there is an external event over which the agent has no control; this event corresponds to a 'gust of wind'.
Reliability of Memories Protected by Multibit Error Correction Codes Against MBUs
NASA Astrophysics Data System (ADS)
Ming, Zhu; Yi, Xiao Li; Chang, Liu; Wei, Zhang Jian
2011-02-01
As technology scales, more and more memory cells can be placed in a die. Therefore, the probability that a single event induces multiple bit upsets (MBUs) in adjacent memory cells increases. Generally, multibit error correction codes (MECCs) are effective approaches to mitigate MBUs in memories. In order to evaluate the robustness of protected memories, reliability models have been widely studied. Instead of irradiation experiments, such models can be used to quickly evaluate the reliability of memories early in the design process. To build an accurate model, several situations should be considered. Firstly, when MBUs are present in memories, the errors induced by several events may overlap each other, which occurs more frequently than in the single event upset (SEU) case. Furthermore, radiation experiments show that the probability of MBUs strongly depends on the angle of the radiation event. However, reliability models which consider both the overlap of multiple bit errors and the angle of the radiation event have not been proposed in the literature. In this paper, a more accurate model of memories with MECCs is presented. Both the overlap of multiple bit errors and the angle of the event are considered in the model, which produces a more precise analysis in the calculation of mean time to failure (MTTF) for memory systems under MBUs. In addition, memories with and without scrubbing are analyzed in the proposed model. Finally, we evaluate the reliability of memories under MBUs in Matlab. The simulation results verify the validity of the proposed model.
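The failure mechanism the model captures, errors from separate events accumulating in one word between scrubbing operations until they exceed the code's correction capability, can be illustrated with a small Monte Carlo estimate of MTTF. Every parameter below (word size, event rate, MBU span, scrub interval, correction capability) is an invented illustration; the paper's model additionally accounts for the angle of the radiation event, which this sketch omits:

```python
import random

def mttf_montecarlo(word_bits=64, correct_t=3, event_rate=1e-3,
                    mbu_span=3, scrub_interval=100.0, trials=200, seed=7):
    """Monte Carlo sketch of mean time to failure for one ECC-protected word
    under multiple-bit upsets. Each event flips a run of 1..mbu_span adjacent
    bits at an exponentially distributed interarrival time; failure occurs
    when more than correct_t bits are in error at once. Scrubbing clears all
    accumulated errors every scrub_interval. Illustrative values only."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        t, errors = 0.0, set()
        next_scrub = scrub_interval
        while True:
            t += rng.expovariate(event_rate)
            while t > next_scrub:          # scrubbing rewrites the word
                errors.clear()
                next_scrub += scrub_interval
            start = rng.randrange(word_bits)
            for k in range(rng.randint(1, mbu_span)):
                errors.add((start + k) % word_bits)
            if len(errors) > correct_t:    # uncorrectable: word failure
                total += t
                break
    return total / trials

mttf_scrubbed = mttf_montecarlo()
mttf_unscrubbed = mttf_montecarlo(scrub_interval=1e7)  # effectively no scrubbing
```

Even this toy version reproduces the qualitative point of the model: with overlap of errors from several events allowed, scrubbing dramatically extends MTTF because it prevents independent MBUs from accumulating in the same word.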
NASA Technical Reports Server (NTRS)
Reed, Robert A.; Kinnison, Jim; Pickel, Jim; Buchner, Stephen; Marshall, Paul W.; Kniffin, Scott; LaBel, Kenneth A.
2003-01-01
Over the past 27 years or so, increased concern over single event effects (SEE) in spacecraft systems has resulted in research, development, and engineering activities centered on a better understanding of the space radiation environment, single event effects predictive methods, ground test protocols, and test facility developments. This research has led to fairly well developed methods for assessing the impact of the space radiation environment on systems that contain SEE sensitive devices and the development of mitigation strategies either at the system or device level.
Test report for single event effects of the 80386DX microprocessor
NASA Technical Reports Server (NTRS)
Watson, R. Kevin; Schwartz, Harvey R.; Nichols, Donald K.
1993-01-01
The Jet Propulsion Laboratory Section 514 Single Event Effects (SEE) Testing and Analysis Group has performed a series of SEE tests of certain strategic registers of Intel's 80386DX CHMOS 4 microprocessor. Following a summary of the test techniques and hardware used to gather the data, we present the SEE heavy ion and proton test results. We also describe the registers tested, along with a system impact analysis should these registers experience a single event upset.
Real-time monitoring of Lévy flights in a single quantum system
NASA Astrophysics Data System (ADS)
Issler, M.; Höller, J.; Imamoǧlu, A.
2016-02-01
Lévy flights are random walks where the dynamics is dominated by rare events. Even though they have been studied in vastly different physical systems, their observation in a single quantum system has remained elusive. Here we analyze a periodically driven open central spin system and demonstrate theoretically that the dynamics of the spin environment exhibits Lévy flights. For the particular realization in a single-electron charged quantum dot driven by periodic resonant laser pulses, we use Monte Carlo simulations to confirm that the long waiting times between successive nuclear spin-flip events are governed by a power-law distribution; the corresponding exponent η = -3/2 can be directly measured in real time by observing the waiting time distribution of successive photon emission events. Remarkably, the dominant intrinsic limitation of the scheme arising from nuclear quadrupole coupling can be minimized by adjusting the magnetic field or by implementing spin echo.
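A power-law waiting-time density with exponent η = -3/2 can be sampled directly by inverse-transform sampling, which makes the heavy tail easy to see numerically. This is a generic toy illustration of such a distribution, not a simulation of the paper's spin model; the cutoff w0 is an arbitrary assumed scale:

```python
import random

def levy_waiting_times(n, w0=1.0, seed=3):
    """Sample n waiting times from the heavy-tailed density p(w) ~ w**(-3/2)
    for w >= w0, via inverse-transform sampling: with u uniform on (0, 1],
    w = w0 / u**2 has survival function P(W > w) = (w0/w)**0.5."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        u = 1.0 - rng.random()   # uniform on (0, 1], avoids division by zero
        out.append(w0 / u**2)
    return out

samples = sorted(levy_waiting_times(100_000))
median = samples[len(samples) // 2]   # analytic median is 4 * w0
```

The mean of this distribution diverges, so sample averages never settle down; rare, very long waits dominate, which is the defining feature of Lévy-flight statistics.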
Discussions On Worst-Case Test Condition For Single Event Burnout
NASA Astrophysics Data System (ADS)
Liu, Sandra; Zafrani, Max; Sherman, Phillip
2011-10-01
This paper discusses the failure characteristics of single-event burnout (SEB) on power MOSFETs based on analyzing the quasi-stationary avalanche simulation curves. The analyses show the worst-case test condition for SEB would be using the ion that has the highest mass that would result in the highest transient current due to charge deposition and displacement damage. The analyses also show it is possible to build power MOSFETs that will not exhibit SEB even when tested with the heaviest ion, which has been verified by heavy ion test data on SEB sensitive and SEB immune devices.
FPGAs in Space Environment and Design Techniques
NASA Technical Reports Server (NTRS)
Katz, Richard B.; Day, John H. (Technical Monitor)
2001-01-01
This viewgraph presentation gives an overview of Field Programmable Gate Arrays (FPGA) in the space environment and design techniques. Details are given on the effects of the space radiation environment, total radiation dose, single event upset, single event latchup, single event transient, antifuse technology and gate rupture, proton upsets and sensitivity, and loss of functionality.
An Updated Perspective of Single Event Gate Rupture and Single Event Burnout in Power MOSFETs
NASA Astrophysics Data System (ADS)
Titus, Jeffrey L.
2013-06-01
Studies over the past 25 years have shown that heavy ions can trigger catastrophic failure modes in power MOSFETs [e.g., single-event gate rupture (SEGR) and single-event burnout (SEB)]. In 1996, two papers were published in a special issue of the IEEE Transactions on Nuclear Science [Johnson, Palau, Dachs, Galloway and Schrimpf, “A Review of the Techniques Used for Modeling Single-Event Effects in Power MOSFETs,” IEEE Trans. Nucl. Sci., vol. 43, no. 2, pp. 546-560, Apr. 1996], [Titus and Wheatley, “Experimental Studies of Single-Event Gate Rupture and Burnout in Vertical Power MOSFETs,” IEEE Trans. Nucl. Sci., vol. 43, no. 2, pp. 533-545, Apr. 1996]. Those two papers continue to provide excellent information and references with regard to SEB and SEGR in vertical planar MOSFETs. This paper provides updated references and information, offers an updated perspective on SEB and SEGR in vertical planar MOSFETs, and points to other device types that exhibit SEB and SEGR effects.
An extension of the OpenModelica compiler for using Modelica models in a discrete event simulation
Nutaro, James
2014-11-03
In this article, a new back-end and run-time system is described for the OpenModelica compiler. This new back-end transforms a Modelica model into a module for the adevs discrete event simulation package, thereby extending adevs to encompass complex, hybrid dynamical systems. The new run-time system that has been built within the adevs simulation package supports models with state events and time events that comprise high-index differential-algebraic systems. Finally, although the procedure for effecting this transformation is based on adevs and the Discrete Event System Specification, it can be adapted to any discrete event simulation package.
Current Single Event Effects Results for Candidate Spacecraft Electronics for NASA
NASA Technical Reports Server (NTRS)
OBryan, Martha V.; Seidleck, Christina M.; Carts, Martin A.; LaBel, Kenneth A.; Marshall, Cheryl J.; Reed, Robert A.; Sanders, Anthony B.; Hawkins, Donald K.; Cox, Stephen R.; Kniffin, Scott D.
2004-01-01
We present data on the vulnerability of a variety of candidate spacecraft electronics to proton and heavy ion induced single event effects. Devices tested include digital, analog, linear bipolar, and hybrid devices, among others.
Single Event Effects Results for Candidate Spacecraft Electronics for NASA
NASA Technical Reports Server (NTRS)
O'Bryan, Martha; LaBel, Kenneth A.; Kniffin, Scott D.; Howard, James W., Jr.; Poivey, Christian; Ladbury, Ray L.; Buchner, Stephen P.; Xapsos, Michael; Reed, Robert A.; Sanders, Anthony B.
2003-01-01
We present data on the vulnerability of a variety of candidate spacecraft electronics to proton and heavy ion induced single event effects. Devices tested include digital, analog, linear bipolar, and hybrid devices, among others.
Vadnais, Mary A.; Dodge, Laura E.; Awtrey, Christopher S.; Ricciotti, Hope A.; Golen, Toni H.; Hacker, Michele R.
2013-01-01
Objective The objectives were to determine (i) whether simulation training results in short-term and long-term improvement in the management of uncommon but critical obstetrical events and (ii) whether there is additional benefit from annual exposure to the workshop. Methods Physicians completed a pretest to measure knowledge and confidence in the management of eclampsia, shoulder dystocia, postpartum hemorrhage and vacuum-assisted vaginal delivery. They then attended a simulation workshop and immediately completed a posttest. Residents completed the same posttests 4 and 12 months later, and attending physicians completed the posttest at 12 months. Physicians participated in the same simulation workshop 1 year later and then completed a final posttest. Scores were compared using paired t-tests. Results Physicians demonstrated improved knowledge and comfort immediately after simulation. Residents maintained this improvement at 1 year. Attending physicians remained more comfortable managing these scenarios up to 1 year later; however, knowledge retention diminished with time. Repeating the simulation after 1 year brought additional improvement to physicians. Conclusion Simulation training can result in short-term improvement, and contribute to long-term improvement, in objective measures of knowledge and comfort level in managing uncommon but critical obstetrical events. Repeat exposure to simulation training after 1 year can yield additional benefits. PMID:22191668
Technology, design, simulation, and evaluation for SEP-hardened circuits
NASA Technical Reports Server (NTRS)
Adams, J. R.; Allred, D.; Barry, M.; Rudeck, P.; Woodruff, R.; Hoekstra, J.; Gardner, H.
1991-01-01
This paper describes the technology, design, simulation, and evaluation for improvement of the Single Event Phenomena (SEP) hardness of gate-array and SRAM cells. Through the use of design and processing techniques, it is possible to achieve an SEP error rate less than 1.0 x 10^-10 errors/bit-day for a 90 percent worst-case geosynchronous orbit environment.
Characterization and Simulation of Gunfire with Wavelets
Smallwood, David O.
1999-01-01
Gunfire is used as an example to show how the wavelet transform can be used to characterize and simulate nonstationary random events when an ensemble of events is available. The structural response to nearby firing of a high-firing rate gun has been characterized in several ways as a nonstationary random process. The current paper will explore a method to describe the nonstationary random process using a wavelet transform. The gunfire record is broken up into a sequence of transient waveforms each representing the response to the firing of a single round. A wavelet transform is performed on each of these records. The gunfire is simulated by generating realizations of records of a single-round firing by computing an inverse wavelet transform from Gaussian random coefficients with the same mean and standard deviation as those estimated from the previously analyzed gunfire record. The individual records are assembled into a realization of many rounds firing. A second-order correction of the probability density function is accomplished with a zero memory nonlinear function. The method is straightforward, easy to implement, and produces a simulated record much like the measured gunfire record.
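The resynthesis procedure described in this abstract can be sketched with a discrete wavelet transform in NumPy. This minimal version uses a Haar basis; the function names and the choice of wavelet family are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def haar_dwt(x):
    # full multi-level Haar decomposition; len(x) must be a power of two
    coeffs, a = [], np.asarray(x, float)
    while len(a) > 1:
        coeffs.append((a[0::2] - a[1::2]) / np.sqrt(2))  # detail band
        a = (a[0::2] + a[1::2]) / np.sqrt(2)             # approximation
    coeffs.append(a)
    return coeffs

def haar_idwt(coeffs):
    # invert haar_dwt level by level
    a = coeffs[-1]
    for d in reversed(coeffs[:-1]):
        up = np.empty(2 * len(a))
        up[0::2] = (a + d) / np.sqrt(2)
        up[1::2] = (a - d) / np.sqrt(2)
        a = up
    return a

def simulate_round(ensemble, seed=0):
    # draw Gaussian wavelet coefficients with the per-coefficient mean and
    # standard deviation estimated from the ensemble of measured rounds,
    # then invert the transform to synthesize a new single-round record
    rng = np.random.default_rng(seed)
    decomp = [haar_dwt(rec) for rec in ensemble]
    new_coeffs = []
    for level in range(len(decomp[0])):
        band = np.stack([c[level] for c in decomp])
        new_coeffs.append(rng.normal(band.mean(axis=0), band.std(axis=0)))
    return haar_idwt(new_coeffs)
```

Repeated calls with different seeds produce the independent single-round realizations that are then concatenated into a multi-round record.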
Mioni, Giovanna; Bertucci, Erica; Rosato, Antonella; Terrett, Gill; Rendell, Peter G; Zamuner, Massimo; Stablum, Franca
2017-06-01
Previous studies have shown that traumatic brain injury (TBI) patients have difficulties with prospective memory (PM). Considering that PM is closely linked to independent living it is of primary interest to develop strategies that can improve PM performance in TBI patients. This study employed Virtual Week task as a measure of PM, and we included future event simulation to boost PM performance. Study 1 evaluated the efficacy of the strategy and investigated possible practice effects. Twenty-four healthy participants performed Virtual Week in a no strategy condition, and 24 healthy participants performed it in a mixed condition (no strategy - future event simulation). In Study 2, 18 TBI patients completed the mixed condition of Virtual Week and were compared with the 24 healthy controls who undertook the mixed condition of Virtual Week in Study 1. All participants also completed a neuropsychological evaluation to characterize the groups on level of cognitive functioning. Study 1 showed that participants in the future event simulation condition outperformed participants in the no strategy condition, and these results were not attributable to practice effects. Results of Study 2 showed that TBI patients performed PM tasks less accurately than controls, but that future event simulation can substantially reduce TBI-related deficits in PM performance. The future event simulation strategy also improved the controls' PM performance. These studies showed the value of future event simulation strategy in improving PM performance in healthy participants as well as in TBI patients. TBI patients performed PM tasks less accurately than controls, confirming prospective memory impairment in these patients. Participants in the future event simulation condition out-performed participants in the no strategy condition. Future event simulation can substantially reduce TBI-related deficits in PM performance. Future event simulation strategy also improved the controls' PM performance. 
© 2017 The British Psychological Society.
Juvenile sparrows preferentially eavesdrop on adult song interactions
Templeton, Christopher N.; Akçay, Çağlar; Campbell, S. Elizabeth; Beecher, Michael D.
2010-01-01
Recent research has demonstrated that bird song learning is influenced by social factors, but so far has been unable to isolate the particular social variables central to the learning process. Here we test the hypothesis that eavesdropping on singing interactions of adults is a key social event in song learning by birds. In a field experiment, we compared the response of juvenile male song sparrows (Melospiza melodia) to simulated adult counter-singing versus simulated solo singing. We used radio telemetry to follow the movements of each focal bird and assess his response to each playback trial. Juveniles approached the playback speakers when exposed to simulated interactive singing of two song sparrows, but not when exposed to simulated solo singing of a single song sparrow, which in fact they treated similarly to heterospecific singing. Although the young birds approached simulated counter-singing, they neither approached closely nor vocalized themselves, suggesting that the primary function of approach was to permit eavesdropping on these singing interactions. These results indicate that during the prime song-learning phase, juvenile song sparrows are attracted to singing interactions between adults but not to singing by a single bird and suggest that singing interactions may be particularly powerful song-tutoring events. PMID:19846461
Limits on the Efficiency of Event-Based Algorithms for Monte Carlo Neutron Transport
DOE Office of Scientific and Technical Information (OSTI.GOV)
Romano, Paul K.; Siegel, Andrew R.
2017-07-01
The traditional form of parallelism in Monte Carlo particle transport simulations, wherein each individual particle history is considered a unit of work, does not lend itself well to data-level parallelism. Event-based algorithms, which were originally used for simulations on vector processors, may offer a path toward better utilizing data-level parallelism in modern computer architectures. In this study, a simple model is developed for estimating the efficiency of the event-based particle transport algorithm under two sets of assumptions. Data collected from simulations of four reactor problems using OpenMC was then used in conjunction with the models to calculate the speedup due to vectorization as a function of the size of the particle bank and the vector width. When each event type is assumed to have constant execution time, the achievable speedup is directly related to the particle bank size. We observed that the bank size generally needs to be at least 20 times greater than vector size to achieve vector efficiency greater than 90%. Lastly, when the execution times for events are allowed to vary, the vector speedup is also limited by differences in execution time for events being carried out in a single event-iteration.
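The constant-execution-time part of the efficiency argument can be illustrated with a toy Monte Carlo sketch: as particles in the bank terminate, SIMD lanes go idle, so a bank much larger than the vector width is needed to keep efficiency high. The geometric particle-lifetime assumption and the function name are illustrative, not the model from the paper:

```python
import numpy as np

def vector_efficiency(bank_size, vector_width, p_absorb=0.1, rng=None):
    # toy model: each particle survives each event with prob (1 - p_absorb);
    # the active particles are processed in SIMD chunks of `vector_width`
    # lanes, so partially filled chunks waste lanes
    rng = rng or np.random.default_rng(42)
    active = bank_size
    useful_lanes = total_lanes = 0
    while active > 0:
        ops = -(-active // vector_width)       # ceil division
        useful_lanes += active
        total_lanes += ops * vector_width
        active = rng.binomial(active, 1.0 - p_absorb)
    return useful_lanes / total_lanes
```

With a bank only as large as the vector width, the die-off tail dominates and efficiency drops; a bank 20x the vector width keeps most lanes busy, consistent with the 90% figure quoted in the abstract.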
Simulated Rainfall experiments on burned areas
NASA Astrophysics Data System (ADS)
Rulli, Maria Cristina
2010-05-01
Simulated rainfall experiments were carried out in a Mediterranean area located in Italy, immediately after a forest fire, to evaluate the effects of forest fire on soil hydraulic properties, runoff and erosion. The selected study area had been frequently affected by fire in recent years. Two adjacent 30 m² plots were set up with common physiographic features and the same fire history, except for the last fire, which burned only one of them. Since both plots had been subject to the passage of fire 6 years before the last one, we compare the hydrologic response and erosion of an area recently burned (B00) with those of an area burnt 6 years before (B06). Several rainfall simulations were carried out considering different pre-event soil moisture conditions, where each rainfall simulation consisted of a single 60 minute application of rainfall with a constant intensity of about 76 mm/h. The results show a runoff ratio, evaluated for different pre-event soil moisture conditions, ranging from 0 to 2% for the B06 plot and from 21 to 41% for B00. The runoff ratio for the recently burned plot was 60 times higher than for the plot burned six years before under wet conditions, and 20 times higher under very wet conditions. A large increase in sediment production also was measured in the B00 plot, as compared with that in the B06 plot. Suspended sediment yield from the B00 plot was more than two orders of magnitude higher than that from the B06 plot in all the simulated events. The high runoff and soil losses measured immediately after burning indicate that effective post-fire rehabilitation programs must be carried out to reduce flood risk and soil erosion in recently burned areas. However, the results for the plot burned six years prior show that recovery of the hydrological properties of the soil occurs after the transient post-fire modification.
NASA Astrophysics Data System (ADS)
Smith, R. C.; Collins, G. S.; Hill, J.; Piggott, M. D.; Mouradian, S. L.
2015-12-01
Numerical modelling informs risk assessment of tsunami generated by submarine slides; however, for large-scale slides modelling can be complex and computationally challenging. Many previous numerical studies have approximated slides as rigid blocks that moved according to prescribed motion. However, wave characteristics are strongly dependent on the motion of the slide and previous work has recommended that more accurate representation of slide dynamics is needed. We have used the finite-element, adaptive-mesh CFD model Fluidity, to perform multi-material simulations of deformable submarine slide-generated waves at real world scales for a 2D scenario in the Gulf of Mexico. Our high-resolution approach represents slide dynamics with good accuracy, compared to other numerical simulations of this scenario, but precludes tracking of wave propagation over large distances. To enable efficient modelling of further propagation of the waves, we investigate an approach to extract information about the slide evolution from our multi-material simulations in order to drive a single-layer wave propagation model, also using Fluidity, which is much less computationally expensive. The extracted submarine slide geometry and position as a function of time are parameterised using simple polynomial functions. The polynomial functions are used to inform a prescribed velocity boundary condition in a single-layer simulation, mimicking the effect the submarine slide motion has on the water column. The approach is verified by successful comparison of wave generation in the single-layer model with that recorded in the multi-material, multi-layer simulations. We then extend this approach to 3D for further validation of this methodology (using the Gulf of Mexico scenario proposed by Horrillo et al., 2013) and to consider the effect of lateral spreading. 
This methodology is then used to simulate a series of hypothetical submarine slide events in the Arctic Ocean (based on evidence of historic slides) and examine the hazard posed to the UK coast.
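The polynomial parameterisation step described above (fitting the extracted slide position as a function of time, then differentiating to obtain a prescribed boundary velocity) can be sketched in a few lines of NumPy. The quadratic stand-in trajectory, the fitting degree, and all variable names are illustrative assumptions, not values from the study:

```python
import numpy as np

# hypothetical slide front position (m) vs time (s), standing in for the
# output extracted from the multi-material Fluidity simulation
t = np.linspace(0.0, 120.0, 25)
x_front = 5.0 * t + 0.04 * t**2

# parameterise the position with a simple low-order polynomial
coeffs = np.polyfit(t, x_front, deg=3)
position = np.poly1d(coeffs)

# the derivative gives the velocity u(t) used as the prescribed boundary
# condition in the cheaper single-layer wave-propagation model
velocity = position.deriv()
u_60 = velocity(60.0)
```

In the single-layer run, `velocity(t)` would be evaluated at each time step to mimic the push the moving slide exerts on the water column.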
NASA Astrophysics Data System (ADS)
Schildgen, T. F.; Robinson, R. A. J.; Savi, S.; Bookhagen, B.; Tofelde, S.; Strecker, M. R.
2014-12-01
NASA Astrophysics Data System (ADS)
Schachel, Tilo D.; Metwally, Haidy; Popa, Vlad; Konermann, Lars
2016-11-01
Infusion of NaCl solutions into an electrospray ionization (ESI) source produces [Na_(n+1)Cl_n]+ and other gaseous clusters. The n = 4, 13, 22 magic number species have cuboid ground state structures and exhibit elevated abundance in ESI mass spectra. Relatively few details are known regarding the mechanisms whereby these clusters undergo collision-induced dissociation (CID). The current study examines to what extent molecular dynamics (MD) simulations can be used to garner insights into the sequence of events taking place during CID. Experiments on singly charged clusters reveal that the loss of small neutrals is the dominant fragmentation pathway. MD simulations indicate that the clusters undergo extensive structural fluctuations prior to decomposition. Consistent with the experimentally observed behavior, most of the simulated dissociation events culminate in ejection of small neutrals ([NaCl]_i, with i = 1, 2, 3). The MD data reveal that the prevalence of these dissociation channels is linked to the presence of short-lived intermediates where a relatively compact core structure carries a small [NaCl]_i protrusion. The latter can separate from the parent cluster via cleavage of a single Na-Cl contact. Fragmentation events of this type are kinetically favored over other dissociation channels that would require the quasi-simultaneous rupture of multiple electrostatic contacts. The CID behavior of NaCl cluster ions bears interesting analogies to that of collisionally activated protein complexes. Overall, it appears that MD simulations represent a valuable tool for deciphering the dissociation of noncovalently bound systems in the gas phase.
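The magic numbers quoted above follow from simple cuboid arithmetic: a singly charged [Na_(n+1)Cl_n]+ cluster contains 2n+1 ions, and the compact rock-salt cuboids 3x3x1, 3x3x3 and 3x3x5 contain 9, 27 and 45 ions, giving n = 4, 13, 22. A minimal check (the function name is illustrative):

```python
def cuboid_n(a, b, c):
    # n in [Na_(n+1)Cl_n]+ for an a x b x c rock-salt fragment;
    # a singly charged Na-excess cluster needs an odd total ion count
    ions = a * b * c
    assert ions % 2 == 1, "even ion counts cannot form [Na_(n+1)Cl_n]+"
    return (ions - 1) // 2

# the three magic number species reported in the abstract
sizes = [cuboid_n(3, 3, 1), cuboid_n(3, 3, 3), cuboid_n(3, 3, 5)]
```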
Distribution of breakage events in random packings of rodlike particles.
Grof, Zdeněk; Štěpánek, František
2013-07-01
Uniaxial compaction and breakage of rodlike particle packing has been studied using a discrete element method simulation. A scaling relationship between the applied stress, the number of breakage events, and the number-mean particle length has been derived and compared with computational experiments. Based on results for a wide range of intrinsic particle strengths and initial particle lengths, it seems that a single universal relation can be used to describe the incidence of breakage events during compaction of rodlike particle layers.
Simulating double-peak hydrographs from single storms over mixed-use watersheds
Yang Yang; Theodore A. Endreny; David J. Nowak
2015-01-01
Two-peak hydrographs after a single rain event are observed in watersheds and storms with distinct volumes contributing as fast and slow runoff. The authors developed a hydrograph model able to quantify these separate runoff volumes to help in estimation of runoff processes and residence times used by watershed managers. The model uses parallel application of two...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ponciroli, Roberto; Passerini, Stefano; Vilim, Richard B.
2017-06-21
Advanced reactors are often claimed to be passively safe against unprotected upset events. In common practice, these events are not considered in the context of the plant control system, i.e., the reactor is subjected to classes of unprotected upset events while the normally programmed response of the control system is assumed not to be present. However, this approach constitutes an oversimplification since, depending on the upset involving the control system, an actuator does not necessarily go in the same direction as needed for safety. In this work, dynamic simulations are performed to assess the degree to which the inherent self-regulating plant response is safe from active control system override. The simulations are meant to characterize the resilience of the plant to unprotected initiators. The initiators were represented and modeled as an actuator going to a hard limit. Consideration of failure is further limited to individual controllers as there is no cross-connect of signals between these controllers. The potential for passive safety override by the control system is then relegated to the single-input single-output controllers. Here, the results show that when the plant control system is designed by taking into account and quantifying the impact of the plant control system on accidental scenarios there is very limited opportunity for the preprogrammed response of the control system to override passive safety protection in the event of an unprotected initiator.
NASA Astrophysics Data System (ADS)
von Trentini, F.; Schmid, F. J.; Braun, M.; Frigon, A.; Leduc, M.; Martel, J. L.; Willkofer, F.; Wood, R. R.; Ludwig, R.
2017-12-01
Meteorological extreme events seem to become more frequent in the present and future, and a separation of natural climate variability from a clear climate change effect on these extreme events gains more and more interest. Since there is only one realisation of historical events, a robust statistical analysis of natural variability based on very long time series is not possible with observation data. A new single model large ensemble (SMLE), developed for the ClimEx project (Climate change and hydrological extreme events - risks and perspectives for water management in Bavaria and Québec), is supposed to overcome this lack of data by downscaling 50 members of the CanESM2 (RCP 8.5) with the Canadian CRCM5 regional model (using the EURO-CORDEX grid specifications) for time series of 1950-2099 each, resulting in 7500 years of simulated climate. This allows for a better probabilistic analysis of rare and extreme events than any preceding dataset. Besides seasonal sums, several indicators concerning heatwave frequency, duration and mean temperature as well as the number and maximum length of dry periods (consecutive days < 1 mm) are calculated for the ClimEx ensemble and several EURO-CORDEX runs. This enables us to investigate the interaction between natural variability (as it appears in the CanESM2-CRCM5 members) and the climate change signal of those members for past, present and future conditions. Adding the EURO-CORDEX results, we can also assess the role of internal model variability (or natural variability) in climate change simulations. A first comparison shows similar magnitudes of variability of climate change signals between the ClimEx large ensemble and the CORDEX runs for some indicators, while for most indicators the spread of the SMLE is smaller than the spread of the different CORDEX models.
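One of the indicators mentioned above, the maximum length of dry periods, is straightforward to compute from a daily precipitation series. A minimal sketch (function name and threshold default are assumptions consistent with the "<1 mm" criterion in the text):

```python
def max_dry_spell(precip_mm, threshold=1.0):
    # longest run of consecutive days with precipitation below `threshold`
    longest = current = 0
    for p in precip_mm:
        current = current + 1 if p < threshold else 0
        longest = max(longest, current)
    return longest
```

Applied member by member to a 50-member, 150-year ensemble, this yields the distribution of dry-spell extremes from which natural variability and the forced signal can be separated.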
NASA Astrophysics Data System (ADS)
Intriligator, D. S.; Sun, W.; Detman, T. R.; Dryer, M.; Intriligator, J.; Deehr, C. S.; Webber, W. R.; Gloeckler, G.; Miller, W. D.
2015-12-01
Large solar events can have severe adverse global impacts at Earth. These solar events also can propagate throughout the heliosphere and into the interstellar medium. We focus on the July 2012 and Halloween 2003 solar events. We simulate these events starting from the vicinity of the Sun at 2.5 Rs. We compare our three dimensional (3D) time-dependent simulations to available spacecraft (s/c) observations at 1 AU and beyond. Based on the comparisons of the predictions from our simulations with in-situ measurements we find that the effects of these large solar events can be observed in the outer heliosphere, the heliosheath, and even into the interstellar medium. We use two simulation models. The HAFSS (HAF Source Surface) model is a kinematic model. HHMS-PI (Hybrid Heliospheric Modeling System with Pickup protons) is a numerical magnetohydrodynamic solar wind (SW) simulation model. Both HHMS-PI and HAFSS are ideally suited for these analyses since starting at 2.5 Rs from the Sun they model the slowly evolving background SW and the impulsive, time-dependent events associated with solar activity. Our models naturally reproduce dynamic 3D spatially asymmetric effects observed throughout the heliosphere. Pre-existing SW background conditions have a strong influence on the propagation of shock waves from solar events. Time-dependence is a crucial aspect of interpreting s/c data. We show comparisons of our simulation results with STEREO A, ACE, Ulysses, and Voyager s/c observations.
NASA Astrophysics Data System (ADS)
Malanson, G. P.; DeRose, R. J.; Bekker, M. F.
2016-12-01
The consequences of increasing climatic variance while including variability among individuals and populations are explored for range margins of species with a spatially explicit simulation. The model has a single environmental gradient and a single species, and is then extended to two species. Species response to the environment is a Gaussian function with a peak of 1.0 at their peak fitness on the gradient. The variance in the environment is taken from the total variance in the tree ring series of 399 individuals of Pinus edulis in FIA plots in the western USA. The variability is increased by a multiplier of the standard deviation for various doubling times. The variance of individuals in the simulation is drawn from these same series. Inheritance of individual variability is based on the geographic locations of the individuals. The variance for P. edulis is recomputed as time-dependent conditional standard deviations using the GARCH procedure. Establishment and mortality are simulated in a Monte Carlo process with individual variance. Variance for P. edulis does not show a consistent pattern of heteroscedasticity. An obvious result is that increasing variance has deleterious effects on species persistence because extreme events that result in extinctions cannot be balanced by positive anomalies, but even less extreme negative events cannot be balanced by positive anomalies because of biological and spatial constraints. In the two species model the superior competitor is more affected by increasing climatic variance because its response function is steeper at the point of intersection with the other species and so the uncompensated effects of negative anomalies are greater for it. These theoretical results can guide the anticipated need to mitigate the effects of increasing climatic variability on P. edulis range margins. The trailing edge, here subject to increasing drought stress with increasing temperatures, will be more affected by negative anomalies.
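The core ingredients of the simulation described above, a Gaussian response function with peak 1.0 and Monte Carlo establishment/mortality draws shifted by climatic anomalies, can be sketched as follows. The function names and parameterisation are illustrative assumptions, not the paper's code:

```python
import numpy as np

def fitness(x, peak, width):
    # Gaussian response along the environmental gradient; maximum 1.0 at `peak`
    return np.exp(-0.5 * ((x - peak) / width) ** 2)

def survives(x, peak, width, anomaly, rng):
    # one Monte Carlo establishment/mortality draw: the climatic anomaly
    # shifts the environment experienced by an individual at position x
    return rng.random() < fitness(x + anomaly, peak, width)
```

Drawing `anomaly` from a normal distribution whose standard deviation grows over time reproduces the key asymmetry in the abstract: large negative anomalies kill individuals outright, while equally large positive anomalies cannot raise survival above 1.
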
Warp-averaging event-related potentials.
Wang, K; Begleiter, H; Porjesz, B
2001-10-01
To align the repeated single trials of the event-related potential (ERP) in order to obtain an improved estimate of the ERP. A new implementation of dynamic time warping is applied to compute a warp-average of the single trials. The trilinear modeling method is applied to filter the single trials prior to alignment. Alignment is based on normalized signals and their estimated derivatives. These features reduce the misalignment caused by aligning random alpha waves, by explaining amplitude differences as latency differences, or by the seemingly small amplitudes of some components. Simulations and applications to visually evoked potentials show significant improvement over some commonly used methods. The new implementation of dynamic time warping can be used to align the major components (P1, N1, P2, N2, P3) of the repeated single trials. The average of the aligned single trials is an improved estimate of the ERP. This could lead to more accurate results in subsequent analysis.
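The dynamic time warping at the heart of warp-averaging is a standard dynamic program. This is a minimal sketch of the DTW distance on absolute sample differences; the paper's implementation additionally normalizes the signals and uses estimated derivatives, which are not shown here:

```python
import numpy as np

def dtw_distance(a, b):
    # minimal-cost monotone alignment of two signals: D[i, j] is the best
    # cost of aligning a[:i] with b[:j], built up by dynamic programming
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```

Because the alignment may stretch or compress time, a trial that is a purely time-warped copy of another has distance zero; averaging the warped trials rather than the raw ones avoids smearing latency-jittered components.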
Markov Jump-Linear Performance Models for Recoverable Flight Control Computers
NASA Technical Reports Server (NTRS)
Zhang, Hong; Gray, W. Steven; Gonzalez, Oscar R.
2004-01-01
Single event upsets in digital flight control hardware induced by atmospheric neutrons can reduce system performance and possibly introduce a safety hazard. One method currently under investigation to help mitigate the effects of these upsets is NASA Langley s Recoverable Computer System. In this paper, a Markov jump-linear model is developed for a recoverable flight control system, which will be validated using data from future experiments with simulated and real neutron environments. The method of tracking error analysis and the plan for the experiments are also described.
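A Markov jump-linear system of the kind described can be simulated in a few lines: the closed-loop dynamics jump between mode-dependent linear maps as a Markov chain switches modes (e.g. nominal operation vs. recovery after an upset). The two-mode example and all names below are illustrative assumptions, not the paper's model of the Recoverable Computer System:

```python
import numpy as np

def simulate_mjls(A, P, x0, steps, rng):
    # Markov jump-linear system: x[k+1] = A[theta_k] @ x[k], where theta_k
    # is a Markov chain with row-stochastic transition matrix P
    x, theta = np.array(x0, float), 0
    for _ in range(steps):
        x = A[theta] @ x
        theta = rng.choice(len(P), p=P[theta])
    return x

# hypothetical modes: 0 = nominal (fast decay), 1 = recovering (slow decay)
A = [0.5 * np.eye(2), 0.8 * np.eye(2)]
P = np.array([[1.0, 0.0],
              [0.0, 1.0]])          # degenerate chain: stay in initial mode
x_final = simulate_mjls(A, P, [1.0, 1.0], 3, np.random.default_rng(0))
```

Tracking-error analysis then amounts to propagating second moments of `x` through the mode distribution instead of single sample paths.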
Characteristics of Single-Event Upsets in a Fabric Switch (AD8151)
NASA Technical Reports Server (NTRS)
Buchner, Stephen; Carts, Martin A.; McMorrow, Dale; Kim, Hak; Marshall, Paul W.; LaBel, Kenneth A.
2003-01-01
Two types of single event effects - bit errors and single event functional interrupts - were observed during heavy-ion testing of the AD8151 crosspoint switch. Bit errors occurred in bursts, with the average number of bits in a burst being dependent on both the ion LET and the data rate. A pulsed laser was used to identify the locations on the chip where the bit errors and single event functional interrupts occurred. Bit errors originated in the switches, drivers, and output buffers. Single event functional interrupts occurred when the laser was focused on the second rank latch containing the data specifying the state of each switch in the 33x17 matrix.
Simulation study on single event burnout in linear doping buffer layer engineered power VDMOSFET
NASA Astrophysics Data System (ADS)
Yunpeng, Jia; Hongyuan, Su; Rui, Jin; Dongqing, Hu; Yu, Wu
2016-02-01
The addition of a buffer layer can improve the device's secondary breakdown voltage and thus its single event burnout (SEB) threshold voltage. In this paper, an N-type linear doping buffer layer is proposed. Quasi-stationary avalanche simulation and heavy ion beam simulation show that an optimized linear doping buffer layer is critical. When SEB is induced by impacting heavy ions, the electric field in a device with an optimized linear doping buffer layer is much lower than in one with an optimized constant doping buffer layer at a given buffer layer thickness and the same biasing voltages, while the secondary breakdown voltage and the parasitic bipolar turn-on current are much higher. The linear doping buffer layer is therefore more advantageous for improving the device's SEB performance. Project supported by the National Natural Science Foundation of China (No. 61176071), the Doctoral Fund of Ministry of Education of China (No. 20111103120016), and the Science and Technology Program of State Grid Corporation of China (No. SGRI-WD-71-13-006).
Single event effects in high-energy accelerators
NASA Astrophysics Data System (ADS)
García Alía, Rubén; Brugger, Markus; Danzeca, Salvatore; Cerutti, Francesco; de Carvalho Saraiva, Joao Pedro; Denz, Reiner; Ferrari, Alfredo; Foro, Lionel L.; Peronnard, Paul; Røed, Ketil; Secondo, Raffaello; Steckert, Jens; Thurel, Yves; Toccafondo, Iacopo; Uznanski, Slawosz
2017-03-01
The radiation environment encountered at high-energy hadron accelerators strongly differs from the environment relevant for space applications. The mixed-field expected at modern accelerators is composed of charged and neutral hadrons (protons, pions, kaons and neutrons), photons, electrons, positrons and muons, ranging from very low (thermal) energies up to the TeV range. This complex field, which is extensively simulated by Monte Carlo codes (e.g. FLUKA) is due to beam losses in the experimental areas, distributed along the machine (e.g. collimation points) and deriving from the interaction with the residual gas inside the beam pipe. The resulting intensity, energy distribution and proportion of the different particles largely depends on the distance and angle with respect to the interaction point as well as the amount of installed shielding material. Electronics operating in the vicinity of the accelerator will therefore be subject to both cumulative damage from radiation (total ionizing dose, displacement damage) as well as single event effects which can seriously compromise the operation of the machine. This, combined with the extensive use of commercial-off-the-shelf components due to budget, performance and availability reasons, results in the need to carefully characterize the response of the devices and systems to representative radiation conditions.
NASA Astrophysics Data System (ADS)
Moncoulon, D.; Labat, D.; Ardon, J.; Leblois, E.; Onfroy, T.; Poulard, C.; Aji, S.; Rémy, A.; Quantin, A.
2014-09-01
The analysis of flood exposure at a national scale for the French insurance market must combine the generation of a probabilistic event set of all possible (but which have not yet occurred) flood situations with hazard and damage modeling. In this study, hazard and damage models are calibrated on a 1995-2010 historical event set, both for hazard results (river flow, flooded areas) and loss estimations. Thus, uncertainties in the deterministic estimation of a single event loss are known before simulating a probabilistic event set. To take into account at least 90 % of the insured flood losses, the probabilistic event set must combine the river overflow (small and large catchments) with the surface runoff, due to heavy rainfall, on the slopes of the watershed. Indeed, internal studies of the CCR (Caisse Centrale de Réassurance) claim database have shown that approximately 45 % of the insured flood losses are located inside the floodplains and 45 % outside. Another 10 % is due to sea surge floods and groundwater rise. In this approach, two independent probabilistic methods are combined to create a single flood loss distribution: a generation of fictive river flows based on the historical records of the river gauge network and a generation of fictive rain fields on small catchments, calibrated on the 1958-2010 Météo-France rain database SAFRAN. All the events in the probabilistic event sets are simulated with the deterministic model. This hazard and damage distribution is used to simulate the flood losses at the national scale for an insurance company (Macif) and to generate flood areas associated with hazard return periods. The flood maps concern river overflow and surface water runoff. Validation of these maps is conducted by comparison with the address-located claim data on a small catchment (downstream Argens).
The Effects of Time Advance Mechanism on Simple Agent Behaviors in Combat Simulations
2011-12-01
This report uses modeling packages to illustrate the differences between discrete-time simulation (DTS) and discrete-event simulation (DES) methodologies. DES models are often referred to as "next-event" models (Law and Kelton 2000), while DTS is commonly referred to as "time-step" simulation. Many combat models use DTS as their simulation time advance mechanism.
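The DTS/DES distinction above can be sketched in a few lines. A minimal illustration (not from the report; the function names and arrival times are ours): a time-step loop pays a cost for every interval, empty or not, while an event queue jumps straight to the next scheduled event.

```python
import heapq

def simulate_dts(arrivals, horizon, dt=1.0):
    """Discrete-time simulation: advance the clock in fixed steps and
    process any arrivals that fall inside each step."""
    t, processed = 0.0, []
    while t < horizon:
        processed += [a for a in arrivals if t <= a < t + dt]
        t += dt
    return processed

def simulate_des(arrivals, horizon):
    """Discrete-event simulation: pop the next event from a priority
    queue, so no time is spent stepping through empty intervals."""
    queue = list(arrivals)
    heapq.heapify(queue)
    processed = []
    while queue and queue[0] < horizon:
        processed.append(heapq.heappop(queue))
    return processed

arrivals = [0.3, 2.7, 2.9, 7.1]
print(simulate_dts(arrivals, 10.0))
print(simulate_des(arrivals, 10.0))
```

Both produce the same event trace here; the behavioral differences studied in the report arise when agent decisions depend on how far the clock has moved between updates.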
Comparison of ground motions from hybrid simulations to nga prediction equations
Star, L.M.; Stewart, J.P.; Graves, R.W.
2011-01-01
We compare simulated motions for a Mw 7.8 rupture scenario on the San Andreas Fault known as the ShakeOut event, two permutations with different hypocenter locations, and a Mw 7.15 Puente Hills blind thrust scenario, to median and dispersion predictions from empirical NGA ground motion prediction equations. We find the simulated motions attenuate faster with distance than is predicted by the NGA models for periods less than about 5.0 s. After removing this distance attenuation bias, the average residuals of the simulated events (i.e., event terms) are generally within the scatter of empirical event terms, although the ShakeOut simulation appears to be a high static stress drop event. The intraevent dispersion in the simulations is lower than NGA values at short periods and abruptly increases at 1.0 s due to different simulation procedures at short and long periods. The simulated motions have a depth-dependent basin response similar to the NGA models, and also show complex effects in which stronger basin response occurs when the fault rupture transmits energy into a basin at low angle, which is not predicted by the NGA models. Rupture directivity effects are found to scale with the isochrone parameter. © 2011, Earthquake Engineering Research Institute.
Why continuous simulation? The role of antecedent moisture in design flood estimation
NASA Astrophysics Data System (ADS)
Pathiraja, S.; Westra, S.; Sharma, A.
2012-06-01
Continuous simulation for design flood estimation is increasingly becoming a viable alternative to traditional event-based methods. The advantage of continuous simulation approaches is that the catchment moisture state prior to the flood-producing rainfall event is implicitly incorporated within the modeling framework, provided the model has been calibrated and validated to produce reasonable simulations. This contrasts with event-based models, in which both information about the expected sequence of rainfall and evaporation preceding the flood-producing rainfall event, as well as catchment storage and infiltration properties, are commonly pooled together into a single set of "loss" parameters which require adjustment through the process of calibration. To identify the importance of accounting for antecedent moisture in flood modeling, this paper uses a continuous rainfall-runoff model calibrated to 45 catchments in the Murray-Darling Basin in Australia. Flood peaks derived using the historical daily rainfall record are compared with those derived using resampled daily rainfall, for which the sequencing of wet and dry days preceding the heavy rainfall event is removed. The analysis shows that there is a consistent underestimation of the design flood events when antecedent moisture is not properly simulated, which can be as much as 30% when only 1 or 2 days of antecedent rainfall are considered, compared to 5% when this is extended to 60 days of prior rainfall. These results show that, in general, it is necessary to consider both short-term memory in rainfall associated with synoptic scale dependence, as well as longer-term memory associated with seasonal and longer time scale variability, in order to obtain accurate design flood estimates.
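As a toy illustration of why antecedent moisture matters (a hypothetical bucket model of our own, not the paper's calibrated rainfall-runoff model), the same flood-producing rainfall yields a much larger simulated peak when it lands on a wetted catchment than when the preceding days are resampled dry:

```python
def flood_peak(rain, capacity=50.0, carryover=0.9):
    """Toy bucket model: soil moisture carries over between days
    (with a drainage/evaporation loss); rain in excess of the store
    capacity becomes quick runoff. All parameters are illustrative."""
    store, peak = 0.0, 0.0
    for r in rain:
        store = store * carryover + r
        peak = max(peak, store - capacity)   # spill = flood peak candidate
        store = min(store, capacity)
    return max(peak, 0.0)

wet_antecedent = [5, 8, 12, 60]   # wet spell, then the flood-producing event
dry_antecedent = [0, 0, 0, 60]    # same event, antecedent days resampled dry
print(flood_peak(wet_antecedent), flood_peak(dry_antecedent))
```

Ignoring the wet antecedent days here roughly triples the underestimation of the peak, mirroring in caricature the 5-30% biases reported in the paper.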
Heavy Ion Irradiation Fluence Dependence for Single-Event Upsets of NAND Flash Memory
NASA Technical Reports Server (NTRS)
Chen, Dakai; Wilcox, Edward; Ladbury, Raymond; Kim, Hak; Phan, Anthony; Seidleck, Christina; LaBel, Kenneth
2016-01-01
We investigated the single-event effect (SEE) susceptibility of the Micron 16 nm NAND flash and found that the single-event upset (SEU) cross section varied inversely with fluence, decreasing as the fluence increased. We attribute the effect to the variable upset sensitivities of the memory cells. The current test standards and procedures assume that SEUs follow a Poisson process and do not take into account the variability of the error rate with fluence. Therefore, heavy ion irradiation of devices with a variable upset sensitivity distribution at typical fluence levels may underestimate the cross section and the on-orbit event rate.
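A hedged sketch of the mechanism described above: if each memory cell has its own upset cross section, the most sensitive cells saturate early, so the per-device cross section (upsets divided by fluence) falls as fluence grows. The log-normal sensitivity spread and all numbers below are purely illustrative, not fitted to the Micron part.

```python
import numpy as np

rng = np.random.default_rng(0)
n_cells = 10_000
# Hypothetical log-normal spread of per-cell sensitivities (cm^2/cell)
sens = rng.lognormal(mean=-18.0, sigma=2.0, size=n_cells)

def cross_section(fluence):
    """Per-device SEU cross section when each cell is counted at most
    once: expected number of upset cells divided by fluence."""
    p_upset = 1.0 - np.exp(-sens * fluence)   # Poisson hit probability per cell
    return p_upset.sum() / fluence

low_fluence = cross_section(1e7)    # few cells saturated: sigma ~ sum of sens
high_fluence = cross_section(1e11)  # sensitive cells exhausted: sigma falls
print(low_fluence, high_fluence)
```

Under the Poisson assumption criticized in the abstract, the two values would be equal; with a sensitivity distribution they are not.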
NASA Astrophysics Data System (ADS)
Console, R.; Vannoli, P.; Carluccio, R.
2016-12-01
The application of a physics-based earthquake simulation algorithm to the central Apennines region, where the 24 August 2016 Amatrice earthquake occurred, allowed the compilation of a synthetic seismic catalog lasting 100 ky, and containing more than 500,000 M ≥ 4.0 events, without the limitations that real catalogs suffer in terms of completeness, homogeneity and time duration. The algorithm on which this simulator is based is constrained by several physical elements, such as: (a) an average slip rate for every single fault in the investigated fault systems, (b) the process of rupture growth and termination, leading to a self-organized earthquake magnitude distribution, and (c) interaction between earthquake sources, including small magnitude events. Events nucleated in one fault are allowed to expand into neighboring faults, even belonging to a different fault system, if they are separated by less than a given maximum distance. The seismogenic model upon which we applied the simulator code was derived from the DISS 3.2.0 database (http://diss.rm.ingv.it/diss/), selecting all the fault systems that are recognized in the central Apennines region, for a total of 24 fault systems. The application of our simulation algorithm provides typical features in time, space and magnitude behavior of the seismicity, which are comparable with those of real observations. These features include long-term periodicity and clustering of strong earthquakes, and a realistic earthquake magnitude distribution departing from the linear Gutenberg-Richter distribution in the moderate and higher magnitude range. The statistical distribution of earthquakes with M ≥ 6.0 on single faults exhibits a fairly clear pseudo-periodic behavior, with a coefficient of variation Cv of the order of 0.3-0.6. We found in our synthetic catalog a clear trend of long-term acceleration of seismic activity preceding M ≥ 6.0 earthquakes and quiescence following those earthquakes.
Lastly, as an example of a possible use of synthetic catalogs, an attenuation law was applied to all the events reported in the synthetic catalog to produce maps showing the probability of exceedance of given peak ground acceleration (PGA) values across the territory under investigation.
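The coefficient of variation quoted above can be illustrated with synthetic recurrence sequences (all parameters hypothetical, not taken from the simulator): Cv near 1 indicates Poisson-like recurrence, while Cv in the 0.3-0.6 range indicates pseudo-periodic behavior of the kind found in the synthetic catalog.

```python
import numpy as np

def cv(times):
    """Coefficient of variation of inter-event times:
    Cv ~ 1 for a Poisson sequence, Cv << 1 for quasi-periodic recurrence."""
    dt = np.diff(np.sort(times))
    return dt.std() / dt.mean()

rng = np.random.default_rng(42)
poisson_like = np.cumsum(rng.exponential(350.0, size=200))     # random recurrence
periodic_like = np.cumsum(rng.normal(350.0, 120.0, size=200))  # pseudo-periodic

print(round(float(cv(poisson_like)), 2), round(float(cv(periodic_like)), 2))
```

Here the mean recurrence (350 time units) is the same in both sequences; only the regularity of the intervals differs, which is exactly what Cv isolates.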
Temperature Dependence Of Single-Event Effects
NASA Technical Reports Server (NTRS)
Coss, James R.; Nichols, Donald K.; Smith, Lawrence S.; Huebner, Mark A.; Soli, George A.
1990-01-01
Report describes an experimental study of the effects of temperature on the vulnerability of integrated-circuit memories and other electronic logic devices to single-event effects: spurious bit flips or latch-up in logic state caused by impacts of energetic ions. The study analyzed data on 14 different device types. In most cases examined, vulnerability to these effects increased or remained constant with temperature.
NASA Astrophysics Data System (ADS)
Koontz, S. L.; Atwell, W. A.; Reddell, B.; Rojdev, K.
2010-12-01
In this paper, we report the results of modeling and simulation studies in which the radiation transport code FLUKA (FLUktuierende KAskade) is used to determine the changes in total ionizing dose (TID) and single-event effect (SEE) environments behind aluminum, polyethylene, carbon, and titanium shielding masses when the assumed form (i.e., Band or Exponential) of the solar particle event (SPE) kinetic energy spectra is changed. FLUKA simulations are fully three dimensional, with an isotropic particle flux incident on a concentric spherical shell shielding mass and detector structure. FLUKA is a fully integrated and extensively verified Monte Carlo simulation package for the interaction and transport of high-energy particles and nuclei in matter. The effects are reported of both energetic primary protons penetrating the shield mass and secondary particle showers caused by energetic primary protons colliding with shielding mass nuclei. SPE heavy ion spectra are not addressed. Our results, in agreement with previous studies, show that use of the Exponential form of the event spectra can seriously underestimate spacecraft SPE TID and SEE environments in some, but not all, shielding mass cases. The SPE spectra investigated are taken from four specific SPEs that produced ground-level events (GLEs) during solar cycle 23 (1997-2008). GLEs are produced by highly energetic solar particle events (ESP), i.e., those that contain significant fluences of 700 MeV to 10 GeV protons. Highly energetic SPEs are implicated in increased rates of spacecraft anomalies and spacecraft failures. High-energy protons interact with Earth’s atmosphere via nuclear reaction to produce secondary particles, some of which are neutrons that can be detected at the Earth’s surface by the global neutron monitor network. GLEs are one part of the overall SPE resulting from a particular solar flare or coronal mass ejection event on the Sun.
The ESP part of the particle event, detected by spacecraft, is often associated with the arrival of a “shock front” at Earth some hours after the arrival of the GLE. The specific SPEs used in this analysis are those of: 1) November 6, 1997 - GLE only; 2) July 14-15, 2000 - GLE from the 14th plus ESP from the 15th; 3) November 4-6, 2001 - GLE and ESP from the 4th; and 4) October 28-29, 2003 - GLE and ESP from the 28th plus GLE from the 29th. The corresponding Band and Exponential spectra used in this paper are like those previously reported.
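A schematic comparison of the two spectral forms discussed above (all parameters below are hypothetical, not the fitted values from the paper): an exponential-in-rigidity spectrum collapses at high rigidity, while a Band-type double power law retains a significant tail, which is why the Exponential form can underestimate the >700 MeV proton environment behind shielding.

```python
import numpy as np

def expo(R, J0=1e9, R0=200.0):
    """Exponential-in-rigidity integral fluence (hypothetical fit); R in MV."""
    return J0 * np.exp(-R / R0)

def band(R, J0=1e9, g1=0.5, g2=4.0, R0=200.0):
    """Band-type spectrum: power law times exponential below the break
    rigidity Rb = (g2 - g1) * R0, continuing as a pure power law above it.
    The two branches join continuously at Rb."""
    Rb = (g2 - g1) * R0
    low = J0 * R ** (-g1) * np.exp(-R / R0)
    high = J0 * R ** (-g2) * Rb ** (g2 - g1) * np.exp(-(g2 - g1))
    return np.where(R <= Rb, low, high)

for R in (100.0, 1000.0, 4000.0):  # rigidities in MV
    print(R, float(expo(R)), float(band(R)))
```

At a few hundred MV the two fits are comparable, but by a few GV (the rigidity range of GeV protons) the exponential form has fallen orders of magnitude below the power-law tail.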
SINGLE EVENT EFFECTS TEST FACILITY AT OAK RIDGE NATIONAL LABORATORY
DOE Office of Scientific and Technical Information (OSTI.GOV)
Riemer, Bernie; Gallmeier, Franz X; Dominik, Laura J
2015-01-01
Increasing use of microelectronics of ever diminishing feature size in avionics systems has led to a growing Single Event Effects (SEE) susceptibility arising from the highly ionizing interactions of cosmic rays and solar particles. Single event effects caused by atmospheric radiation have been recognized in recent years as a design issue for avionics equipment and systems. To ensure a system meets all its safety and reliability requirements, SEE induced upsets and potential system failures need to be considered, including testing of the components and systems in a neutron beam. Testing of ICs and systems for use in radiation environments requires the utilization of highly advanced laboratory facilities that can run evaluations on microcircuits for the effects of radiation. This paper provides a background of the atmospheric radiation phenomenon and the resulting single event effects, including single event upset (SEU) and latch-up conditions. A study investigating requirements for future single event effect irradiation test facilities and developing options at the Spallation Neutron Source (SNS) is summarized. The relatively new SNS, with its 1.0 GeV proton beam, typical operation of 5000 h per year, expertise in spallation neutron sources, user program infrastructure, and decades of useful life ahead, is well suited for hosting a world-class SEE test facility in North America. Emphasis was put on testing of large avionics systems while still providing tunable high flux irradiation conditions for component tests. Makers of ground-based systems would also be served well by these facilities. Three options are described; the most capable, flexible, and highest-test-capacity option is a new stand-alone target station using about one kW of proton beam power on a gas-cooled tungsten target, with dual test enclosures. Less expensive options are also described.
Syed, Hamzah; Jorgensen, Andrea L; Morris, Andrew P
2016-06-01
We evaluated the power to detect associations between SNPs and time-to-event outcomes across a range of pharmacogenomic study designs, comparing alternative regression approaches. Simulations were conducted to compare Cox proportional hazards modeling, which accounts for censoring, with logistic regression modeling of a dichotomized outcome at the end of the study. The Cox proportional hazards model was demonstrated to be more powerful than the logistic regression analysis, and the difference in power between the approaches was highly dependent on the rate of censoring. Initial evaluation of single-nucleotide polymorphism association signals using computationally efficient software with dichotomized outcomes provides an effective screening tool for some design scenarios, and thus has important implications for the development of analytical protocols in pharmacogenomic studies.
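A rough numerical sketch of the censoring effect described above (a simplified simulation of our own, not the study's protocol): it only varies the follow-up window for a dichotomized two-group comparison, so the Cox model itself is not fitted here, but it shows how the information carried by a dichotomized outcome shrinks as censoring grows.

```python
import numpy as np

rng = np.random.default_rng(7)

def power_dichotomized(hazard_ratio=1.5, n=200, follow_up=1.0, reps=500):
    """Empirical power of a two-proportion z-test on 'event occurred by
    end of follow-up', with exponential event times per genotype group.
    Shorter follow-up means heavier administrative censoring."""
    hits = 0
    for _ in range(reps):
        t0 = rng.exponential(1.0, n)                  # baseline hazard 1
        t1 = rng.exponential(1.0 / hazard_ratio, n)   # elevated hazard
        p0, p1 = (t0 <= follow_up).mean(), (t1 <= follow_up).mean()
        p = 0.5 * (p0 + p1)
        se = np.sqrt(2.0 * p * (1.0 - p) / n)
        if se > 0 and abs(p1 - p0) / se > 1.96:
            hits += 1
    return hits / reps

power_long = power_dichotomized(follow_up=2.0)    # light censoring
power_short = power_dichotomized(follow_up=0.2)   # heavy censoring
print(power_long, power_short)
```

A time-to-event analysis would additionally use the ordering of the observed event times, which is the information the dichotomized screen discards.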
Impact of a single drop on the same liquid: formation, growth and disintegration of jets
NASA Astrophysics Data System (ADS)
Agbaglah, G. Gilou; Deegan, Robert
2015-11-01
One of the simplest splashing scenarios results from the impact of a single drop on the same liquid. The traditional understanding of this process is that the impact generates a jet that later breaks up into secondary droplets. Recently it was shown that even this simplest of scenarios is more complicated than expected, because multiple jets can be generated from a single impact event and there are bifurcations in the multiplicity of jets. First, we study the formation, growth and disintegration of jets following the impact of a drop on a thin film of the same liquid, using a combination of numerical simulations and linear stability theory. We obtain scaling relations from our simulations and use these as inputs to our stability analysis. We also use experiments and numerical simulations of a single drop impacting on a deep pool to examine the bifurcation from a single jet into two jets. Using high-speed X-ray imaging methods, we show that vortex separation within the drop leads to the formation of a second jet long after the formation of the ejecta sheet.
Evaluation of the Navys Sea/Shore Flow Policy
2016-06-01
CNA developed an independent Discrete-Event Simulation (DES) model to evaluate and assess the effect of the Navy's sea/shore flow policy. The policy produces a more steady manning level, but the variability remains, even if the system is optimized. In FY 2014, CNA developed this Discrete-Event Simulation model (the DES-SSF model) to evaluate the impact of sea/shore flow policy.
Radiation Requirements and Requirements Flowdown: Single Event Effects (SEEs) and Requirements
NASA Technical Reports Server (NTRS)
LaBel, Kenneth A.
2002-01-01
This short course session provides: (1) an overview of the single-particle-induced hazard for space systems as it applies in the natural space environment, focusing on the implementation of a single event effect hardness assurance (SEEHA) program for systems, including the system engineering approach and mitigation of effects; and (2) relevant real-life examples of the in-flight performance of systems.
NASA Technical Reports Server (NTRS)
LaBel, Kenneth A.; O'Bryan, Martha V.; Chen, Dakai; Campola, Michael J.; Casey, Megan C.; Pellish, Jonathan A.; Lauenstein, Jean-Marie; Wilcox, Edward P.; Topper, Alyson D.; Ladbury, Raymond L.;
2014-01-01
We present results and analysis from tests investigating the sensitivity of a variety of candidate spacecraft electronics to proton and heavy ion-induced single-event effects (SEE), proton-induced displacement damage (DD), and total ionizing dose (TID). This paper is a summary of test results.
NASA Astrophysics Data System (ADS)
Kawecki, Stacey; Steiner, Allison L.
2018-01-01
We examine how aerosol composition affects precipitation intensity using the Weather Research and Forecasting model with Chemistry (WRF-Chem, version 3.6). By changing the prescribed default hygroscopicity values to updated values from laboratory studies, we test model assumptions about the individual component hygroscopicity values of ammonium, sulfate, nitrate, and organic species. We compare a baseline simulation (BASE, using default hygroscopicity values) with four sensitivity simulations (SULF, increasing the sulfate hygroscopicity; ORG, decreasing organic hygroscopicity; SWITCH, using a concentration-dependent hygroscopicity value for ammonium; and ALL, including all three changes) to understand the role of aerosol composition on precipitation during a mesoscale convective system (MCS). Overall, the hygroscopicity changes influence the spatial patterns of precipitation and the intensity. Focusing on the maximum precipitation in the model domain downwind of an urban area, we find that changing the individual component hygroscopicities leads to bulk hygroscopicity changes, especially in the ORG simulation. Reducing bulk hygroscopicity (e.g., ORG simulation) initially causes fewer activated drops, weakened updrafts in the midtroposphere, and increased precipitation from larger hydrometeors. Increasing bulk hygroscopicity (e.g., SULF simulation) simulates more numerous and smaller cloud drops and increases precipitation. In the ALL simulation, a stronger cold pool and downdrafts lead to precipitation suppression later in the MCS evolution. In this downwind region, the combined changes in hygroscopicity (ALL) reduce the overprediction of intense events (>70 mm d-1) and better capture the range of moderate intensity (30-60 mm d-1) events. The results of this single MCS analysis suggest that aerosol composition can play an important role in simulating high-intensity precipitation events.
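The bulk hygroscopicity changes described above follow the standard volume-weighted kappa mixing rule (Petters and Kreidenweis, 2007). A minimal sketch; the component volume fractions and kappa values below are hypothetical, not the WRF-Chem settings used in the paper.

```python
def bulk_kappa(volume_fractions, kappas):
    """Kappa mixing rule: bulk hygroscopicity is the volume-fraction-
    weighted mean of the component hygroscopicities."""
    assert abs(sum(volume_fractions) - 1.0) < 1e-9
    return sum(f * k for f, k in zip(volume_fractions, kappas))

# Hypothetical urban-outflow aerosol: sulfate / nitrate / organics
fractions = [0.3, 0.2, 0.5]
base = bulk_kappa(fractions, [0.5, 0.67, 0.14])
org = bulk_kappa(fractions, [0.5, 0.67, 0.06])  # lower organic kappa, as in ORG
print(base, org)
```

Because organics make up half the particle volume in this example, lowering their kappa alone shifts the bulk value noticeably, which is the ORG-simulation effect on droplet activation described above.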
NASA Astrophysics Data System (ADS)
Wilms, Joern; Guenther, H. Moritz; Dauser, Thomas; Huenemoerder, David P.; Ptak, Andrew; Smith, Randall; Arcus Team
2018-01-01
We present an overview of the end-to-end simulation environment that we are implementing as part of the Arcus phase A study. With the Arcus simulator, we aim to model the imaging, detection, and event reconstruction properties of the spectrometer. The simulator uses a Monte Carlo ray-trace approach, projecting photons onto the Arcus focal plane from the silicon pore optic mirrors and critical-angle transmission gratings. We simulate the detection and read-out of the photons in the focal plane CCDs with software originally written for the eROSITA and Athena-WFI detectors; we include all relevant detector physics, such as charge splitting, and effects of the detector read-out, such as out-of-time events. The output of the simulation chain is an event list that closely resembles the data expected during flight. This event list is processed using a prototype event reconstruction chain for the order separation, wavelength calibration, and effective area calibration. The output is compatible with standard X-ray astronomical analysis software. During phase A, the end-to-end simulation approach is used to demonstrate the overall performance of the mission, including a full simulation of the calibration effort. Continued development during later phases of the mission will ensure that the simulator remains a faithful representation of the true mission capabilities, and will ultimately be used as the Arcus calibration model.
Single Color Multiplexed ddPCR Copy Number Measurements and Single Nucleotide Variant Genotyping.
Wood-Bouwens, Christina M; Ji, Hanlee P
2018-01-01
Droplet digital PCR (ddPCR) allows for accurate quantification of genetic events such as copy number variation and single nucleotide variants. Probe-based assays represent the current "gold-standard" for detection and quantification of these genetic events. Here, we introduce a cost-effective single color ddPCR assay that allows for single genome resolution quantification of copy number and single nucleotide variation.
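The quantification step in ddPCR rests on a Poisson correction: the fraction of negative droplets alone fixes the mean number of target copies per droplet. A minimal sketch of that calculation (the droplet volume is a typical assumed value, not a figure from the paper):

```python
import math

def copies_per_droplet(n_total, n_negative):
    """Poisson correction used in digital PCR: lambda = -ln(n_neg/n_total),
    the mean number of target copies per droplet, derived solely from the
    fraction of droplets that show no amplification."""
    return -math.log(n_negative / n_total)

def concentration(n_total, n_negative, droplet_volume_nl=0.85):
    """Target concentration in copies per microliter of reaction
    (droplet volume of ~0.85 nl is a typical assumption)."""
    lam = copies_per_droplet(n_total, n_negative)
    return lam / (droplet_volume_nl * 1e-3)   # nl -> ul

lam = copies_per_droplet(20_000, 15_000)   # 25% of droplets positive
print(round(lam, 4))                       # ~0.2877 copies per droplet
print(round(concentration(20_000, 15_000), 1))
```

Copy number and variant fraction estimates then follow from ratios of such lambda values between a target and a reference assay run on the same sample.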
Modelling and Simulation as a Recognizing Method in Education
ERIC Educational Resources Information Center
Stoffa, Veronika
2004-01-01
Computer animation-simulation models of complex processes and events, used as a method of instruction, can be an effective didactic device. Gaining deeper knowledge of the modelled objects helps in planning simulation experiments oriented on the processes and events being researched. Animation experiments realized on multimedia computers can aid easier…
Event-Based Robust Control for Uncertain Nonlinear Systems Using Adaptive Dynamic Programming.
Zhang, Qichao; Zhao, Dongbin; Wang, Ding
2018-01-01
In this paper, the robust control problem for a class of continuous-time nonlinear systems with unmatched uncertainties is investigated using an event-based control method. First, the robust control problem is transformed into a corresponding optimal control problem with an augmented control and an appropriate cost function. Under the event-based mechanism, we prove that the solution of the optimal control problem can asymptotically stabilize the uncertain system with an adaptive triggering condition. That is, the designed event-based controller is robust to the original uncertain system. Note that the event-based controller is updated only when the triggering condition is satisfied, which can save communication resources between the plant and the controller. Then, a single network adaptive dynamic programming structure with an experience replay technique is constructed to approximate the optimal control policies. The stability of the closed-loop system with the event-based control policy and the augmented control policy is analyzed using the Lyapunov approach. Furthermore, we prove that the minimal intersample time is bounded by a nonzero positive constant, which excludes Zeno behavior during the learning process. Finally, two simulation examples are provided to demonstrate the effectiveness of the proposed control scheme.
Total Dose Effects on Single Event Transients in Digital CMOS and Linear Bipolar Circuits
NASA Technical Reports Server (NTRS)
Buchner, S.; McMorrow, D.; Sibley, M.; Eaton, P.; Mavis, D.; Dusseau, L.; Roche, N. J-H.; Bernard, M.
2009-01-01
This presentation discusses the effects of ionizing radiation on single event transients (SETs) in circuits. The exposure of integrated circuits to ionizing radiation changes electrical parameters. The total ionizing dose effect is observed in both complementary metal-oxide-semiconductor (CMOS) and bipolar circuits. In bipolar circuits, transistors exhibit grain degradation, while in CMOS circuits, transistors exhibit threshold voltage shifts. Changes in electrical parameters can cause changes in single event upset(SEU)/SET rates. Depending on the effect, the rates may increase or decrease. Therefore, measures taken for SEU/SET mitigation might work at the beginning of a mission but not at the end following TID exposure. The effect of TID on SET rates should be considered if SETs cannot be tolerated.
Tudur Smith, Catrin; Gueyffier, François; Kolamunnage‐Dona, Ruwanthi
2017-01-01
Background Joint modelling of longitudinal and time-to-event data is often preferred over separate longitudinal or time-to-event analyses as it can account for study dropout, error in longitudinally measured covariates, and correlation between longitudinal and time-to-event outcomes. The joint modelling literature focuses mainly on the analysis of single studies, with no methods currently available for the meta-analysis of joint model estimates from multiple studies. Methods We propose a 2-stage method for meta-analysis of joint model estimates. These methods are applied to the INDANA dataset to combine joint model estimates of systolic blood pressure with time to death, time to myocardial infarction, and time to stroke. Results are compared to meta-analyses of separate longitudinal or time-to-event models. A simulation study is conducted to contrast separate versus joint analyses over a range of scenarios. Results Using the real dataset, similar results were obtained by using the separate and joint analyses. However, the simulation study indicated a benefit of using joint rather than separate methods in a meta-analytic setting where association exists between the longitudinal and time-to-event outcomes. Conclusions Where evidence of association between longitudinal and time-to-event outcomes exists, joint model results, rather than those from standalone analyses, should be pooled in 2-stage meta-analyses. PMID:29250814
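Stage 2 of such a 2-stage meta-analysis is ordinary inverse-variance pooling of the per-study joint-model estimates. A minimal fixed-effect sketch with hypothetical numbers (not the INDANA results):

```python
import math

def fixed_effect_meta(estimates, ses):
    """Inverse-variance (fixed-effect) pooling of per-study estimates,
    e.g., joint-model association coefficients from stage 1."""
    w = [1.0 / se ** 2 for se in ses]
    pooled = sum(wi * b for wi, b in zip(w, estimates)) / sum(w)
    pooled_se = math.sqrt(1.0 / sum(w))
    return pooled, pooled_se

# Hypothetical per-trial estimates of the longitudinal-survival association
beta_hat = [0.42, 0.35, 0.51]
se_hat = [0.10, 0.08, 0.15]
pooled, se = fixed_effect_meta(beta_hat, se_hat)
print(round(pooled, 3), round(se, 3))   # -> 0.397 0.058
```

A random-effects variant would additionally estimate between-study heterogeneity before weighting, but the pooling arithmetic is the same shape.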
NASA Technical Reports Server (NTRS)
Ladbury, R.; Reed, R. A.; Marshall, P. W.; LaBel, K. A.; Anantaraman, R.; Fox, R.; Sanderson, D. P.; Stolz, A.; Yurkon, J.; Zeller, A. F.;
2004-01-01
The performance of Michigan State University's Single-Event Effects Test Facility (SEETF) during its inaugural runs is evaluated. Beam profiles and other diagnostics are presented, and prospects for future development and testing are discussed.
Single Event Effects in FPGA Devices 2015-2016
NASA Technical Reports Server (NTRS)
Berg, Melanie; LaBel, Kenneth; Pellish, Jonathan
2016-01-01
This presentation provides an overview of single event effects in FPGA devices 2015-2016, including commercial Xilinx V5 heavy ion accelerated testing, Xilinx Kintex-7 heavy ion accelerated testing, a mitigation study, and an investigation of various types of triple modular redundancy (TMR) for commercial SRAM-based FPGAs.
Single Event Effects in FPGA Devices 2014-2015
NASA Technical Reports Server (NTRS)
Berg, Melanie D.; LaBel, Kenneth A.; Pellish, Jonathan
2015-01-01
This presentation provides an overview of single event effects in FPGA devices 2014-2015, including commercial Xilinx V5 heavy ion accelerated testing, Xilinx Kintex-7 heavy ion accelerated testing, a mitigation study, and an investigation of various types of triple modular redundancy (TMR) for commercial SRAM-based FPGAs.
Single Event Effects in FPGA Devices 2015-2016
NASA Technical Reports Server (NTRS)
Berg, Melanie; LaBel, Kenneth; Pellish, Jonathan
2016-01-01
This presentation provides an overview of single event effects in FPGA devices 2015-2016, including commercial Xilinx V5 heavy ion accelerated testing, Xilinx Kintex-7 heavy ion accelerated testing, a mitigation study, and an investigation of various types of triple modular redundancy (TMR) for commercial SRAM-based FPGAs.
NASA Technical Reports Server (NTRS)
1983-01-01
Topics discussed include radiation effects in devices; the basic mechanisms of radiation effects in structures and materials; radiation effects in integrated circuits; spacecraft charging and space radiation effects; hardness assurance for devices and systems; and radiation transport, energy deposition and charge collection. Papers are presented on the mechanisms of small instabilities in irradiated MOS transistors, on the radiation effects on oxynitride gate dielectrics, on the discharge characteristics of a simulated solar cell array, and on latchup in CMOS devices from heavy ions. Attention is also given to proton upsets in orbit, to the modeling of single-event upset in bipolar integrated circuits, to high-resolution studies of the electrical breakdown of soil, and to a finite-difference solution of Maxwell's equations in generalized nonorthogonal coordinates.
Serial Founder Effects During Range Expansion: A Spatial Analog of Genetic Drift
Slatkin, Montgomery; Excoffier, Laurent
2012-01-01
Range expansions cause a series of founder events. We show that, in a one-dimensional habitat, these founder events are the spatial analog of genetic drift in a randomly mating population. The spatial series of allele frequencies created by successive founder events is equivalent to the time series of allele frequencies in a population of effective size ke, the effective number of founders. We derive an expression for ke in a discrete-population model that allows for local population growth and migration among established populations. If there is selection, the net effect is determined approximately by the product of the selection coefficients and the number of generations between successive founding events. We use the model of a single population to compute analytically several quantities for an allele present in the source population: (i) the probability that it survives the series of colonization events, (ii) the probability that it reaches a specified threshold frequency in the last population, and (iii) the mean and variance of the frequencies in each population. We show that the analytic theory provides a good approximation to simulation results. A consequence of our approximation is that the average heterozygosity of neutral alleles decreases by a factor of 1 – 1/(2ke) in each new population. Therefore, the population genetic consequences of surfing can be predicted approximately by the effective number of founders and the effective selection coefficients, even in the presence of migration among populations. We also show that our analytic results are applicable to a model of range expansion in a continuously distributed population. PMID:22367031
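The per-founder decay of heterozygosity quoted above can be iterated directly. The sketch below is a hypothetical helper, not the authors' code; it simply applies the factor 1 − 1/(2ke) from the abstract:

```python
def heterozygosity_series(h0, ke, n_events):
    """Expected neutral heterozygosity after each of n_events serial
    founder events, each multiplying H by 1 - 1/(2*ke)."""
    decay = 1.0 - 1.0 / (2.0 * ke)
    return [h0 * decay ** i for i in range(n_events + 1)]

# With ke = 10 effective founders, each new population keeps 95% of
# the previous population's heterozygosity.
series = heterozygosity_series(0.5, ke=10, n_events=20)
```

The geometric decay makes clear why the effective number of founders, rather than the census size, controls how quickly diversity erodes along the expansion front.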
Stick-slip Cycles and Tidal Modulation of Ice Stream Flow
NASA Astrophysics Data System (ADS)
Lipovsky, B.; Dunham, E. M.
2016-12-01
The reactivation of a single dormant Antarctic ice stream would double the continent's mass imbalance. Despite the importance of understanding the likelihood of such an event, direct observations of the basal processes that lead to the activation and stagnation of streaming ice are minimal. As the only ice stream undergoing stagnation, the Whillans Ice Plain (WIP) occupies a central role in our understanding of these subglacial processes. Complicating matters is the observation, from GPS records, that the WIP experiences most of its motion during episodes of rapid sliding. These sliding events are tidally modulated and separated by 12-hour periods of quiescence. We conduct numerical simulations of ice stream stick-slip cycles. Our simulations include rate- and state-dependent frictional sliding, tidal forcing, inertia, and upstream loading in a cross-stream, thickness-averaged formulation. Our principal finding is that ice stream motion may respond to ocean tidal forcing with one of two end-member behaviors. In one limit, tidally modulated slip events have rupture velocities that approach the shear wave speed, and slip events have a duration that scales with the ice stream width divided by the shear wave speed. In the other limit, tidal modulation results in ice stream sliding velocities with lower amplitude variation but at much longer timescales, i.e., semi-diurnal and longer. This latter behavior more closely mimics that of several active ice streams (Bindschadler, Rutford). We find that WIP slip events lie between these two end-member behaviors: rupture velocities are far below the inertial limit, yet sliding occurs only episodically. The continuum of sliding behaviors is governed by a critical ice stream width over which slip events nucleate. When the critical width is much longer than the ice stream width, slip events are unable to nucleate.
The critical width depends on the subglacial effective pressure, ice thickness, and frictional and elastic constitutive parameters. One implication of our work is that, because the transition from steady to episodic sliding may occur through changes in subglacial effective pressure, such changes may be responsible for the stagnation of the WIP.
Mixed-field GCR Simulations for Radiobiological Research Using Ground Based Accelerators
NASA Technical Reports Server (NTRS)
Kim, Myung-Hee Y.; Rusek, Adam; Cucinotta, Francis A.
2014-01-01
Space radiation is comprised of a large number of particle types and energies, which have differential ionization power from high energy protons to high charge and energy (HZE) particles and secondary neutrons produced by galactic cosmic rays (GCR). Ground based accelerators such as the NASA Space Radiation Laboratory (NSRL) at Brookhaven National Laboratory (BNL) are used to simulate space radiation for radiobiology research and dosimetry, electronics parts, and shielding testing using mono-energetic beams for single ion species. As a tool to support research on new risk assessment models, we have developed a stochastic model of heavy ion beams and space radiation effects, the GCR Event-based Risk Model computer code (GERMcode). For radiobiological research on mixed-field space radiation, a new GCR simulator at NSRL is proposed. The NSRL-GCR simulator, which implements the rapid switching mode and the higher energy beam extraction to 1.5 GeV/u, can integrate multiple ions into a single simulation to create GCR Z-spectrum in major energy bins. After considering the GCR environment and energy limitations of NSRL, a GCR reference field is proposed after extensive simulation studies using the GERMcode. The GCR reference field is shown to reproduce the Z and LET spectra of GCR behind shielding within 20% accuracy compared to simulated full GCR environments behind shielding. A major challenge for space radiobiology research is to consider chronic GCR exposure of up to 3-years in relation to simulations with cell and animal models of human risks. We discuss possible approaches to map important biological time scales in experimental models using ground-based simulation with extended exposure of up to a few weeks and fractionation approaches at a GCR simulator.
Evaluation of the Navy's Sea/Shore Flow Policy
2016-06-01
CNA developed an independent Discrete-Event Simulation model to evaluate and assess the effect of alternative sea/shore flow policies. In this study...remains, even if the system is optimized. In building a Discrete-Event Simulation model, we discovered key factors that should be included in the... Discrete-Event Simulation model to evaluate the impact of sea/shore flow policy (the DES-SSF model) and compared the results with the SSFM for one
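The report's Discrete-Event Simulation approach can be illustrated with a minimal event-queue core. This is a generic sketch, not CNA's DES-SSF model; the handler names and tour lengths are invented for illustration:

```python
import heapq

def run_des(initial_events, horizon):
    """Minimal discrete-event loop: repeatedly pop the earliest event,
    run its handler, and schedule whatever events the handler returns.
    Entries are (time, seq, handler); seq breaks ties without comparing
    handler functions."""
    queue = list(initial_events)
    heapq.heapify(queue)
    log = []
    while queue:
        time, seq, handler = heapq.heappop(queue)
        if time > horizon:
            break
        log.append((time, handler.__name__))
        for event in handler(time):
            heapq.heappush(queue, event)
    return log

# Hypothetical handlers: a sea tour ending at year 4 triggers a
# 3-year shore tour for the same sailor.
def end_sea_tour(t):
    return [(t + 3.0, 1, end_shore_tour)]

def end_shore_tour(t):
    return []

log = run_des([(4.0, 0, end_sea_tour)], horizon=10.0)
# log records the sea tour ending at t=4 and the shore tour at t=7.
```

Real sea/shore flow models layer billet inventories, rotation rules, and stochastic losses on top of exactly this kind of event loop.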
Kreilinger, Alex; Hiebel, Hannah; Müller-Putz, Gernot R
2016-03-01
This work aimed to find and evaluate a new method for detecting errors in continuous brain-computer interface (BCI) applications. Instead of classifying errors on a single-trial basis, the new method was based on multiple events (MEs) analysis to increase the accuracy of error detection. In a BCI-driven car game, based on motor imagery (MI), discrete events were triggered whenever subjects collided with coins and/or barriers. Coins counted as correct events, whereas barriers were errors. This new method, termed ME method, combined and averaged the classification results of single events (SEs) and determined the correctness of MI trials, which consisted of event sequences instead of SEs. The benefit of this method was evaluated in an offline simulation. In an online experiment, the new method was used to detect erroneous MI trials. Such MI trials were discarded and could be repeated by the users. We found that, even with low SE error potential (ErrP) detection rates, feasible accuracies can be achieved when combining MEs to distinguish erroneous from correct MI trials. Online, all subjects reached higher scores with error detection than without, at the cost of longer times needed for completing the game. Findings suggest that ErrP detection may become a reliable tool for monitoring continuous states in BCI applications when combining MEs. This paper demonstrates a novel technique for detecting errors in online continuous BCI applications, which yields promising results even with low single-trial detection rates.
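The core idea of the ME method — averaging single-event classifier outputs over an event sequence before judging the trial — can be sketched as follows. The function name, threshold, and probabilities are illustrative assumptions, not the authors' implementation:

```python
def trial_is_error(event_probs, threshold=0.5):
    """ME-style decision: average the per-event error probabilities from
    a single-event ErrP classifier over the whole MI trial and flag the
    trial as erroneous if the mean exceeds the threshold."""
    return sum(event_probs) / len(event_probs) > threshold

# Individually the second event would pass, but the averaged sequence
# flags the trial, illustrating how MEs stabilize weak SE detections.
flagged = trial_is_error([0.55, 0.48, 0.62])
```

Averaging over events trades decision latency (the trial must finish) for robustness against single-trial ErrP misclassifications, which matches the paper's finding of higher scores at the cost of longer game times.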
Spanoudaki, V C; Lau, F W Y; Vandenbroucke, A; Levin, C S
2010-11-01
This study aims to address design considerations of a high resolution, high sensitivity positron emission tomography scanner dedicated to breast imaging. The methodology uses a detailed Monte Carlo model of the system structures to obtain a quantitative evaluation of several performance parameters. Special focus was given to the effect of dense mechanical structures designed to provide mechanical robustness and thermal regulation to the minuscule and temperature sensitive detectors. For the energies of interest around the photopeak (450-700 keV energy window), the simulation results predict a 6.5% reduction in the single photon detection efficiency and a 12.5% reduction in the coincidence photon detection efficiency in the case that the mechanical structures are interspersed between the detectors. However for lower energies, a substantial increase in the number of detected events (approximately 14% and 7% for singles at a 100-200 keV energy window and coincidences at a lower energy threshold of 100 keV, respectively) was observed with the presence of these structures due to backscatter. The number of photon events that involve multiple interactions in various crystal elements is also affected by the presence of the structures. For photon events involving multiple interactions among various crystal elements, the coincidence photon sensitivity is reduced by as much as 20% for a point source at the center of the field of view. There is no observable effect on the intrinsic and the reconstructed spatial resolution and spatial resolution uniformity. Mechanical structures can have a considerable effect on system sensitivity, especially for systems processing multi-interaction photon events. This effect, however, does not impact the spatial resolution. Various mechanical structure designs are currently under evaluation in order to achieve optimum trade-off between temperature stability, accurate detector positioning, and minimum influence on system performance.
NASA Technical Reports Server (NTRS)
Saab, T.; Figueroa-Feliciano, E.; Iyomoto, N.; Bandler, S. R.; Chervenak, J.; Finkbeiner, F.; Kelley, R.; Kilbourne, C. A.; Porter, F. S.; Sadleir, J.
2005-01-01
An ideal microcalorimeter is characterized by a constant energy resolution across the sensor's dynamic range. Any dependence of pulse shape on the position within the absorber where an event occurs leads to a degradation in resolution that is linear with the event's energy (excess broadening). In this paper we present a numerical simulation developed to model the variation in pulse shape with position, based on the thermal conductivity within the absorber and between the absorber, sensor, and heat bath, for arbitrarily shaped absorbers and sensors. All the parameters required for the simulation can be measured from actual devices. We describe how the thermal conductivity of the absorber material is determined by comparing the results of this model with data taken from a position-sensitive detector in which any position-dependent effect is purposely emphasized by making a long, narrow absorber that is read out by sensors on both ends. Finally, we present the implications for excess broadening given the measured parameters of our X-ray microcalorimeters.
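A lumped two-node version of the absorber-sensor-bath network conveys why pulse shape depends on thermal conductances; the paper's model handles arbitrary geometries, so this is only a toy sketch with invented parameter values:

```python
def sensor_pulse(g_link, g_bath, c_abs=1.0, c_sen=1.0,
                 e_dep=1.0, dt=1e-3, steps=5000):
    """Lumped two-node thermal model: an event deposits energy in the
    absorber, which is coupled to the sensor by conductance g_link; the
    sensor leaks to the heat bath through g_bath.  Returns the sensor
    temperature trace (the 'pulse')."""
    Ta, Ts = e_dep / c_abs, 0.0   # absorber heats instantly at the event
    trace = []
    for _ in range(steps):
        flow = g_link * (Ta - Ts)
        Ta -= dt * flow / c_abs
        Ts += dt * (flow - g_bath * Ts) / c_sen
        trace.append(Ts)
    return trace

# A weaker absorber-sensor link (e.g. an event far from the sensor)
# yields a smaller, slower pulse for the same deposited energy.
strong = sensor_pulse(g_link=5.0, g_bath=1.0)
weak = sensor_pulse(g_link=0.5, g_bath=1.0)
```

Since pulse height and timing shift with the effective conductance, events of equal energy at different positions produce different pulses — the origin of the excess broadening discussed above.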
Effects of multiple scattering on time- and depth-resolved signals in airborne lidar systems
NASA Technical Reports Server (NTRS)
Punjabi, A.; Venable, D. D.
1986-01-01
A semianalytic Monte Carlo radiative transfer model (SALMON) is employed to probe the effects of multiple-scattering events on the time- and depth-resolved lidar signals from homogeneous aqueous media. The effective total attenuation coefficients in the single-scattering approximation are determined as functions of dimensionless parameters characterizing the lidar system and the medium. Results show that single-scattering events dominate when these parameters are close to their lower bounds and that when their values exceed unity multiple-scattering events dominate.
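The crossover from single- to multiple-scattering dominance with increasing optical depth can be reproduced with a toy Monte Carlo far simpler than SALMON; everything here (Poisson scattering counts, parameter values) is an illustrative assumption:

```python
import random

def scatter_order_fractions(optical_depth, n_photons=50_000, seed=1):
    """Toy model: the number of scattering events along a path of given
    optical depth is Poisson distributed (sampled by accumulating
    exponential free paths).  Returns the fractions of photons scattered
    exactly once and more than once."""
    rng = random.Random(seed)
    single = multiple = 0
    for _ in range(n_photons):
        n, depth = 0, rng.expovariate(1.0)
        while depth < optical_depth:
            n += 1
            depth += rng.expovariate(1.0)
        if n == 1:
            single += 1
        elif n > 1:
            multiple += 1
    return single / n_photons, multiple / n_photons

# Optically thin media are dominated by single scattering; beyond an
# optical depth of order unity, multiple scattering takes over.
```

This mirrors the abstract's conclusion: single scattering dominates when the dimensionless system parameters are near their lower bounds, and multiple scattering dominates once they exceed unity.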
Skipped Stage Modeling and Testing of the CPAS Main Parachutes
NASA Technical Reports Server (NTRS)
Varela, Jose G.; Ray, Eric S.
2013-01-01
The Capsule Parachute Assembly System (CPAS) has undergone the transition from modeling a skipped stage event using a simulation that treats a cluster of parachutes as a single composite canopy to the capability of simulating each parachute individually. This capability along with data obtained from skipped stage flight tests has been crucial in modeling the behavior of a skipping canopy as well as the crowding effect on non-skipping ("lagging") neighbors. For the finite mass inflation of CPAS Main parachutes, the cluster is assumed to inflate nominally through the nominal fill time, at which point the skipping parachute continues inflating. This sub-phase modeling method was used to reconstruct three flight tests involving skipped stages. Best fit inflation parameters were determined for both the skipping and lagging canopies.
NASA Technical Reports Server (NTRS)
Reddell, Brandon D.; Bailey, Charles R.; Nguyen, Kyson V.; O'Neill, Patrick M.; Wheeler, Scott; Gaza, Razvan; Cooper, Jaime; Kalb, Theodore; Patel, Chirag; Beach, Elden R.
2017-01-01
We present the results of Single Event Effects (SEE) testing with high energy protons and with low and high energy heavy ions for electrical components considered for Low Earth Orbit (LEO) and for deep space applications.
NASA Astrophysics Data System (ADS)
Tian, A.; Degeling, A. W.
2017-12-01
Simulations and observations have shown that a single positive or negative solar wind dynamic pressure pulse excites geomagnetic impulsive events, along with ionospheric and/or magnetospheric vortices connected by field-aligned currents (FACs). In this work, a large-scale (~9 min) magnetic hole event in the solar wind provided the opportunity to study the effects of a positive-negative pulse pair (Δp/p ~ 1) on the magnetosphere and ionosphere. During the magnetic hole event, two traveling convection vortices (TCVs, anti-sunward), first in anticlockwise and then in clockwise rotation, were detected by geomagnetic stations located along the 10:30 MLT meridian. At the same time, another pair of ionospheric vortices, extending azimuthally up to ~3 h in MLT and rotating first clockwise then counter-clockwise, appeared in the afternoon sector (~14 MLT), centered at ~75 MLAT, without an obvious tailward propagation feature. The duskside vortices were also confirmed in SuperDARN radar data. We simulated the magnetosphere being struck by a positive-negative pulse pair; the simulation shows that a pair of reversed flow vortices appears in the magnetospheric equatorial plane, which may provide the FACs for the vortices observed in the ionosphere. The dawn-dusk asymmetry of the vortices, as well as the global geomagnetic perturbation characteristics, is also discussed.
SU-G-IeP4-12: Performance of In-111 Coincident Gamma-Ray Counting: A Monte Carlo Simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pahlka, R; Kappadath, S; Mawlawi, O
2016-06-15
Purpose: The decay of In-111 results in a non-isotropic gamma-ray cascade, which is normally imaged using a gamma camera. Creating images with a gamma camera using coincident gamma-rays from In-111 has not been previously studied. Our objective was to explore the feasibility of imaging this cascade as coincidence events and to determine the optimal timing resolution and source activity using Monte Carlo simulations. Methods: GEANT4 was used to simulate the decay of the In-111 nucleus and to model the gamma camera. Each photon emission was assigned a timestamp, and the time delay and angular separation for the second gamma-ray in the cascade was consistent with the known intermediate state half-life of 85 ns. The gamma-rays are transported through a model of a Siemens dual head Symbia "S" gamma camera with a 5/8-inch thick crystal and medium energy collimators. A true coincident event was defined as a single 171 keV gamma-ray followed by a single 245 keV gamma-ray within a specified time window (or vice versa). Several source activities (ranging from 10 uCi to 5 mCi) with and without incorporation of background counts were then simulated. Each simulation was analyzed using varying time windows to assess random events. The noise equivalent count rate (NECR) was computed based on the number of true and random counts for each combination of activity and time window. No scatter events were assumed since sources were simulated in air. Results: As expected, increasing the timing window increased the total number of observed coincidences, albeit at the expense of true coincidences. A timing window range of 200–500 ns maximizes the NECR at clinically-used source activities. The background rate did not significantly alter the maximum NECR. Conclusion: This work suggests coincident measurements of In-111 gamma-ray decay can be performed with commercial gamma cameras at clinically-relevant activities. Work is ongoing to assess useful clinical applications.
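The NECR trade-off described above — wider windows gain total coincidences but lose to randoms — can be sketched numerically. The count rates below are invented for illustration; only the NECR formula (with the abstract's no-scatter assumption) is standard:

```python
def necr(trues, randoms, scatters=0.0):
    """Noise-equivalent count rate, NECR = T**2 / (T + S + R); the
    abstract's in-air sources justify S = 0."""
    total = trues + scatters + randoms
    return trues ** 2 / total if total else 0.0

# Hypothetical count rates versus coincidence timing window: widening
# the window raises randoms faster than trues, so NECR peaks between.
windows_ns = [100, 200, 500, 1000]
trues      = [40.0, 70.0, 90.0, 95.0]
randoms    = [ 2.0,  6.0, 30.0, 120.0]
curve = [necr(t, r) for t, r in zip(windows_ns and trues, randoms)]
curve = [necr(t, r) for t, r in zip(trues, randoms)]
```

Scanning such a curve over candidate windows is how an optimum like the reported 200–500 ns range would be identified.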
Hoefling, Martin; Lima, Nicola; Haenni, Dominik; Seidel, Claus A. M.; Schuler, Benjamin; Grubmüller, Helmut
2011-01-01
Förster Resonance Energy Transfer (FRET) experiments probe molecular distances via distance dependent energy transfer from an excited donor dye to an acceptor dye. Single molecule experiments not only probe average distances, but also distance distributions or even fluctuations, and thus provide a powerful tool to study biomolecular structure and dynamics. However, the measured energy transfer efficiency depends not only on the distance between the dyes, but also on their mutual orientation, which is typically inaccessible to experiments. Thus, assumptions on the orientation distributions and averages are usually made, limiting the accuracy of the distance distributions extracted from FRET experiments. Here, we demonstrate that by combining single molecule FRET experiments with the mutual dye orientation statistics obtained from Molecular Dynamics (MD) simulations, improved estimates of distances and distributions are obtained. From the simulated time-dependent mutual orientations, FRET efficiencies are calculated and the full statistics of individual photon absorption, energy transfer, and photon emission events is obtained from subsequent Monte Carlo (MC) simulations of the FRET kinetics. All recorded emission events are collected to bursts from which efficiency distributions are calculated in close resemblance to the actual FRET experiment, taking shot noise fully into account. Using polyproline chains with attached Alexa 488 and Alexa 594 dyes as a test system, we demonstrate the feasibility of this approach by direct comparison to experimental data. We identified cis-isomers and different static local environments as sources of the experimentally observed heterogeneity. Reconstructions of distance distributions from experimental data at different levels of theory demonstrate how the respective underlying assumptions and approximations affect the obtained accuracy. 
Our results show that dye fluctuations obtained from MD simulations, combined with MC single photon kinetics, provide a versatile tool to improve the accuracy of distance distributions that can be extracted from measured single molecule FRET efficiencies. PMID:21629703
3D Thermal and Mechanical Analysis of a Single Event Burnout
NASA Astrophysics Data System (ADS)
Peretti, Gabriela; Demarco, Gustavo; Romero, Eduardo; Tais, Carlos
2015-08-01
This paper presents a study of the thermal and mechanical behavior of power DMOS transistors during a Single Event Burnout (SEB) process. We use a cylindrical heat generation region to emulate the thermal and mechanical phenomena related to the SEB; in this way, the complexity of a full mathematical treatment of the ion-device interaction is avoided. This work considers locating the heat generation region in positions that are more realistic than those used in previous work. To perform the study, we formulate and validate a new 3D model of the transistor that keeps the computational cost at a reasonable level. The resulting mathematical models are solved by means of the Finite Element Method. The simulation results show that the failure dynamics is dominated by mechanical stress in the metal layer. Additionally, the time to failure depends on the heat source position for a given power and dimension of the generation region. The results suggest that 3D modeling should be considered for a detailed study of thermal and mechanical effects induced by SEBs.
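A 1D finite-difference toy with a localized heat source illustrates the kind of calculation involved, though the paper solves a full 3D model with the Finite Element Method; all parameter values here are invented:

```python
def heat_1d(n=51, steps=2000, alpha=0.25, source_idx=25, q=1.0):
    """Explicit finite differences for the 1-D heat equation with a
    localized heat source and fixed T = 0 boundaries; alpha is the
    dimensionless diffusion number (stable for alpha <= 0.5)."""
    T = [0.0] * n
    for _ in range(steps):
        new = T[:]
        for i in range(1, n - 1):
            new[i] = T[i] + alpha * (T[i - 1] - 2.0 * T[i] + T[i + 1])
        new[source_idx] += q    # energy deposited at the source each step
        T = new
    return T

profile = heat_1d()
# The hottest point sits at the source; moving source_idx toward a
# boundary (a heat sink) lowers the peak temperature, echoing the
# paper's finding that time to failure depends on source position.
```

The same source-position sensitivity, resolved in 3D with realistic material properties and coupled to mechanical stress, is what drives the paper's conclusions.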
Single event effects and laser simulation studies
NASA Technical Reports Server (NTRS)
Kim, Q.; Schwartz, H.; Mccarty, K.; Coss, J.; Barnes, C.
1993-01-01
The single event upset (SEU) linear energy transfer threshold (LETTH) of radiation-hardened 64K Static Random Access Memories (SRAMs) was measured with a picosecond pulsed dye laser system. These results were compared with standard heavy ion accelerator (Brookhaven National Laboratory (BNL)) measurements of the same SRAMs. With heavy ions, the LETTH of the Honeywell HC6364 was 27 MeV-sq cm/mg at 125 C, compared with a value of 24 MeV-sq cm/mg obtained with the laser. In the case of the second type of 64K SRAM, the IBM6401CRH, no upsets were observed at 125 C with the highest-LET ions used at BNL. In contrast, the pulsed dye laser tests indicated a value of 90 MeV-sq cm/mg at room temperature for the SEU-hardened IBM SRAM. No latchups or multiple SEUs were observed on any of the SRAMs, even under worst-case conditions. The results of this study suggest that the laser can be used as an inexpensive laboratory SEU prescreen tool in certain cases.
Effects of complex life cycles on genetic diversity: cyclical parthenogenesis.
Rouger, R; Reichel, K; Malrieu, F; Masson, J P; Stoeckel, S
2016-11-01
Neutral patterns of population genetic diversity in species with complex life cycles are difficult to anticipate. Cyclical parthenogenesis (CP), in which organisms undergo several rounds of clonal reproduction followed by a sexual event, is one such life cycle. Many species, including crop pests (aphids), human parasites (trematodes) or models used in evolutionary science (Daphnia), are cyclical parthenogens. It is therefore crucial to understand the impact of such a life cycle on neutral genetic diversity. In this paper, we describe distributions of genetic diversity under conditions of CP with various clonal phase lengths. Using a Markov chain model of CP for a single locus and individual-based simulations for two loci, our analysis first demonstrates that strong departures from full sexuality are observed after only a few generations of clonality. The convergence towards predictions made under conditions of full clonality during the clonal phase depends on the balance between mutations and genetic drift. Second, the sexual event of CP usually resets the genetic diversity at a single locus towards predictions made under full sexuality. However, this single recombination event is insufficient to reshuffle gametic phases towards full-sexuality predictions. Finally, for similar levels of clonality, CP and acyclic partial clonality (wherein a fixed proportion of individuals are clonally produced within each generation) differentially affect the distribution of genetic diversity. Overall, this work provides solid predictions of neutral genetic diversity that may serve as a null model in detecting the action of common evolutionary or demographic processes in cyclical parthenogens (for example, selection or bottlenecks).
Ventilation of Animal Shelters in Wildland Fire Scenarios
NASA Astrophysics Data System (ADS)
Bova, A. S.; Bohrer, G.; Dickinson, M. B.
2009-12-01
The effects of wildland fires on cavity-nesting birds and bats, as well as fossorial mammals and burrow-using reptiles, are of considerable interest to the fire management community. However, relatively little is known about the degree of protection afforded by various animal shelters in wildland fire events. We present results from our ongoing investigation, utilizing NIST's Fire Dynamics Simulator (FDS) and experimental data, of the effectiveness of common shelter configurations in protecting animals from combustion products. We compare two sets of simulations with observed experimental results. In the first set, wind tunnel experiments on single-entry room ventilation by Larsen and Heiselberg (2008) were simulated in a large domain resolved into 10 cm cubic cells. The set of 24 simulations comprised all combinations of incident wind speeds of 1, 3, and 5 m/s; angles of attack of 0, 45, 90, and 180 degrees from the horizontal normal to the entrance; and temperature differences of 0 and 10 degrees C between the building interior and exterior. Simulation results were in good agreement with experimental data, thus providing a validation of the FDS code for further ventilation experiments. In the second set, a cubic simulation domain of ~1 m on edge, resolved into 1 cm cubic cells, was set up to represent the experiments by Ar et al. (2004) on wind-induced ventilation of woodpecker cavities. As in the experiments, we simulated wind parallel and perpendicular to the cavity entrance with different mean forcing velocities, and monitored the rates of evacuation of a neutral-buoyancy tracer from the cavity. Simulated ventilation rates in many, though not all, cases fell within the range of experimental data. Reasons for these differences, which include vagueness in the experimental setup, will be discussed. Our simulations provide a tool to estimate the viability of an animal in a shelter as a function of the shelter geometry and the fire intensity.
In addition to the above, we explore the role of turbulence and its effect on ventilation rates, especially in single-entrance shelters. The goal of this work is to provide engineering formulas to estimate the probable levels of harmful or irritating combustion products in animal shelters during wildland fires.
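The 24-run matrix in the first simulation set is simply the Cartesian product of three small parameter sets (3 wind speeds × 4 angles × 2 temperature differences). A minimal Python sketch of how such a parameter sweep can be enumerated; the dictionary keys are illustrative, not FDS input syntax:

```python
from itertools import product

# Parameter values taken from the abstract above.
wind_speeds = [1, 3, 5]           # incident wind speed, m/s
attack_angles = [0, 45, 90, 180]  # degrees from the normal to the entrance
delta_temps = [0, 10]             # interior-exterior difference, degrees C

# Enumerate every combination (hypothetical key names, for illustration only).
cases = [
    {"u_wind": u, "angle": a, "dT": dT}
    for u, a, dT in product(wind_speeds, attack_angles, delta_temps)
]

print(len(cases))  # 3 * 4 * 2 = 24 simulations
```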
NASA Technical Reports Server (NTRS)
Vonroos, O.; Zoutendyk, J.
1983-01-01
When an energetic particle (kinetic energy 0.5 MeV) originating from a radioactive decay or a cosmic ray traverses the active regions of semiconductor devices used in integrated circuit (IC) chips, it leaves along its track a high-density electron-hole plasma. The subsequent decay of this plasma by drift and diffusion leads to charge collection at the electrodes large enough in most cases to engender a false reading, hence the name single-event upset (SEU). The problem of SEUs is particularly severe within the harsh environment of Jupiter's radiation belts and therefore constitutes a matter of concern for the Galileo mission. The physics of an SEU event is analyzed in some detail. Owing to the predominance of nonlinear space-charge effects and the fact that positive (holes) and negative (electrons) charges must be treated on an equal footing, analytical models for the ionized-charge collection and the corresponding currents as a function of time prove to be inadequate even in the simplest case of uniformly doped, abrupt p-n junctions in a one-dimensional geometry. Full-fledged computer simulation of the pertinent equations governing the electron-hole plasma therefore becomes imperative.
Design of virtual simulation experiment based on key events
NASA Astrophysics Data System (ADS)
Zhong, Zheng; Zhou, Dongbo; Song, Lingxiu
2018-06-01
Considering the complex content of, and lack of guidance in, virtual simulation experiments, the key-event technique from VR narrative theory was introduced into virtual simulation experiments to enhance the fidelity and vividness of the experimental process. Based on VR narrative technology, an event transition structure was designed to meet the needs of the experimental operation process, and an interactive event processing model was used to generate key events in the interactive scene. The experiment "margin value of bees foraging," based on biological morphology, was taken as an example, and many objects, behaviors, and other contents were reorganized. The results show that this method can enhance the user's experience and ensure that the experimental process is complete and effective.
Simulation of SEU Cross-sections using MRED under Conditions of Limited Device Information
NASA Technical Reports Server (NTRS)
Lauenstein, J. M.; Reed, R. A.; Weller, R. A.; Mendenhall, M. H.; Warren, K. M.; Pellish, J. A.; Schrimpf, R. D.; Sierawski, B. D.; Massengill, L. W.; Dodd, P. E.;
2007-01-01
This viewgraph presentation reviews the simulation of single-event upset (SEU) cross sections using the Monte Carlo Radiative Energy Deposition (MRED) tool, using "best guess" assumptions about the process and geometry together with direct-ionization, low-energy beam test results. This work also simulates SEU cross sections, including angular and high-energy responses, and compares the simulated results with beam test data to validate the model. Using MRED, we produced a reasonably accurate upset-response model of a low-critical-charge SRAM without detailed information about the circuit, device geometry, or fabrication process.
Fung, Lillia; Boet, Sylvain; Bould, M Dylan; Qosa, Haytham; Perrier, Laure; Tricco, Andrea; Tavares, Walter; Reeves, Scott
2015-01-01
Crisis resource management (CRM) abilities are important for different healthcare providers to effectively manage critical clinical events. This study aims to review the effectiveness of simulation-based CRM training for interprofessional and interdisciplinary teams compared to other instructional methods (e.g., didactics). Interprofessional teams are composed of several professions (e.g., nurse, physician, midwife) while interdisciplinary teams are composed of several disciplines from the same profession (e.g., cardiologist, anaesthesiologist, orthopaedist). Medline, EMBASE, CINAHL, Cochrane Central Register of Controlled Trials, and ERIC were searched using terms related to CRM, crisis management, crew resource management, teamwork, and simulation. Trials comparing simulation-based CRM team training versus any other methods of education were included. The educational interventions involved interprofessional or interdisciplinary healthcare teams. The initial search identified 7456 publications; 12 studies were included. Simulation-based CRM team training was associated with significant improvements in CRM skill acquisition in all but two studies when compared to didactic case-based CRM training or simulation without CRM training. Of the 12 included studies, one showed significant improvements in team behaviours in the workplace, while two studies demonstrated sustained reductions in adverse patient outcomes after a single simulation-based CRM team intervention. In conclusion, simulation-based CRM training for interprofessional and interdisciplinary teams shows promise in teaching CRM in the simulator when compared to didactic case-based CRM education or simulation without CRM teaching. More research, however, is required to demonstrate transfer of learning to workplaces and potential impact on patient outcomes.
Garagnani, Max; Wennekers, Thomas; Pulvermüller, Friedemann
2008-01-01
Meaningful familiar stimuli and senseless unknown materials lead to different patterns of brain activation. A late major neurophysiological response indexing ‘sense’ is the negative component of event-related potential peaking at around 400 ms (N400), an event-related potential that emerges in attention-demanding tasks and is larger for senseless materials (e.g. meaningless pseudowords) than for matched meaningful stimuli (words). However, the mismatch negativity (latency 100–250 ms), an early automatic brain response elicited under distraction, is larger to words than to pseudowords, thus exhibiting the opposite pattern to that seen for the N400. So far, no theoretical account has been able to reconcile and explain these findings by means of a single, mechanistic neural model. We implemented a neuroanatomically grounded neural network model of the left perisylvian language cortex and simulated: (i) brain processes of early language acquisition and (ii) cortical responses to familiar word and senseless pseudoword stimuli. We found that variation of the area-specific inhibition (the model correlate of attention) modulated the simulated brain response to words and pseudowords, producing either an N400- or a mismatch negativity-like response depending on the amount of inhibition (i.e. available attentional resources). Our model: (i) provides a unifying explanatory account, at cortical level, of experimental observations that, so far, had not been given a coherent interpretation within a single framework; (ii) demonstrates the viability of purely Hebbian, associative learning in a multilayered neural network architecture; and (iii) makes clear predictions on the effects of attention on latency and magnitude of event-related potentials to lexical items. Such predictions have been confirmed by recent experimental evidence. PMID:18215243
Capturing flood-to-drought transitions in regional climate model simulations
NASA Astrophysics Data System (ADS)
Anders, Ivonne; Haslinger, Klaus; Hofstätter, Michael; Salzmann, Manuela; Resch, Gernot
2017-04-01
In previous studies, atmospheric cyclones have been investigated in terms of related precipitation extremes in Central Europe. Mediterranean (Vb-like) cyclones are of special relevance, as they are frequently related to high atmospheric moisture fluxes leading to floods and landslides in the Alpine region. Another focus in this area is on droughts, which affect soil moisture as well as surface and sub-surface runoff. Such events develop differently depending on the pre-existing saturation of water in the soil. In a first step we investigated two time periods which each encompass a flood event and a subsequent drought on very different time scales: one long-lasting transition (2002/2003) and a rather short one between May and August 2013. In a second step we extended the investigation to the long time period 1950-2016. We focused on high spatial and temporal resolution and assessed the currently achievable accuracy in the simulation of the Vb-events on the one hand and the subsequent drought events on the other. The state-of-the-art regional climate model CCLM is applied in hindcast mode, simulating the single events described above, but also the period from 1948 to 2016 to verify that the results from the short runs are valid for the long time period. Besides the conventional forcing of the regional climate model at its lateral boundaries, a spectral nudging technique is applied. In the simulations covering the European domain, different model parameters have been varied systematically. The resulting precipitation amounts have been compared to the E-OBS gridded European precipitation data set and a recent high-spatial-resolution precipitation data set for Austria (GPARD-6). For the drought events, the Standardized Precipitation Evapotranspiration Index (SPEI), soil moisture, and runoff have been investigated. Varying the spectral nudging setup helps us to understand the 3D processes during these events, but also to identify model deficiencies.
Improving the simulation of such events in the past also improves the ability to assess a climate change signal in the near and far future.
NASA Astrophysics Data System (ADS)
Munoz-Arriola, F.; Torres-Alavez, J.; Mohamad Abadi, A.; Walko, R. L.
2014-12-01
Our goal is to investigate possible sources of predictability of hydrometeorological extreme events in the Northern High Plains (NHP). Hydrometeorological extreme events are considered the most costly natural phenomena. Water deficits and surpluses highlight how the water-climate interdependence becomes crucial in areas where single activities drive economies, such as agriculture in the NHP. Although we recognize the water-climate interdependence and the regulatory role that human activities play, we still grapple to identify what sources of predictability could be added to flood and drought forecasts. To identify the benefit of multi-scale climate modeling and the role of initial conditions in flood and drought predictability in the NHP, we use the Ocean-Land-Atmosphere Model (OLAM). OLAM is characterized by a dynamic core with a global geodesic grid with hexagonal (and variably refined) mesh cells, a finite volume discretization of the full compressible Navier-Stokes equations, and a cut-cell method for topography (which reduces errors in gradient computation and anomalous vertical dispersion). Our hypothesis is that wet conditions will drive OLAM's simulations of precipitation toward wetter conditions, affecting both flood and drought forecasts. To test this hypothesis we simulate precipitation during identified historical flood events followed by drought events in the NHP (i.e., the years 2011-2012). We initialized OLAM with CFS data 1-10 days prior to a flooding event (as initial conditions) to explore (1) short-term, high-resolution and (2) long-term, coarse-resolution simulations of flood and drought events, respectively. While floods are assessed during a maximum of 15 days of refined-mesh simulations, drought is evaluated during the following 15 months.
Simulated precipitation will be compared with the Sub-continental Observation Dataset, a gridded 1/16th degree resolution data obtained from climatological stations in Canada, US, and Mexico. This in-progress research will ultimately contribute to integrate OLAM and VIC models and improve predictability of extreme hydrometeorological events.
Simulating Chemical-Induced Injury Using Virtual Hepatic Tissues
Chemical-induced liver injury involves a dynamic sequence of events that span multiple levels of biological organization. Current methods for testing the toxicity of a single chemical can cost millions of dollars, take up to two years and sacrifice thousands of animals. It is dif...
Software resilience and the effectiveness of software mitigation in microcontrollers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Quinn, Heather; Baker, Zachary; Fairbanks, Tom
2015-12-01
Commercially available microprocessors could be useful to the space community for noncritical computations. There are many possible components that are smaller, lower-power, and less expensive than traditional radiation-hardened microprocessors. Many commercial microprocessors have issues with single-event effects (SEEs), such as single-event upsets (SEUs) and single-event transients (SETs), that can cause the microprocessor to calculate an incorrect result or crash. In this paper we present the Trikaya technique for masking SEUs and SETs through software mitigation techniques. Furthermore, test results show that this technique can be very effective at masking errors, making it possible to fly these microprocessors for a variety of missions.
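The abstract does not detail how the Trikaya technique masks upsets, so the sketch below illustrates only the general class of software mitigation it belongs to: redundant execution with majority voting (a temporal triple-modular-redundancy pattern). All names and values are hypothetical:

```python
def tmr_vote(compute, vote_default=None):
    """Run a computation three times and majority-vote the results.

    A hypothetical sketch of software-level SEU/SET masking by temporal
    triple redundancy; the actual Trikaya technique is not detailed in
    the abstract above.
    """
    a, b, c = compute(), compute(), compute()
    if a == b or a == c:
        return a
    if b == c:
        return b
    return vote_default  # all three runs disagree: uncorrectable error

# One run corrupted by a simulated transient fault: the voter recovers 5.
corrupted = iter([5, 9, 5])
print(tmr_vote(corrupted.__next__))  # 5
```

A real implementation would also have to protect the voter itself and the stored copies, which is part of what makes software-only mitigation on commercial parts difficult.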
NASA Technical Reports Server (NTRS)
Lauenstein, Jean-Marie
2015-01-01
The JEDEC JESD57 test standard, Procedures for the Measurement of Single-Event Effects in Semiconductor Devices from Heavy-Ion Irradiation, is undergoing its first revision since 1996. In this talk, we place this test standard into context with other relevant radiation test standards to show its importance for single-event effect radiation testing for space applications. We show the range of industry, government, and end-user involvement in the revision. Finally, we highlight some of the key changes being made and discuss the trade-space in which standards must be set so that they are both useful and broadly adopted.
Modeling Single-Event Transient Propagation in a SiGe BiCMOS Direct-Conversion Receiver
NASA Astrophysics Data System (ADS)
Ildefonso, Adrian; Song, Ickhyun; Tzintzarov, George N.; Fleetwood, Zachary E.; Lourenco, Nelson E.; Wachter, Mason T.; Cressler, John D.
2017-08-01
The propagation of single-event transient (SET) signals in a silicon-germanium direct-conversion receiver carrying modulated data is explored. A theoretical analysis of transient propagation, verified by simulation, is presented. A new methodology to characterize and quantify the impact of SETs in communication systems carrying modulated data is proposed. The proposed methodology uses a pulsed radiation source to induce distortions in the signal constellation. The error vector magnitude due to SETs can then be calculated to quantify errors. Two different modulation schemes were simulated: QPSK and 16-QAM. The distortions in the constellation diagram agree with the presented circuit theory. Furthermore, the proposed methodology was applied to evaluate the improvements in the SET response due to a known radiation-hardening-by-design (RHBD) technique, where the common-base device of the low-noise amplifier was operated in inverse mode. The proposed methodology can be a valid technique to determine the most sensitive parts of a system carrying modulated data.
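The error-vector-magnitude calculation described above can be sketched simply: each received symbol is compared to the nearest ideal constellation point, and the RMS error is normalized to the constellation power. A minimal self-contained Python illustration; the symbol values are invented for illustration and are not taken from the paper:

```python
import math

# Ideal QPSK constellation points (unit grid, illustrative normalization).
IDEAL_QPSK = [complex(i, q) for i in (1, -1) for q in (1, -1)]

def nearest_ideal(sym):
    """Hard-decision: map a received symbol to the closest ideal point."""
    return min(IDEAL_QPSK, key=lambda p: abs(sym - p))

def evm_rms(received):
    """RMS EVM, normalized to the RMS magnitude of the ideal constellation."""
    err_power = sum(abs(s - nearest_ideal(s)) ** 2 for s in received) / len(received)
    ref_power = sum(abs(p) ** 2 for p in IDEAL_QPSK) / len(IDEAL_QPSK)
    return math.sqrt(err_power / ref_power)

clean = [1 + 1j, -1 + 1j, -1 - 1j, 1 - 1j]
hit = [1 + 1j, -0.4 + 1j, -1 - 1j, 1 - 1j]  # one symbol distorted by an SET
print(evm_rms(clean), evm_rms(hit) > evm_rms(clean))
```

In the paper's methodology the distortion comes from a pulsed radiation source rather than an invented offset, but the EVM figure of merit is computed the same way.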
Arrospide, Arantzazu; Rue, Montserrat; van Ravesteyn, Nicolien T; Comas, Merce; Larrañaga, Nerea; Sarriugarte, Garbiñe; Mar, Javier
2015-10-12
Since the breast cancer screening programme in the Basque Country (BCSPBC) was started in 1996, more than 400,000 women aged 50 to 69 years have been invited to participate. Based on epidemiological observations and simulation techniques, it is possible to extend observed short-term data into anticipated long-term results. The aim of this study was to assess the effectiveness of the programme through 2011 by quantifying the outcomes in breast cancer mortality, life-years gained, false positive results, and overdiagnosis. A discrete event simulation model was constructed to reproduce the natural history of breast cancer (disease-free, pre-clinical, symptomatic, and disease-specific death) and the actual observed characteristics of the screening programme during the evaluated period in the Basque female population. Goodness-of-fit statistics were applied for model validation. The screening effects were measured as differences in benefits and harms between the screened and unscreened populations. Breast cancer mortality reduction and life-years gained were considered as screening benefits, whereas overdiagnosis and false positive results were assessed as harms. Results for a single cohort were also obtained. The screening programme yielded a 16 % reduction in breast cancer mortality and a 10 % increase in the incidence of breast cancer through 2011. Almost 2 % of all the women in the programme had a false positive result during the evaluation period. When a single cohort was analysed, the number of deaths decreased by 13 %, and 4 % of screen-detected cancers were overdiagnosed. Each woman with breast cancer detected by the screening programme gained 2.5 life-years due to early detection, corrected for lead time.
Fifteen years after the screening programme started, this study finds an important decrease in breast cancer mortality attributable to the screening programme, with a reasonable risk of overdiagnosis and false positive results, and supports the continuation of the breast cancer screening programme in the Basque population.
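A discrete event simulation of a disease natural history like the one described can be sketched in miniature by sampling a sojourn time for each state transition. The rates below are hypothetical placeholders, not parameters of the Basque Country model:

```python
import random

# The four natural-history states named in the abstract, in order.
STATES = ["disease-free", "pre-clinical", "symptomatic", "death"]
# Hypothetical transition rates per year (placeholders for illustration only).
RATES = {"disease-free": 0.01, "pre-clinical": 0.5, "symptomatic": 0.2}

def simulate_history(rng):
    """Return (state, entry_time) events for one simulated individual,
    using exponential sojourn times in each pre-death state."""
    t = 0.0
    history = [("disease-free", 0.0)]
    for state in STATES[:-1]:
        t += rng.expovariate(RATES[state])  # time spent in `state`
        history.append((STATES[STATES.index(state) + 1], t))
    return history

rng = random.Random(1)
events = simulate_history(rng)
print([s for s, _ in events])
# ['disease-free', 'pre-clinical', 'symptomatic', 'death']
```

A full screening model would add competing other-cause mortality, screen-detection events between pre-clinical onset and symptoms, and cohort-level aggregation; this sketch shows only the event-driven state progression at its core.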
NASA Astrophysics Data System (ADS)
Childers, J. T.; Uram, T. D.; LeCompte, T. J.; Papka, M. E.; Benjamin, D. P.
2017-01-01
As the LHC moves to higher energies and luminosity, the demand for computing resources increases accordingly and will soon outpace the growth of the Worldwide LHC Computing Grid. To meet this greater demand, Monte Carlo event generation was targeted for adaptation to run on Mira, the supercomputer at the Argonne Leadership Computing Facility. Alpgen is a Monte Carlo event generation application that is used by LHC experiments in the simulation of collisions that take place in the Large Hadron Collider. This paper details the process by which Alpgen was adapted from a single-processor serial application to a large-scale parallel application and the performance that was achieved.
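The serial-to-parallel adaptation described above hinges on one point: independent event batches need independent, reproducible random seeds. A minimal Python sketch of the idea (Alpgen itself is a Fortran code, and on Mira the parallelism came from many MPI ranks rather than threads; this stand-in generator and its naive seeding scheme are purely illustrative):

```python
import random
from concurrent.futures import ThreadPoolExecutor

def generate_events(seed, n_events):
    """Stand-in for a serial MC event generator: n_events pseudo-random
    'event weights', fully determined by the seed (reproducible batches)."""
    rng = random.Random(seed)
    return [rng.random() for _ in range(n_events)]

def parallel_generate(n_workers=4, n_events_per_worker=250, base_seed=2017):
    # One seed per worker so batches are statistically independent-ish;
    # production codes use proper parallel RNG streams, not seed offsets.
    seeds = [base_seed + rank for rank in range(n_workers)]
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        batches = pool.map(generate_events, seeds,
                           [n_events_per_worker] * n_workers)
        return [event for batch in batches for event in batch]

events = parallel_generate()
print(len(events))  # 4 * 250 = 1000
```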
Binomial tau-leap spatial stochastic simulation algorithm for applications in chemical kinetics.
Marquez-Lago, Tatiana T; Burrage, Kevin
2007-09-14
In cell biology, cell signaling pathway problems are often tackled with deterministic temporal models, well-mixed stochastic simulators, and/or hybrid methods. In fact, however, three-dimensional stochastic spatial modeling of reactions happening inside the cell is needed in order to fully understand these cell signaling pathways, because noise effects, low molecular concentrations, and spatial heterogeneity can all affect the cellular dynamics. However, there are ways in which important effects can be accounted for without going to the extent of using highly resolved spatial simulators (such as single-particle software), hence reducing the overall computation time significantly. We present a new coarse-grained modified version of the next subvolume method that allows the user to consider both diffusion and reaction events over relatively long simulation time spans compared with the original method and other commonly used fully stochastic computational methods. Benchmarking of the simulation algorithm was performed through comparison with the next subvolume method and well-mixed models (MATLAB), as well as stochastic particle reaction and transport simulations (ChemCell, Sandia National Laboratories). Additionally, we construct a model based on a set of chemical reactions in the epidermal growth factor receptor pathway. For this particular application and a bistable chemical system example, we analyze and outline the advantages of our binomial tau-leap spatial stochastic simulation algorithm, in terms of efficiency and accuracy, in scenarios of both molecular homogeneity and heterogeneity.
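The core of any binomial tau-leap method can be illustrated on a single irreversible reaction: instead of a Poisson draw, which can fire more reactions in a step than there are molecules (driving counts negative), the number of firings is drawn from a binomial distribution bounded by the current copy number. A minimal non-spatial sketch under that assumption (the paper's algorithm additionally handles diffusion between subvolumes, which is omitted here):

```python
import random

def binomial_tau_leap_step(x, rate_c, tau, rng):
    """One binomial tau-leap step for the irreversible reaction A -> B.

    Firings are drawn from Binomial(n_a, p) with p = min(1, c*tau), so the
    count of A can never go below zero. Pure-Python Bernoulli-sum draw to
    stay stdlib-only; a real implementation would use a binomial sampler.
    """
    n_a, n_b = x
    p = min(1.0, rate_c * tau)
    k = sum(1 for _ in range(n_a) if rng.random() < p)  # bounded by n_a
    return (n_a - k, n_b + k)

rng = random.Random(0)
state = (100, 0)  # 100 molecules of A, 0 of B
for _ in range(50):
    state = binomial_tau_leap_step(state, rate_c=0.1, tau=0.5, rng=rng)
print(state[0] + state[1])  # total molecule count is conserved: 100
```

The spatial versions apply the same bounded-draw idea per subvolume, which is what lets them take much longer time steps than exact next-subvolume simulation without losing non-negativity.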
Modelling hydrology of a single bioretention system with HYDRUS-1D.
Meng, Yingying; Wang, Huixiao; Chen, Jiangang; Zhang, Shuhan
2014-01-01
A study was carried out on the effectiveness of bioretention systems to abate stormwater using computer simulation. The hydrologic performance of two bioretention cells was simulated using HYDRUS-1D, and the simulation results were verified against nearly four years of field data. Using the validated model, the optimization of design parameters (rainfall return period, filter media depth and type, and surface area) was discussed, and the annual hydrologic performance of bioretention systems was further analyzed under the optimized parameters. The study reveals that bioretention systems with underdrains and impervious boundaries do have some detention capability, while their total water retention capability is extremely limited. Better detention capability is noted for smaller rainfall events, deeper filter media, and design storms with a return period smaller than 2 years; a cost-effective filter media depth is recommended in bioretention design. Better hydrologic effectiveness is achieved with a higher hydraulic conductivity and a higher ratio of bioretention surface area to catchment area; filter media with a conductivity between that of loamy sand and sandy loam, and a surface area of 10% of the catchment area, are recommended. In the long-term simulation, both infiltration volume and evapotranspiration are critical for the total rainfall treatment in bioretention systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Adams, C.; et al.
The single-phase liquid argon time projection chamber (LArTPC) provides a large amount of detailed information in the form of fine-grained drifted ionization charge from particle traces. To fully utilize this information, the deposited charge must be accurately extracted from the raw digitized waveforms via a robust signal processing chain. Enabled by the ultra-low noise levels associated with cryogenic electronics in the MicroBooNE detector, the precise extraction of ionization charge from the induction wire planes in a single-phase LArTPC is qualitatively demonstrated on MicroBooNE data with event display images, and quantitatively demonstrated via waveform-level and track-level metrics. Improved performance of induction plane calorimetry is demonstrated through the agreement of extracted ionization charge measurements across different wire planes for various event topologies. In addition to the comprehensive waveform-level comparison of data and simulation, a calibration of the cryogenic electronics response is presented and solutions to various MicroBooNE-specific TPC issues are discussed. This work presents an important improvement in LArTPC signal processing, the foundation of reconstruction and therefore physics analyses in MicroBooNE.
Measurement of the single π0 production rate in neutral current neutrino interactions on water
NASA Astrophysics Data System (ADS)
Abe, K.; Amey, J.; Andreopoulos, C.; Antonova, M.; Aoki, S.; Ariga, A.; Ashida, Y.; Assylbekov, S.; Autiero, D.; Ban, S.; Barbi, M.; Barker, G. J.; Barr, G.; Barry, C.; Bartet-Friburg, P.; Batkiewicz, M.; Berardi, V.; Berkman, S.; Bhadra, S.; Bienstock, S.; Blondel, A.; Bolognesi, S.; Bordoni, S.; Boyd, S. B.; Brailsford, D.; Bravar, A.; Bronner, C.; Buizza Avanzini, M.; Calland, R. G.; Campbell, T.; Cao, S.; Cartwright, S. L.; Castillo, R.; Catanesi, M. G.; Cervera, A.; Chappell, A.; Checchia, C.; Cherdack, D.; Chikuma, N.; Christodoulou, G.; Clifton, A.; Coleman, J.; Collazuol, G.; Coplowe, D.; Cremonesi, L.; Cudd, A.; Dabrowska, A.; De Rosa, G.; Dealtry, T.; Denner, P. F.; Dennis, S. R.; Densham, C.; Dewhurst, D.; Di Lodovico, F.; Di Luise, S.; Dolan, S.; Drapier, O.; Duffy, K. E.; Dumarchez, J.; Dunkman, M.; Dunne, P.; Dziewiecki, M.; Emery-Schrenk, S.; Ereditato, A.; Feusels, T.; Finch, A. J.; Fiorentini, G. A.; Friend, M.; Fujii, Y.; Fukuda, D.; Fukuda, Y.; Furmanski, A. P.; Galymov, V.; Garcia, A.; Giffin, S. G.; Giganti, C.; Gilje, K.; Gizzarelli, F.; Golan, T.; Gonin, M.; Grant, N.; Hadley, D. R.; Haegel, L.; Haigh, J. T.; Hamilton, P.; Hansen, D.; Harada, J.; Hara, T.; Hartz, M.; Hasegawa, T.; Hastings, N. C.; Hayashino, T.; Hayato, Y.; Helmer, R. L.; Hierholzer, M.; Hillairet, A.; Himmel, A.; Hiraki, T.; Hiramoto, A.; Hirota, S.; Hogan, M.; Holeczek, J.; Hosomi, F.; Huang, K.; Ichikawa, A. K.; Ieki, K.; Ikeda, M.; Imber, J.; Insler, J.; Intonti, R. A.; Irvine, T. J.; Ishida, T.; Ishii, T.; Iwai, E.; Iwamoto, K.; Izmaylov, A.; Jacob, A.; Jamieson, B.; Jiang, M.; Johnson, S.; Jo, J. H.; Jonsson, P.; Jung, C. K.; Kabirnezhad, M.; Kaboth, A. C.; Kajita, T.; Kakuno, H.; Kameda, J.; Karlen, D.; Karpikov, I.; Katori, T.; Kearns, E.; Khabibullin, M.; Khotjantsev, A.; Kielczewska, D.; Kikawa, T.; Kim, H.; Kim, J.; King, S.; Kisiel, J.; Knight, A.; Knox, A.; Kobayashi, T.; Koch, L.; Koga, T.; Koller, P. P.; Konaka, A.; Kondo, K.; Kopylov, A.; Kormos, L. 
L.; Korzenev, A.; Koshio, Y.; Kowalik, K.; Kropp, W.; Kudenko, Y.; Kurjata, R.; Kutter, T.; Lagoda, J.; Lamont, I.; Lamoureux, M.; Larkin, E.; Lasorak, P.; Laveder, M.; Lawe, M.; Lazos, M.; Licciardi, M.; Lindner, T.; Liptak, Z. J.; Litchfield, R. P.; Li, X.; Longhin, A.; Lopez, J. P.; Lou, T.; Ludovici, L.; Lu, X.; Magaletti, L.; Mahn, K.; Malek, M.; Manly, S.; Maret, L.; Marino, A. D.; Marteau, J.; Martin, J. F.; Martins, P.; Martynenko, S.; Maruyama, T.; Matveev, V.; Mavrokoridis, K.; Ma, W. Y.; Mazzucato, E.; McCarthy, M.; McCauley, N.; McFarland, K. S.; McGrew, C.; Mefodiev, A.; Metelko, C.; Mezzetto, M.; Mijakowski, P.; Minamino, A.; Mineev, O.; Mine, S.; Missert, A.; Miura, M.; Moriyama, S.; Morrison, J.; Mueller, Th. A.; Murphy, S.; Myslik, J.; Nakadaira, T.; Nakahata, M.; Nakamura, K. G.; Nakamura, K.; Nakamura, K. D.; Nakanishi, Y.; Nakayama, S.; Nakaya, T.; Nakayoshi, K.; Nantais, C.; Nielsen, C.; Nirkko, M.; Nishikawa, K.; Nishimura, Y.; Novella, P.; Nowak, J.; O'Keeffe, H. M.; Ohta, R.; Okumura, K.; Okusawa, T.; Oryszczak, W.; Oser, S. M.; Ovsyannikova, T.; Owen, R. A.; Oyama, Y.; Palladino, V.; Palomino, J. L.; Paolone, V.; Patel, N. D.; Paudyal, P.; Pavin, M.; Payne, D.; Perkin, J. D.; Petrov, Y.; Pickard, L.; Pickering, L.; Pinzon Guerra, E. S.; Pistillo, C.; Popov, B.; Posiadala-Zezula, M.; Poutissou, J.-M.; Poutissou, R.; Pritchard, A.; Przewlocki, P.; Quilain, B.; Radermacher, T.; Radicioni, E.; Ratoff, P. N.; Ravonel, M.; Rayner, M. A.; Redij, A.; Reinherz-Aronis, E.; Riccio, C.; Rojas, P.; Rondio, E.; Rossi, B.; Roth, S.; Rubbia, A.; Ruggeri, A. C.; Rychter, A.; Sacco, R.; Sakashita, K.; Sánchez, F.; Sato, F.; Scantamburlo, E.; Scholberg, K.; Schwehr, J.; Scott, M.; Seiya, Y.; Sekiguchi, T.; Sekiya, H.; Sgalaberna, D.; Shah, R.; Shaikhiev, A.; Shaker, F.; Shaw, D.; Shiozawa, M.; Shirahige, T.; Short, S.; Smy, M.; Sobczyk, J. 
T.; Sobel, H.; Sorel, M.; Southwell, L.; Stamoulis, P.; Steinmann, J.; Stewart, T.; Stowell, P.; Suda, Y.; Suvorov, S.; Suzuki, A.; Suzuki, K.; Suzuki, S. Y.; Suzuki, Y.; Tacik, R.; Tada, M.; Takahashi, S.; Takeda, A.; Takeuchi, Y.; Tamura, R.; Tanaka, H. K.; Tanaka, H. A.; Terhorst, D.; Terri, R.; Thakore, T.; Thompson, L. F.; Tobayama, S.; Toki, W.; Tomura, T.; Touramanis, C.; Tsukamoto, T.; Tzanov, M.; Uchida, Y.; Vacheret, A.; Vagins, M.; Vallari, Z.; Vasseur, G.; Vilela, C.; Vladisavljevic, T.; Wachala, T.; Wakamatsu, K.; Walter, C. W.; Wark, D.; Warzycha, W.; Wascko, M. O.; Weber, A.; Wendell, R.; Wilkes, R. J.; Wilking, M. J.; Wilkinson, C.; Wilson, J. R.; Wilson, R. J.; Wret, C.; Yamada, Y.; Yamamoto, K.; Yamamoto, M.; Yanagisawa, C.; Yano, T.; Yen, S.; Yershov, N.; Yokoyama, M.; Yoo, J.; Yoshida, K.; Yuan, T.; Yu, M.; Zalewska, A.; Zalipska, J.; Zambelli, L.; Zaremba, K.; Ziembicki, M.; Zimmerman, E. D.; Zito, M.; Żmuda, J.; T2K Collaboration
2018-02-01
The single π0 production rate in neutral current neutrino interactions on water in a neutrino beam with a peak neutrino energy of 0.6 GeV has been measured using the PØD, one of the subdetectors of the T2K near detector. The production rate was measured for data-taking periods when the PØD contained water (2.64 × 10^20 protons-on-target) and also for periods without water (3.49 × 10^20 protons-on-target). A measurement of the neutral current single π0 production rate on water is made by appropriate subtraction of the production rate with water in from the rate with water out of the target region. The subtraction analysis yields 106 ± 41 (stat.) ± 69 (sys.) signal events. This is consistent with the prediction of 157 events from the nominal simulation. The measured-to-expected ratio is 0.68 ± 0.26 (stat.) ± 0.44 (sys.) ± 0.12 (flux). The nominal simulation uses a flux-integrated cross section of 7.63 × 10^-39 cm^2 per nucleon with an average neutrino interaction energy of 1.3 GeV.
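The quoted measured-to-expected ratio and its statistical and systematic uncertainties follow directly from the event counts given in the abstract, as a quick arithmetic check shows:

```python
# Event counts quoted in the abstract above.
n_signal = 106        # signal events from the water-in minus water-out subtraction
n_signal_stat = 41    # statistical uncertainty on the signal count
n_signal_sys = 69     # systematic uncertainty on the signal count
n_expected = 157      # events predicted by the nominal simulation

# Dividing each by the prediction reproduces the quoted ratio and errors.
ratio = n_signal / n_expected
ratio_stat = n_signal_stat / n_expected
ratio_sys = n_signal_sys / n_expected

print(round(ratio, 2), round(ratio_stat, 2), round(ratio_sys, 2))
# 0.68 0.26 0.44 -- matching the quoted 0.68 ± 0.26 (stat.) ± 0.44 (sys.)
```

(The additional ± 0.12 flux uncertainty enters through the beam flux normalization and cannot be reproduced from the counts alone.)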
Destructive Single-Event Effects in Diodes
NASA Technical Reports Server (NTRS)
Casey, Megan C.; Lauenstein, Jean-Marie; Campola, Michael J.; Wilcox, Edward P.; Phan, Anthony M.; Label, Kenneth A.
2017-01-01
In this work, we discuss the observed single-event effects in a variety of types of diodes. In addition, we conduct failure analysis on several Schottky diodes that were heavy-ion irradiated. High- and low-magnitude optical microscope images, infrared camera images, and scanning electron microscope images are used to identify and describe the failure locations.
Effects of cosmic rays on single event upsets
NASA Technical Reports Server (NTRS)
Lowe, Calvin W.; Oladipupo, Adebisi O.; Venable, Demetrius D.
1988-01-01
The efforts at establishing a research program in space radiation effects are discussed. The research program has served as the basis for training several graduate students in an area of research that is of importance to NASA. In addition, technical support was provided for the Single Event Facility Group at Brookhaven National Laboratory.
Single Event Effects (SEE) for Power Metal-Oxide-Semiconductor Field-Effect Transistors (MOSFETs)
NASA Technical Reports Server (NTRS)
Lauenstein, Jean-Marie
2011-01-01
Single-event gate rupture (SEGR) continues to be a key failure mode in power MOSFETs. SEGR is complex, making rate prediction difficult. The SEGR mechanism has two main components: (1) oxide damage, which reduces the field required for rupture, and (2) the epilayer response, which creates a transient high field across the oxide.
NEPP Update of Independent Single Event Upset Field Programmable Gate Array Testing
NASA Technical Reports Server (NTRS)
Berg, Melanie; Label, Kenneth; Campola, Michael; Pellish, Jonathan
2017-01-01
This presentation provides a NASA Electronic Parts and Packaging (NEPP) Program update of independent Single Event Upset (SEU) Field Programmable Gate Array (FPGA) testing including FPGA test guidelines, Microsemi RTG4 heavy-ion results, Xilinx Kintex-UltraScale heavy-ion results, Xilinx UltraScale+ single event effect (SEE) test plans, development of a new methodology for characterizing SEU system response, and NEPP involvement with FPGA security and trust.
NASA Technical Reports Server (NTRS)
Scheick, Leif
2011-01-01
Single-event-effect test results for hi-rel total-dose-hardened power MOSFETs are presented in this report. The SCF9550 from Semicoa and the IRHM57260SE from International Rectifier were tested to NASA test conditions/standards and requirements. The IRHM57260SE performed much better when compared to previous testing. These initial results confirm that parts from the Temecula line are marginally comparable to the El Segundo line. The SCF9550 from Semicoa was also tested and represents the initial parts offering from this vendor. Both parts experienced single-event gate rupture (SEGR) and single-event burnout (SEB). All of the SEGR was from gate to drain.
Applying the WRF Double-Moment Six-Class Microphysics Scheme in the GRAPES_Meso Model: A Case Study
NASA Astrophysics Data System (ADS)
Zhang, Meng; Wang, Hong; Zhang, Xiaoye; Peng, Yue; Che, Huizheng
2018-04-01
This study incorporated the Weather Research and Forecasting (WRF) model double-moment 6-class (WDM6) microphysics scheme into the mesoscale version of the Global/Regional Assimilation and PrEdiction System (GRAPES_Meso). A rainfall event that occurred during 3-5 June 2015 around Beijing was simulated by using the WDM6, the WRF single-moment 6-class scheme (WSM6), and the NCEP 5-class scheme, respectively. The results show that both the distribution and magnitude of the rainfall simulated with WDM6 were more consistent with the observations. Compared with WDM6, WSM6 simulated larger cloud liquid water content, which provided more water vapor for graupel growth, leading to increased precipitation in the cold-rain processes. For areas with the warm-rain processes, the sensitivity experiments using WDM6 showed that an increase in cloud condensation nuclei (CCN) number concentration led to an enhanced CCN activation ratio and larger cloud droplet number concentration (Nc) but decreased cloud droplet effective diameter. The formation of more small-size cloud droplets resulted in a decrease in raindrop number concentration (Nr), inhibiting the warm-rain processes, thus gradually decreasing the amount of precipitation. For areas mainly with the cold-rain processes, the overall amount of precipitation increased; however, it gradually decreased when the CCN number concentration reached a certain magnitude. Hence, the effect of CCN number concentration on precipitation exhibits significant differences in different rainfall areas of the same precipitation event.
An Overview of Grain Growth Theories for Pure Single Phase Systems,
1986-10-01
the fundamental causes for these distributions. This is what Blanc and Mocellin (1979) and Carnal and Mocellin (1981) set out to do. 7.1 Monte-Carlo Simulations...termed event B) (in 2-D) of 3-sided grains. (2) Neighbour-switching (termed event C). Blanc and Mocellin (1979) dealt with 2-D sections through...Kurtz and Carpay (1980a). 7.2 Analytical Method to Obtain fn Carnal and Mocellin (1981) obtained the distribution of grain coordination numbers in
Simulation Of A Photofission-Based Cargo Interrogation System
DOE Office of Scientific and Technical Information (OSTI.GOV)
King, Michael; Gozani, Tsahi; Stevenson, John
A comprehensive model has been developed to characterize and optimize the detection of Bremsstrahlung x-ray induced fission signatures from nuclear materials hidden in cargo containers. An effective active interrogation system should not only induce a large number of fission events but also efficiently detect their signatures. The proposed scanning system utilizes a 9-MV commercially available linear accelerator and the detection of strong fission signals, i.e., delayed gamma rays and prompt neutrons. Because the scanning system is complex and the cargo containers are large and often highly attenuating, the simulation method segments the model into several physical steps, representing each change of radiation particle. Each approximation is carried out separately, resulting in a major reduction in computational time and a significant improvement in tally statistics. The model investigates the effect on the fission rate and detection rate of various cargo types, densities, and distributions. Hydrogenous and metallic cargos, homogeneous and heterogeneous, as well as various locations of the nuclear material inside the cargo container were studied. We will show that for the photofission-based interrogation system simulation, the final results are not only in good agreement with a full, single-step simulation but also with experimental results, further validating the full-system simulation.
Modeling Large Scale Circuits Using Massively Parallel Discrete-Event Simulation
2013-06-01
As supercomputer systems grow to exascale levels of performance, the smallest elements of a single processor can greatly affect the entire computer system (e.g., its power consumption)...Warp Speed 10.0. 2.0 INTRODUCTION As supercomputer systems approach exascale, the core count will exceed 1024 and the number of transistors used in
Layout-aware simulation of soft errors in sub-100 nm integrated circuits
NASA Astrophysics Data System (ADS)
Balbekov, A.; Gorbunov, M.; Bobkov, S.
2016-12-01
A Single Event Transient (SET) caused by a charged particle traveling through the sensitive volume of an integrated circuit (IC) may lead to errors in digital circuits. In technologies below 180 nm, a single particle can affect multiple devices, causing multiple SETs. This adds complexity to the design of fault-tolerant devices, because schematic-level design techniques become useless without layout consideration. The most common layout mitigation technique is spatial separation of the sensitive nodes of hardened circuits. Spatial separation decreases circuit performance and increases power consumption. Spacing should thus be kept reasonable, and its scaling follows the scaling trend of device dimensions. This paper presents the development of an SET simulation approach comprising SPICE simulation with a "double exponent" current source as the SET model. The technique uses the layout in GDSII format to locate nearby devices that can be affected by a single particle and that can share the generated charge. The developed software tool automates multiple simulations and gathers the produced data to present it as a sensitivity map. Examples of conducted simulations of fault-tolerant cells and their sensitivity maps are presented in this paper.
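The "double exponent" current source referred to above is commonly written as the difference of two exponentials, normalized so the pulse integrates to the collected charge. A minimal sketch under that convention (the charge and time-constant values are illustrative, not taken from the paper):

```python
import math

def set_current(t, q_coll=50e-15, tau_rise=5e-12, tau_fall=200e-12):
    """Double-exponential single-event-transient current pulse.

    I(t) = I0 * (exp(-t/tau_fall) - exp(-t/tau_rise)), with I0 chosen so
    the pulse integrates to the collected charge q_coll (coulombs).
    Parameter values here are illustrative placeholders.
    """
    if t < 0:
        return 0.0
    i0 = q_coll / (tau_fall - tau_rise)  # normalization: integral equals q_coll
    return i0 * (math.exp(-t / tau_fall) - math.exp(-t / tau_rise))
```

In a SPICE deck this same waveform would be injected as a piecewise-linear or behavioral current source at the struck node; here it simply gives the pulse shape as a function of time.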
NASA Astrophysics Data System (ADS)
Buaria, Dhawal; Yeung, P. K.; Sawford, B. L.
2016-11-01
An efficient massively parallel algorithm has allowed us to obtain the trajectories of 300 million fluid particles in an 8192³ simulation of isotropic turbulence at Taylor-scale Reynolds number 1300. Conditional single-particle statistics are used to investigate the effect of extreme events in dissipation and enstrophy on turbulent dispersion. The statistics of pairs and tetrads, both forward and backward in time, are obtained via post-processing of single-particle trajectories. For tetrads, since memory of shape is known to be short, we focus, for convenience, on samples which are initially regular, with all sides of comparable length. The statistics of tetrad size show behavior similar to that of two-particle relative dispersion, i.e., stronger backward dispersion at intermediate times with a larger backward Richardson constant. In contrast, the statistics of tetrad shape show more robust inertial range scaling, in both forward and backward frames. However, the distortion of shape is stronger for backward dispersion. Our results suggest that the Reynolds number reached in this work is sufficient to settle some long-standing questions concerning Lagrangian scale similarity. Supported by NSF Grants CBET-1235906 and ACI-1036170.
Destructive Single-Event Failures in Schottky Diodes
NASA Technical Reports Server (NTRS)
Casey, Megan C.; Lauenstein, Jean-Marie; Gigliuto, Robert A.; Wilcox, Edward P.; Phan, Anthony M.; Kim, Hak; Chen, Dakai; LaBel, Kenneth A.
2014-01-01
This presentation contains test results for destructive failures in DC-DC converters. We have shown that Schottky diodes are susceptible to destructive single-event effects. Future work will be completed to identify the parameter that determines diode susceptibility.
An Improved SEL Test of the ADV212 Video Codec
NASA Technical Reports Server (NTRS)
Wilcox, Edward P.; Campola, Michael J.; Nadendla, Seshagiri; Kadari, Madhusudhan; Gigliuto, Robert A.
2017-01-01
Single-event effect (SEE) test data is presented on the Analog Devices ADV212. Focus is given to the test setup used to improve data quality and validate single-event latch-up (SEL) protection circuitry.
Adaptive temperature-accelerated dynamics
NASA Astrophysics Data System (ADS)
Shim, Yunsic; Amar, Jacques G.
2011-02-01
We present three adaptive methods for optimizing the high temperature T_high on-the-fly in temperature-accelerated dynamics (TAD) simulations. In all three methods, the high temperature is adjusted periodically in order to maximize performance. While in the first two methods the adjustment depends on the number of observed events, the third method depends on the minimum activation barrier observed so far and requires a priori knowledge of the optimal high temperature T_high^opt(E_a) as a function of the activation barrier E_a for each accepted event. In order to determine the functional form of T_high^opt(E_a), we have carried out extensive simulations of submonolayer annealing on the (100) surface for a variety of metals (Ag, Cu, Ni, Pd, and Au). While the results for all five metals are different, when they are scaled with the melting temperature T_m, we find that they all lie on a single scaling curve. Similar results have also been obtained for (111) surfaces, although in this case the scaling function is slightly different. In order to test the performance of all three methods, we have also carried out adaptive TAD simulations of Ag/Ag(100) annealing and growth at T = 80 K and compared with fixed high-temperature TAD simulations for different values of T_high. We find that the performance of all three adaptive methods is typically as good as or better than that obtained in fixed high-temperature TAD simulations carried out using the effective optimal fixed high temperature. In addition, we find that the final high temperatures obtained in our adaptive TAD simulations are very close to our results for T_high^opt(E_a). The applicability of the adaptive methods to a variety of TAD simulations is also briefly discussed.
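The extrapolation at the heart of TAD maps each event observed at the high temperature back to the low temperature through an Arrhenius factor. A minimal sketch of that mapping (the barrier and temperatures below are illustrative values, not from the paper):

```python
import math

KB_EV = 8.617333262e-5  # Boltzmann constant in eV/K

def extrapolate_event_time(t_high, e_a, temp_low, temp_high):
    """Map an event time observed at temp_high to the corresponding
    time at temp_low via the Arrhenius relation used in TAD:

        t_low = t_high * exp((E_a / kB) * (1/T_low - 1/T_high))

    with the barrier e_a in eV and temperatures in kelvin. Since
    T_low < T_high, the factor exceeds 1: events take longer at the
    lower temperature.
    """
    return t_high * math.exp((e_a / KB_EV) * (1.0 / temp_low - 1.0 / temp_high))
```
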
Generating survival times to simulate Cox proportional hazards models with time-varying covariates.
Austin, Peter C
2012-12-20
Simulations and Monte Carlo methods serve an important role in modern statistical research. They allow for an examination of the performance of statistical procedures in settings in which analytic and mathematical derivations may not be feasible. A key element in any statistical simulation is the existence of an appropriate data-generating process: one must be able to simulate data from a specified statistical model. We describe data-generating processes for the Cox proportional hazards model with time-varying covariates when event times follow an exponential, Weibull, or Gompertz distribution. We consider three types of time-varying covariates: first, a dichotomous time-varying covariate that can change at most once from untreated to treated (e.g., organ transplant); second, a continuous time-varying covariate such as cumulative exposure at a constant dose to radiation or to a pharmaceutical agent used for a chronic condition; third, a dichotomous time-varying covariate with a subject being able to move repeatedly between treatment states (e.g., current compliance or use of a medication). In each setting, we derive closed-form expressions that allow one to simulate survival times so that survival times are related to a vector of fixed or time-invariant covariates and to a single time-varying covariate. We illustrate the utility of our closed-form expressions for simulating event times by using Monte Carlo simulations to estimate the statistical power to detect as statistically significant the effect of different types of binary time-varying covariates. This is compared with the statistical power to detect as statistically significant a binary time-invariant covariate. Copyright © 2012 John Wiley & Sons, Ltd.
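For the first covariate type described above (a dichotomous covariate that switches once from untreated to treated), the closed-form draw follows from inverting the cumulative hazard of an exponential baseline. A minimal sketch, with illustrative rate, hazard ratio, and switch time (not values from the paper):

```python
import math
import random

def sim_event_time(lam=0.1, beta=0.7, t_switch=2.0, rng=random):
    """Simulate one event time when a binary covariate switches from 0
    to 1 at t_switch (e.g., organ transplant). The hazard is lam before
    the switch and lam*exp(beta) after; inverting the cumulative hazard
    H(t) against a uniform draw gives a closed-form time.
    """
    u = rng.random()
    h = -math.log(u)                # target cumulative hazard -log(U)
    if h < lam * t_switch:          # event occurs before the switch
        return h / lam
    # remaining hazard is accrued at the post-switch rate
    return t_switch + (h - lam * t_switch) / (lam * math.exp(beta))
```

With beta > 0 the post-switch hazard is higher, so for the same uniform draw an event occurring after the switch arrives sooner than it would under beta = 0.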
Desktop Modeling and Simulation: Parsimonious, yet Effective Discrete-Event Simulation Analysis
NASA Technical Reports Server (NTRS)
Bradley, James R.
2012-01-01
This paper evaluates how quickly students can be trained to construct useful discrete-event simulation models using Excel. The typical supply chain used by many large national retailers is described, and an Excel-based simulation model of it is constructed. The set of programming and simulation skills required for development of that model is then determined; we conclude that six hours of training are required to teach those skills to MBA students. The simulation presented here contains all the fundamental functionality of a simulation model, and so our result holds for any discrete-event simulation model. We argue, therefore, that industry workers with the same technical skill set as students who have completed one year in an MBA program can be quickly trained to construct simulation models. This result gives credence to the efficacy of Desktop Modeling and Simulation, whereby simulation analyses can be quickly developed, run, and analyzed with widely available software, namely Excel.
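The fundamental functionality such a model needs — an event list, a simulation clock, and state updates — can be sketched in a few lines. This sketch uses Python rather than Excel, with a single-server queue standing in for the retailer supply chain (arrival and service values are illustrative):

```python
import heapq

def single_server_queue(arrivals, service_time):
    """Minimal discrete-event loop: a time-ordered event list, a clock
    implicit in the popped event times, and one piece of state (when the
    server next becomes free). `arrivals` lists arrival times; service
    is deterministic for simplicity. Returns job completion times.
    """
    events = [(t, 'arrive') for t in arrivals]
    heapq.heapify(events)           # future-event list, keyed by time
    server_free_at = 0.0
    completions = []
    while events:
        t, kind = heapq.heappop(events)
        if kind == 'arrive':
            start = max(t, server_free_at)      # wait if server is busy
            server_free_at = start + service_time
            heapq.heappush(events, (server_free_at, 'depart'))
        else:
            completions.append(t)
    return completions
```

The same arrival/service bookkeeping is what the Excel model implements cell-by-cell; any discrete-event simulator elaborates on this loop.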
United States Marine Corps Motor Transport Mechanic-to-Equipment Ratio
time motor transport equipment remains in maintenance at the organizational command level. This thesis uses a discrete event simulation model of the...applied to a single experiment that allows for assessment of risk of not achieving the objective. Inter-arrival time, processing time, work schedule
NASA Astrophysics Data System (ADS)
Zhang, Rong-Hua; Tao, Ling-Jiang; Gao, Chuan
2017-09-01
Large uncertainties exist in real-time predictions of the 2015 El Niño event, which have systematic intensity biases that are strongly model-dependent. It is critically important to characterize those model biases so they can be reduced appropriately. In this study, the conditional nonlinear optimal perturbation (CNOP)-based approach was applied to an intermediate coupled model (ICM) equipped with a four-dimensional variational data assimilation technique. The CNOP-based approach was used to quantify prediction errors that can be attributed to initial conditions (ICs) and model parameters (MPs). Two key MPs were considered in the ICM: one represents the intensity of the thermocline effect, and the other represents the relative coupling intensity between the ocean and atmosphere. Two experiments were performed to illustrate the effects of error corrections, one with a standard simulation and another with an optimized simulation in which errors in the ICs and MPs derived from the CNOP-based approach were optimally corrected. The results indicate that simulations of the 2015 El Niño event can be effectively improved by using CNOP-derived error correcting. In particular, the El Niño intensity in late 2015 was adequately captured when simulations were started from early 2015. Quantitatively, the Niño3.4 SST index simulated in Dec. 2015 increased to 2.8 °C in the optimized simulation, compared with only 1.5 °C in the standard simulation. The feasibility and effectiveness of using the CNOP-based technique to improve ENSO simulations are demonstrated in the context of the 2015 El Niño event. The limitations and further applications are also discussed.
Wang, Jun; Yi, Si; Li, Mengya; Wang, Lei; Song, Chengcheng
2018-04-15
We compared the effects of three key environmental factors of coastal flooding: sea level rise (SLR), land subsidence (LS), and bathymetric change (BC) in the coastal areas of Shanghai. We used the hydrological simulation model MIKE 21 to simulate flood magnitudes under multiple scenarios created from combinations of the key environmental factors projected to the years 2030 and 2050. Historical typhoons (TC9711, TC8114, TC0012, TC0205 and TC1109), which caused extremely high surges and considerable losses, were selected as reference tracks to generate potential typhoon events that would make landfall in Shanghai (SHLD), in the north of Zhejiang (ZNLD), and moving northwards in the offshore area of Shanghai (MNS) under those scenarios. The model results provided an assessment of the single and compound effects of the three factors (SLR, LS and BC) on coastal flooding in Shanghai for the next few decades. Model simulation showed that by the year 2030, the magnitude of storm flooding will increase due to the environmental changes defined by SLR, LS, and BC. In particular, the compound scenario of the three factors will generate coastal floods that are 3.1, 2.7, and 1.9 times greater than the single-factor change scenarios of, respectively, SLR, LS, and BC. Even more drastically, in 2050, the compound impact of the three factors would be 8.5, 7.5, and 23.4 times those of the single factors. This indicates that the impact of environmental changes is not a simple addition of the effects of individual factors, but can be many times greater when the projection time is longer. We also found that, for short-term scenarios, bathymetric change is the most important factor for the changes in coastal flooding, while for long-term scenarios, sea level rise and land subsidence are the major factors that coastal flood prevention and management should address. Copyright © 2017 Elsevier B.V. All rights reserved.
Sudell, Maria; Tudur Smith, Catrin; Gueyffier, François; Kolamunnage-Dona, Ruwanthi
2018-04-15
Joint modelling of longitudinal and time-to-event data is often preferred over separate longitudinal or time-to-event analyses as it can account for study dropout, error in longitudinally measured covariates, and correlation between longitudinal and time-to-event outcomes. The joint modelling literature focuses mainly on the analysis of single studies with no methods currently available for the meta-analysis of joint model estimates from multiple studies. We propose a 2-stage method for meta-analysis of joint model estimates. These methods are applied to the INDANA dataset to combine joint model estimates of systolic blood pressure with time to death, time to myocardial infarction, and time to stroke. Results are compared to meta-analyses of separate longitudinal or time-to-event models. A simulation study is conducted to contrast separate versus joint analyses over a range of scenarios. Using the real dataset, similar results were obtained by using the separate and joint analyses. However, the simulation study indicated a benefit of use of joint rather than separate methods in a meta-analytic setting where association exists between the longitudinal and time-to-event outcomes. Where evidence of association between longitudinal and time-to-event outcomes exists, results from joint models over standalone analyses should be pooled in 2-stage meta-analyses. © 2017 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
Scavenging and recombination kinetics in a radiation spur: The successive ordered scavenging events
NASA Astrophysics Data System (ADS)
Al-Samra, Eyad H.; Green, Nicholas J. B.
2018-03-01
This study describes stochastic models to investigate the successive ordered scavenging events in a spur of four radicals, a model system based on a radiation spur. Three simulation models have been developed to obtain the probabilities of the ordered scavenging events: (i) a Monte Carlo random flight (RF) model, (ii) hybrid simulations in which the reaction rate coefficient is used to generate scavenging times for the radicals, and (iii) the independent reaction times (IRT) method. The results of these simulations are found to be in agreement with one another. In addition, a detailed master equation treatment is also presented and used to extract simulated rate coefficients of the ordered scavenging reactions from the RF simulations. These rate coefficients are transient; those obtained for subsequent reactions are effectively equal and in reasonable agreement with the simple correction for competition effects that has recently been proposed.
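The hybrid approach (ii) can be illustrated with a minimal sketch: each radical receives an independent exponential scavenging time generated from a pseudo-first-order rate coefficient, and sorting those times yields the successive ordered events. Recombination between radicals is ignored here, and the rate value is illustrative, so this is only a caricature of the full model:

```python
import random

def ordered_scavenging_times(n_radicals=4, k_scav=1.0, rng=random):
    """Draw one realization of the ordered scavenging events.

    Each of n_radicals gets an independent exponential scavenging time
    with pseudo-first-order rate k_scav (= k[S]); sorting gives the 1st,
    2nd, ... ordered scavenging times. Recombination is ignored in this
    sketch, and k_scav = 1.0 is an illustrative placeholder.
    """
    times = [rng.expovariate(k_scav) for _ in range(n_radicals)]
    return sorted(times)
```

For independent exponentials the first ordered event is itself exponential with rate n_radicals * k_scav, which gives a quick sanity check on Monte Carlo output.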
Exploring the content and quality of episodic future simulations in semantic dementia.
Irish, Muireann; Addis, Donna Rose; Hodges, John R; Piguet, Olivier
2012-12-01
Semantic dementia (SD) is a progressive neurodegenerative disorder characterised by the amodal loss of semantic knowledge in the context of relatively preserved recent episodic memory. Recent studies have demonstrated that despite relatively intact episodic memory the capacity for future simulation in SD is profoundly impaired, resulting in an asymmetric profile where past retrieval is significantly better than future simulation (referred to as a past>future effect). Here, we sought to identify the origins of this asymmetric profile by conducting a fine-grained analysis of the contextual details provided during past retrieval and future simulation in SD. Participants with SD (n=14), Alzheimer's disease (n=11), and healthy controls (n=14) had previously completed an experimental past-future interview in which they generated three past events from the previous year, and three future events in the next year, and provided subjective qualitative ratings of vividness, emotional valence, emotional intensity, task difficulty, and personal significance for each event described. Our results confirmed the striking impairment for future simulation in SD, despite a relative preservation of past episodic retrieval. Examination of the contextual details provided for past memories and future simulations revealed significant impairments irrespective of contextual detail type for future simulations in SD, and demonstrated that the future thinking deficit in this cohort was driven by a marked decline in the provision of internal (episodic) event details. In contrast with this past>future effect for internal event details, SD patients displayed a future>past effect for external (non-episodic) event details. Analyses of the qualitative ratings provided for past and future events indicated that SD patients' phenomenological experience did not differ between temporal conditions. 
Our findings underscore the fact that successful extraction of episodic elements from the past is not sufficient for the generation of novel future simulations in SD. The notable disconnect between objective task performance and patients' subjective experience during future simulation likely reflects the tendency of SD patients to recast entire past events into the future condition. Accordingly, the familiarity of the recapitulated details results in similar ratings of vividness and emotionality across temporal conditions, despite marked differences in the richness of contextual details as the patient moves from the past to the future. Copyright © 2012 Elsevier Ltd. All rights reserved.
The Gypsy Moth Event Monitor for FVS: a tool for forest and pest managers
Kurt W. Gottschalk; Anthony W. Courter
2007-01-01
The Gypsy Moth Event Monitor is a program that simulates the effects of gypsy moth, Lymantria dispar (L.), within the confines of the Forest Vegetation Simulator (FVS). Individual stands are evaluated with a susceptibility index system to determine the vulnerability of the stand to the effects of gypsy moth. A gypsy moth outbreak is scheduled in the...
Single-Nanoparticle Photoelectrochemistry at a Nanoparticulate TiO2-Filmed Ultramicroelectrode.
Peng, Yue-Yi; Ma, Hui; Ma, Wei; Long, Yi-Tao; Tian, He
2018-03-26
An ultrasensitive photoelectrochemical method for achieving real-time detection of single nanoparticle collision events is presented. Using a micrometer-thick nanoparticulate TiO2-filmed Au ultramicroelectrode (TiO2@Au UME), a sub-millisecond photocurrent transient was observed for an individual N719-tagged TiO2 (N719@TiO2) nanoparticle and is due to the instantaneous collision process. Owing to a trap-limited electron diffusion process as the rate-limiting step, a random three-dimensional diffusion model was developed to simulate electron transport dynamics in the TiO2 film. The combination of theoretical simulation and high-resolution photocurrent measurement allows electron-transfer information of a single N719@TiO2 nanoparticle to be quantified at single-molecule accuracy, and the electron diffusivity and electron-collection efficiency of the TiO2@Au UME to be estimated. This method provides a test for studies of photoinduced electron transfer at the single-nanoparticle level. © 2018 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
NASA Technical Reports Server (NTRS)
Allen, Gregory
2011-01-01
The NEPP Reconfigurable Field-Programmable Gate Array (FPGA) task has been charged to evaluate reconfigurable FPGA technologies for use in space. Under this task, the Xilinx single-event-immune, reconfigurable FPGA (SIRF) XQR5VFX130 device was evaluated for SEE. Additionally, the Altera Stratix-IV and SiliconBlue iCE65 were screened for single-event latchup (SEL).
NASA Astrophysics Data System (ADS)
Lapusta, N.; Liu, Y.
2007-12-01
Heterogeneity in fault properties can have a significant effect on dynamic rupture propagation and aseismic slip. It is often assumed that a fixed heterogeneity would have a similar effect on fault slip throughout the slip history. We investigate dynamic rupture interaction with a fault patch of higher normal stress over several earthquake cycles in a three-dimensional model. We find that the influence of the heterogeneity on dynamic events varies significantly and depends on prior slip history. We consider a planar strike-slip fault governed by rate-and-state friction and driven by slow tectonic loading on the deeper extension of the fault. The 30 km by 12 km velocity-weakening region, which is potentially seismogenic, is surrounded by a steady-state velocity-strengthening region. The normal stress is constant over the fault, except in a circular patch 2 km in diameter located in the seismogenic region, where the normal stress is higher than on the rest of the fault. Our simulations employ the methodology developed by Lapusta and Liu (AGU, 2006), which is able to resolve both dynamic and quasi-static stages of spontaneous slip accumulation in a single computational procedure. The initial shear stress is constant on the fault, except in a small area where it is higher and where the first large dynamic event initiates. For patches with 20%, 40%, and 60% higher normal stress, the first event has significant dynamic interaction with the patch, creating a rupture speed decrease followed by a supershear burst and larger slip around the patch. Hence, in the first event, the patch acts as a seismic asperity. For the case of 100% higher normal stress, the rupture is not able to break the patch in the first event. In subsequent dynamic events, the behavior depends on the strength of the heterogeneity. For the patch with 20% higher normal stress, dynamic rupture in subsequent events propagates through the patch without any noticeable perturbation in rupture speed or slip.
In particular, supershear propagation and additional slip accumulation around the patch are never repeated in the simulated history of the fault, and the patch stops manifesting itself as a seismic asperity. This is due to the higher shear stress that is established at the patch after the first earthquake cycle. For patches with higher normal stress, shear stress redistribution also occurs, but it is less effective. The patches with 40% and 60% higher normal stress continue to affect rupture speed and fault slip in some of the subsequent events, although the effect is much diminished with respect to the first event. For example, there are no supershear bursts. The patch with 100% higher normal stress is first broken in the second large event, and it retains significant influence on rupture speed and slip throughout the fault history, occasionally resulting in supershear bursts. Additional slip complexity emerges for patches with 40% and higher normal stress contrast. Since higher normal stress corresponds to a smaller nucleation size, nucleation of some events moves from the rheological transitions (where nucleation occurs in the cases with no stronger patch and with the patch of 20% higher normal stress) to the patches of higher normal stress. The patches nucleate both large, model-spanning events and small events that arrest soon after exiting the patch. Hence, not every event that originates at the location of a potential seismic asperity is destined to be large, as its subsequent propagation is significantly influenced by the state of stress outside the patch.
Chen, Yong; Luo, Sheng; Chu, Haitao; Wei, Peng
2013-05-01
Multivariate meta-analysis is useful in combining evidence from independent studies that involve several comparisons among groups based on a single outcome. For binary outcomes, the commonly used statistical models for multivariate meta-analysis are multivariate generalized linear mixed effects models, which assume that risks, after some transformation, follow a multivariate normal distribution with possible correlations. In this article, we consider an alternative model for multivariate meta-analysis in which the risks are modeled by the multivariate beta distribution proposed by Sarmanov (1966). This model has several attractive features compared to the conventional multivariate generalized linear mixed effects models, including simplicity of the likelihood function, no need to specify a link function, and a closed-form expression of the distribution functions for study-specific risk differences. We investigate the finite sample performance of this model through simulation studies and illustrate its use with an application to a multivariate meta-analysis of adverse events of tricyclic antidepressant treatment in clinical trials.
Charge collection and SEU mechanisms
NASA Astrophysics Data System (ADS)
Musseau, O.
1994-01-01
In the interaction of cosmic ions with microelectronic devices, a dense electron-hole plasma is created along the ion track. Carriers are separated and transported by the electric field and under the action of the concentration gradient. The subsequent collection of these carriers induces a transient current at some electrical node of the device. This "ionocurrent" (single-ion-induced current) acts as any electrical perturbation in the device, propagating in the circuit and inducing failures. In bistable systems (registers, memories) the stored data can be upset. In clocked devices (microprocessors) the parasitic perturbation may propagate through the device to the outputs. This type of failure only affects the information and does not degrade the functionality of the device. The purpose of this paper is to review the mechanisms of single event upset in microelectronic devices. Experimental and theoretical results are presented, and current questions and problems are discussed. A brief introduction recalls the creation of the dense plasma of electron-hole pairs. The basic processes for charge collection in a simple np junction (drift and diffusion) are presented. The funneling-field effect is discussed, and experimental results are compared to numerical simulations and semi-empirical models. Charge collection in actual microelectronic structures is then presented. Due to parasitic elements, coupling effects are observed. Geometrical effects in densely packed structures result in multiple errors. Electronic couplings are due to the excess carriers, acting as minority carriers, that trigger parasitic bipolar transistors. Single event upset of memory cells is discussed, based on numerical and experimental data. The main parameters for device characterization are presented. From the physical interpretation of charge collection mechanisms, the intrinsic sensitivity of various microelectronic technologies is determined and compared to experimental data.
Scaling laws and future trends are finally discussed.
Recent Radiation Damage and Single Event Effect Results for Candidate Spacecraft Electronics
NASA Technical Reports Server (NTRS)
OBryan, Martha V.; LaBel, Kenneth A.; Reed, Robert A.; Ladbury, Ray L.; Howard, James W., Jr.; Buchner, Stephen P.; Barth, Janet L.; Kniffen, Scott D.; Seidleck, Christina M.; Marshall, Cheryl J.;
2001-01-01
We present data on the vulnerability of a variety of candidate spacecraft electronics to proton and heavy-ion induced single-event effects and proton-induced damage. Devices tested include optoelectronics, digital, analog, linear bipolar, hybrid devices, Analog-to-Digital Converters (ADCs), Digital-to-Analog Converters (DACs), and DC-DC converters, among others.
Current Single Event Effects and Radiation Damage Results for Candidate Spacecraft Electronics
NASA Technical Reports Server (NTRS)
OBryan, Martha V.; LaBel, Kenneth A.; Reed, Robert A.; Ladbury, Ray L.; Howard, James W., Jr.; Kniffin, Scott D.; Poivey, Christian; Buchner, Stephen P.; Bings, John P.; Titus, Jeff L.
2002-01-01
We present data on the vulnerability of a variety of candidate spacecraft electronics to proton and heavy ion induced single event effects, total ionizing dose and proton-induced damage. Devices tested include optoelectronics, digital, analog, linear bipolar, hybrid devices, Analog-to-Digital Converters (ADCs), Digital-to-Analog Converters (DACs), and DC-DC converters, among others.
NASA Technical Reports Server (NTRS)
Lauenstein, Jean-Marie
2016-01-01
The JEDEC JESD57 test standard, Procedures for the Measurement of Single-Event Effects in Semiconductor Devices from Heavy-Ion Irradiation, is undergoing its first revision since 1996. This presentation will provide an overview of some of the key proposed updates to the document.
NASA Astrophysics Data System (ADS)
Palumbo, Manuela; Ascione, Alessandra; Santangelo, Nicoletta; Santo, Antonio
2017-04-01
We present the first results of an analysis of flood hazard in ungauged mountain catchments that are associated with intensely urbanized alluvial fans. Assessment of hydrological hazard has been based on the integration of rainfall/runoff modelling of drainage basins with geomorphological analysis and mapping. Some small and steep, ungauged mountain catchments located in various areas of the southern Apennines, in southern Italy, have been chosen as test sites. Over the last centuries, the selected basins have been subject to heavy, intense precipitation events, which have caused flash floods with serious damage in the associated alluvial fan areas. Available spatial information (regional technical maps, DEMs, land use maps, geological/lithological maps, orthophotos) and an automated GIS-based procedure (ArcGIS tools and ArcHydro tools) have been used to extract morphological, hydrological and hydraulic parameters. Such parameters have been used to run the HEC (Hydrologic Engineering Center of the US Army Corps of Engineers) software (GeoHMS, GeoRAS, HMS and RAS), based on rainfall-runoff models, which has allowed the hydrological and hydraulic simulations. As the floods that occurred in the studied catchments were debris-flow dominated, solid-load transport has also been simulated. In order to validate the simulations, we have compared results of the modelling with the effects produced by past floods. Such effects have been quantified through estimations of both the sediment volumes within each catchment that have the potential to be mobilised (pre-event) during a sediment transfer event, and the volume of sediments delivered by the debris flows at the basins' outlets (post-event). The post-event sediment volume has been quantified through post-event surveys and Lidar data. Evaluation of the pre-event sediment volumes in single catchments has been based on mapping of sediment storages that may constitute source zones of bed load transport and debris flows.
This approach employs process-based geomorphological mapping, based on data derived from GIS analysis of high-resolution DEMs, field measurements and aerial photograph interpretation. Our integrated approach, which allows quantification of the flow rate and a semi-quantitative assessment of the sediment that can be mobilized during hydro-meteorological events, is applied for the first time to torrential catchments of the southern Apennines and may significantly contribute to predictive studies aimed at risk mitigation in the study region.
NASA Astrophysics Data System (ADS)
Higginson, Drew P.
2017-11-01
We describe and justify a full-angle scattering (FAS) method to faithfully reproduce the accumulated differential angular Rutherford scattering probability distribution function (pdf) of particles in a plasma. The FAS method splits the scattering events into two regions. At small angles it is described by cumulative scattering events resulting, via the central limit theorem, in a Gaussian-like pdf; at larger angles it is described by single-event scatters and retains a pdf that follows the form of the Rutherford differential cross-section. The FAS method is verified using discrete Monte-Carlo scattering simulations run at small timesteps to include each individual scattering event. We identify the FAS regime of interest as where the ratio of temporal/spatial scale-of-interest to slowing-down time/length is from 10^-3 to 0.3-0.7; the upper limit corresponds to a Coulomb logarithm of 20-2, respectively. Two test problems, high-velocity interpenetrating plasma flows and keV-temperature ion equilibration, are used to highlight systems where including FAS is important to capture relevant physics.
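The two-region idea above can be sketched as a hybrid sampler (an illustrative toy only, not the paper's calibrated method: the tail exponent, splitting probability, and Gaussian width are placeholder assumptions):

```python
import math
import random

def sample_fas_angle(theta_c, sigma, p_tail, rng):
    """Two-region hybrid draw: with probability 1 - p_tail, return a
    Gaussian multiple-scattering angle (the central-limit region of many
    small kicks); with probability p_tail, return a single large-angle
    scatter from a Rutherford-like tail f(t) = 2*theta_c**2 / t**3 for
    t >= theta_c, sampled by inverting its CDF F(t) = 1 - (theta_c/t)**2."""
    if rng.random() < p_tail:
        u = rng.random()                     # u in [0, 1)
        return theta_c / math.sqrt(1.0 - u)  # inverse-CDF tail sample
    return abs(rng.gauss(0.0, sigma))
```

Summing many small-angle draws recovers Gaussian statistics, while the tail draws preserve the single-scatter Rutherford form of the distribution.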
NASA Astrophysics Data System (ADS)
Celik, Cihangir
Advances in microelectronics result in sub-micrometer electronic technologies as predicted by Moore's Law (1965), which states that the number of transistors in a given space doubles roughly every two years. Most memory architectures available today have sub-micrometer transistor dimensions. The International Technology Roadmap for Semiconductors (ITRS), a continuation of Moore's Law, predicts that Dynamic Random Access Memory (DRAM) will have an average half-pitch size of 50 nm and Microprocessor Units (MPU) will have an average gate length of 30 nm over the period of 2008-2012. Decreases in the dimensions satisfy the producer and consumer requirements of low power consumption, more data storage for a given space, faster clock speed, and portability of integrated circuits (IC), particularly memories. On the other hand, these properties also lead to a higher susceptibility of IC designs to temperature, magnetic interference, power-supply and environmental noise, and radiation. Radiation can directly or indirectly affect device operation. When a single energetic particle strikes a sensitive node in a micro-electronic device, it can cause a permanent or transient malfunction in the device. This behavior is called a Single Event Effect (SEE). SEEs are mostly transient errors that generate an electric pulse which alters the state of a logic node in the memory device without having a permanent effect on the functionality of the device. This is called a Single Event Upset (SEU) or Soft Error. In contrast to SEUs, Single Event Latchup (SEL), Single Event Gate Rupture (SEGR), and Single Event Burnout (SEB) have permanent effects on device operation, and a system reset or recovery is needed to return to proper operation. The rate at which a device or system encounters soft errors is defined as the Soft Error Rate (SER).
The semiconductor industry has been struggling with SEEs and is taking the necessary measures in order to continue to improve system designs in nano-scale technologies. Prevention of SEEs has been studied and applied in the semiconductor industry by including radiation protection precautions in the system architecture or by using corrective algorithms in the system operation. Decreasing the 10B content (20% of natural boron) of the borophosphosilicate glass (BPSG) layers that are conventionally used in the fabrication of semiconductor devices was one of the major radiation protection approaches for the system architecture. Neutron interaction in the BPSG layer was the origin of the SEEs because of the 10B(n,alpha)7Li reaction products. Both of the particles produced have the capability of ionization in the silicon substrate region, whose thickness is comparable to the ranges of these particles. Using the soft error phenomenon in exactly the opposite manner from the semiconductor industry can provide a new neutron detection system based on the SERs in semiconductor memories. By investigating the soft error mechanisms in the available semiconductor memories and enhancing the soft error occurrences in these devices, one can convert all memory-using intelligent systems into portable, power-efficient, direction-dependent neutron detectors. The Neutron Intercepting Silicon Chip (NISC) project aims to achieve this goal by introducing 10B-enriched BPSG layers to the semiconductor memory architectures. This research addresses the development of a simulation tool, the NISC Soft Error Analysis Tool (NISCSAT), for soft error modeling and analysis in semiconductor memories to provide basic design considerations for the NISC. NISCSAT performs particle transport and calculates the soft error probabilities, or SER, depending on the energy depositions of the particles in a given memory node model of the NISC.
Soft error measurements were performed with commercially available, off-the-shelf semiconductor memories and microprocessors to observe soft error variations with the neutron flux and memory supply voltage. Measurement results show that soft errors in the memories increase proportionally with the neutron flux, whereas they decrease with increasing supply voltage. NISC design considerations include the effects of device scaling, the 10B content in the BPSG layer, the incoming neutron energy, and the critical charge of the node. NISCSAT simulations were performed with various memory node models to account for these effects. Device scaling simulations showed that any further increase in the thickness of the BPSG layer beyond 2 μm causes self-shielding of the incoming neutrons by the BPSG layer and results in lower detection efficiencies. Moreover, if the BPSG layer is located more than 4 μm from the depletion region in the node, there are no soft errors in the node, because both reaction products have shorter ranges in the silicon or any possible node layers. Calculation results regarding the critical charge indicated that the mean charge deposition of the reaction products in the sensitive volume of the node is about 15 fC. It is evident that the NISC design should have a memory architecture with a critical charge of 15 fC or less to obtain higher detection efficiencies. Moreover, the sensitive volume should be placed in close proximity to the BPSG layers so that its location would be within the range of the alpha and 7Li particles. Results showed that the distance between the BPSG layer and the sensitive volume should be less than 2 μm to increase the detection efficiency of the NISC.
The incoming neutron energy was also investigated by simulations, and the results showed that the NISC neutron detection efficiency is related to the neutron cross sections of the 10B(n,alpha)7Li reaction; e.g., the ratio of the thermal (0.0253 eV) to fast (2 MeV) neutron detection efficiencies is approximately 8000:1. Environmental conditions and their effects on the NISC performance were also studied in this research. Cosmic rays were modeled and simulated via NISCSAT to investigate the detection reliability of the NISC. Simulation results show that cosmic rays account for less than 2% of the soft errors for thermal neutron detection. On the other hand, fast neutron detection by the NISC, which already has a poor efficiency due to the low neutron cross sections, becomes almost impossible at higher altitudes where the cosmic ray fluxes and their energies are higher. NISCSAT simulations regarding the soft error dependency of the NISC on temperature and electromagnetic fields show that there are no significant effects on the NISC detection efficiency. Furthermore, the detection efficiency of the NISC decreases with both air humidity and the use of moderators, since the incoming neutrons scatter away before reaching the memory surface.
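The critical-charge criterion used in the simulations above is essentially a unit conversion from deposited energy to liberated charge. A small sketch (the 3.6 eV-per-pair figure is the standard value for silicon; full charge collection in the sensitive volume is an assumption):

```python
E_PAIR_SI_EV = 3.6   # mean energy to create one electron-hole pair in silicon
Q_E_FC = 1.602e-4    # elementary charge, expressed in femtocoulombs

def deposited_charge_fc(energy_mev):
    """Charge (fC) liberated by depositing energy_mev (MeV) in silicon."""
    return energy_mev * 1.0e6 / E_PAIR_SI_EV * Q_E_FC

def causes_soft_error(energy_mev, q_crit_fc=15.0):
    """Flag a soft error when the liberated charge reaches the node's
    critical charge (15 fC, per the NISCSAT results above)."""
    return deposited_charge_fc(energy_mev) >= q_crit_fc
```

For example, the 1.47 MeV alpha from the dominant 10B(n,alpha)7Li branch liberates roughly 65 fC if fully stopped in the sensitive volume, comfortably above a 15 fC critical charge.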
Callan, Daniel E; Durantin, Gautier; Terzibas, Cengiz
2015-01-01
Application of neuro-augmentation technology based on dry-wireless EEG may be considerably beneficial for aviation and space operations because of the inherent dangers involved. In this study we evaluate classification performance of perceptual events using a dry-wireless EEG system during motion-platform-based flight simulation and actual flight in an open cockpit biplane to determine if the system can be used in the presence of considerable environmental and physiological artifacts. A passive task involving 200 random auditory presentations of a chirp sound was used for evaluation. The advantage of this auditory task is that it does not interfere with the perceptual motor processes involved with piloting the plane. Classification was based on identifying the presentation of a chirp sound vs. silent periods. Independent component analysis (ICA) and Kalman filtering were assessed for their ability to enhance classification performance by separating brain activity related to the auditory event from non-task-related brain activity and artifacts. The results of permutation testing revealed that single trial classification of the presence or absence of an auditory event was significantly above chance for all conditions on a novel test set. The best performance was achieved with both ICA and Kalman filtering relative to no processing: Platform Off (83.4% vs. 78.3%), Platform On (73.1% vs. 71.6%), Biplane Engine Off (81.1% vs. 77.4%), and Biplane Engine On (79.2% vs. 66.1%). This experiment demonstrates that dry-wireless EEG can be used in environments with considerable vibration, wind, acoustic noise, and physiological artifacts and achieve the good single trial classification performance that is necessary for future successful application of neuro-augmentation technology based on brain-machine interfaces.
Competing Pathways and Multiple Folding Nuclei in a Large Multidomain Protein, Luciferase.
Scholl, Zackary N; Yang, Weitao; Marszalek, Piotr E
2017-05-09
Proteins obtain their final functional configuration through incremental folding with many intermediate steps in the folding pathway. If known, these intermediate steps could be valuable new targets for designing therapeutics and the sequence of events could elucidate the mechanism of refolding. However, determining these intermediate steps is hardly an easy feat, and has been elusive for most proteins, especially large, multidomain proteins. Here, we effectively map part of the folding pathway for the model large multidomain protein, Luciferase, by combining single-molecule force-spectroscopy experiments and coarse-grained simulation. Single-molecule refolding experiments reveal the initial nucleation of folding while simulations corroborate these stable core structures of Luciferase, and indicate the relative propensities for each to propagate to the final folded native state. Both experimental refolding and Monte Carlo simulations of Markov state models generated from simulation reveal that Luciferase most often folds along a pathway originating from the nucleation of the N-terminal domain, and that this pathway is the least likely to form nonnative structures. We then engineer truncated variants of Luciferase whose sequences corresponded to the putative structure from simulation and we use atomic force spectroscopy to determine their unfolding and stability. These experimental results corroborate the structures predicted from the folding simulation and strongly suggest that they are intermediates along the folding pathway. Taken together, our results suggest that initial Luciferase refolding occurs along a vectorial pathway and also suggest a mechanism that chaperones may exploit to prevent misfolding. Copyright © 2017 Biophysical Society. Published by Elsevier Inc. All rights reserved.
Soil erosion under multiple time-varying rainfall events
NASA Astrophysics Data System (ADS)
Heng, B. C. Peter; Barry, D. Andrew; Jomaa, Seifeddine; Sander, Graham C.
2010-05-01
Soil erosion is a function of many factors and process interactions. An erosion event produces changes in surface soil properties such as texture and hydraulic conductivity. These changes in turn alter the erosion response to subsequent events. Laboratory-scale soil erosion studies have typically focused on single independent rainfall events with constant rainfall intensities. This study investigates the effect of multiple time-varying rainfall events on soil erosion using the EPFL erosion flume. The rainfall simulator comprises ten Veejet nozzles mounted on oscillating bars 3 m above a 6 m × 2 m flume. Spray from the nozzles is applied onto the soil surface in sweeps; rainfall intensity is thus controlled by varying the sweeping frequency. Freshly-prepared soil with a uniform slope was subjected to five rainfall events at daily intervals. In each 3-h event, rainfall intensity was ramped up linearly to a maximum of 60 mm/h and then stepped down to zero. Runoff samples were collected and analysed for particle size distribution (PSD) as well as total sediment concentration. We investigate whether there is a hysteretic relationship between sediment concentration and discharge within each event and how this relationship changes from event to event. Trends in the PSD of the eroded sediment are discussed and correlated with changes in sediment concentration. Close-up imagery of the soil surface following each event highlights changes in surface soil structure with time. This study enhances our understanding of erosion processes in the field, with corresponding implications for soil erosion modelling.
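Whether sediment concentration leads or lags discharge within an event can be quantified with a simple hysteresis index. One crude form (a generic sketch, not the study's specific analysis) compares mean concentrations on the rising and falling limbs of the hydrograph:

```python
def hysteresis_index(discharge, concentration):
    """Crude clockwise/anticlockwise indicator for one event: compare the
    mean sediment concentration on the rising limb (up to peak discharge)
    with the falling limb. Positive -> clockwise hysteresis (sediment
    peaks before discharge, e.g. supply exhaustion); negative ->
    anticlockwise (sediment peaks after discharge)."""
    peak = max(range(len(discharge)), key=discharge.__getitem__)
    rising = concentration[:peak + 1]
    falling = concentration[peak:]
    mean = lambda xs: sum(xs) / len(xs)
    return mean(rising) / mean(falling) - 1.0
```

Matched-discharge comparisons are more rigorous, but this limb-averaged proxy is enough to track how the sense and strength of hysteresis change from event to event.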
The distributed production system of the SuperB project: description and results
NASA Astrophysics Data System (ADS)
Brown, D.; Corvo, M.; Di Simone, A.; Fella, A.; Luppi, E.; Paoloni, E.; Stroili, R.; Tomassetti, L.
2011-12-01
The SuperB experiment needs large samples of Monte Carlo simulated events in order to finalize the detector design and to estimate the data analysis performance. The requirements are beyond the capabilities of a single computing farm, so a distributed production model capable of exploiting the existing HEP worldwide distributed computing infrastructure is needed. In this paper we describe the set of tools that have been developed to manage the production of the required simulated events. The production of events follows three main phases: distribution of input data files to the remote site Storage Elements (SE); job submission, via the SuperB GANGA interface, to all available remote sites; and output file transfer to the CNAF repository. The job workflow includes procedures for consistency checking, monitoring, data handling and bookkeeping. A replication mechanism allows storing the job output on the local site SE. Results from the 2010 official productions are reported.
Limits on the Efficiency of Event-Based Algorithms for Monte Carlo Neutron Transport
DOE Office of Scientific and Technical Information (OSTI.GOV)
Romano, Paul K.; Siegel, Andrew R.
The traditional form of parallelism in Monte Carlo particle transport simulations, wherein each individual particle history is considered a unit of work, does not lend itself well to data-level parallelism. Event-based algorithms, which were originally used for simulations on vector processors, may offer a path toward better utilizing data-level parallelism in modern computer architectures. In this study, a simple model is developed for estimating the efficiency of the event-based particle transport algorithm under two sets of assumptions. Data collected from simulations of four reactor problems using OpenMC were then used in conjunction with the models to calculate the speedup due to vectorization as a function of two parameters: the size of the particle bank and the vector width. When each event type is assumed to have constant execution time, the achievable speedup is directly related to the particle bank size. We observed that the bank size generally needs to be at least 20 times greater than the vector size in order to achieve a vector efficiency greater than 90%. When the execution times for events are allowed to vary, however, the vector speedup is also limited by differences in execution time for events being carried out in a single event-iteration. For some problems, this implies that vector efficiencies over 50% may not be attainable. While there are many factors impacting the performance of an event-based algorithm that are not captured by our model, it nevertheless provides insights into factors that may be limiting in a real implementation.
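The constant-execution-time case can be illustrated with a toy simulation (not the authors' model: the geometric survival probability and the parameter values below are assumptions for illustration). As the bank depletes, the last partial vector of each iteration wastes lanes, so efficiency grows with the bank-to-width ratio:

```python
import random

def vector_efficiency(bank_size, vector_width, survival_prob=0.9, seed=1):
    """Toy event-based transport loop: every live particle undergoes one
    event per iteration, and an iteration costs ceil(alive/vector_width)
    constant-time vector operations. Unfilled SIMD lanes are wasted, so
    efficiency = useful lane-slots / total lane-slots."""
    rng = random.Random(seed)
    alive, useful, total = bank_size, 0, 0
    while alive > 0:
        ops = -(-alive // vector_width)  # ceil(alive / vector_width)
        useful += alive
        total += ops * vector_width
        # each particle independently survives to the next event
        alive = sum(1 for _ in range(alive) if rng.random() < survival_prob)
    return useful / total
```

In this toy model a bank much larger than the vector width keeps the lanes mostly full, mirroring the bank-size trend reported above.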
Simulation of Runoff Concentration on Arable Fields and the Impact of Adapted Tillage Practises
NASA Astrophysics Data System (ADS)
Winter, F.; Disse, M.
2012-04-01
Conservation tillage can reduce runoff on arable fields. Crop residues remaining on the fields provide a seasonally constant ground cover. This additional soil cover not only decreases the drying of the topsoil but also reduces the mechanical impact of raindrops and the soil crusting that can result. Further implications of the mulch layer can be observed during heavy precipitation events and the resulting surface runoff. The natural roughness of the ground surface is further increased and thus the flow velocity is decreased, resulting in an enhanced ability of runoff to infiltrate into the soil (so-called Runon-Infiltration). The hydrological model system WaSiM-ETH hitherto simulates runoff concentration by a flow time grid in the catchment, which is derived from topographical features of the catchment during the preprocessing analysis. The retention of both surface runoff and interflow is modelled by a single reservoir in every discrete flow time zone until the outlet of a subcatchment is reached. For a more detailed analysis of the flow paths in catchments of the lower mesoscale (< 1 km2) the model was extended by a kinematic wave approach for the surface runoff concentration. This allows the simulation of small-scale variations in runoff generation and their temporal distribution in detail, so that adapted tillage systems can be assessed. On individual fields of the Scheyern research farm north-west of Munich it can be shown how different crops and tillage practises influence runoff generation and concentration during single heavy precipitation events. From the simulation of individual events in agricultural areas of the lower mesoscale, hydrologically susceptible areas can be identified and the positive impact of adapted agricultural management on runoff generation and concentration can be quantified.
NASA Astrophysics Data System (ADS)
Adloff, C.; Blaha, J.; Blaising, J.-J.; Drancourt, C.; Espargilière, A.; Gaglione, R.; Geffroy, N.; Karyotakis, Y.; Prast, J.; Vouters, G.; Francis, K.; Repond, J.; Smith, J.; Xia, L.; Baldolemar, E.; Li, J.; Park, S. T.; Sosebee, M.; White, A. P.; Yu, J.; Buanes, T.; Eigen, G.; Mikami, Y.; Watson, N. K.; Goto, T.; Mavromanolakis, G.; Thomson, M. A.; Ward, D. R.; Yan, W.; Benchekroun, D.; Hoummada, A.; Khoulaki, Y.; Benyamna, M.; Cârloganu, C.; Fehr, F.; Gay, P.; Manen, S.; Royer, L.; Blazey, G. C.; Dyshkant, A.; Lima, J. G. R.; Zutshi, V.; Hostachy, J.-Y.; Morin, L.; Cornett, U.; David, D.; Falley, G.; Gadow, K.; Göttlicher, P.; Günter, C.; Hermberg, B.; Karstensen, S.; Krivan, F.; Lucaci-Timoce, A.-I.; Lu, S.; Lutz, B.; Morozov, S.; Morgunov, V.; Reinecke, M.; Sefkow, F.; Smirnov, P.; Terwort, M.; Vargas-Trevino, A.; Feege, N.; Garutti, E.; Marchesini, I.; Ramilli, M.; Eckert, P.; Harion, T.; Kaplan, A.; Schultz-Coulon, H.-Ch; Shen, W.; Stamen, R.; Tadday, A.; Bilki, B.; Norbeck, E.; Onel, Y.; Wilson, G. W.; Kawagoe, K.; Dauncey, P. 
D.; Magnan, A.-M.; Wing, M.; Salvatore, F.; Calvo Alamillo, E.; Fouz, M.-C.; Puerta-Pelayo, J.; Balagura, V.; Bobchenko, B.; Chadeeva, M.; Danilov, M.; Epifantsev, A.; Markin, O.; Mizuk, R.; Novikov, E.; Rusinov, V.; Tarkovsky, E.; Kirikova, N.; Kozlov, V.; Smirnov, P.; Soloviev, Y.; Buzhan, P.; Dolgoshein, B.; Ilyin, A.; Kantserov, V.; Kaplin, V.; Karakash, A.; Popova, E.; Smirnov, S.; Kiesling, C.; Pfau, S.; Seidel, K.; Simon, F.; Soldner, C.; Szalay, M.; Tesar, M.; Weuste, L.; Bonis, J.; Bouquet, B.; Callier, S.; Cornebise, P.; Doublet, Ph; Dulucq, F.; Faucci Giannelli, M.; Fleury, J.; Li, H.; Martin-Chassard, G.; Richard, F.; de la Taille, Ch; Pöschl, R.; Raux, L.; Seguin-Moreau, N.; Wicek, F.; Anduze, M.; Boudry, V.; Brient, J.-C.; Jeans, D.; Mora de Freitas, P.; Musat, G.; Reinhard, M.; Ruan, M.; Videau, H.; Bulanek, B.; Zacek, J.; Cvach, J.; Gallus, P.; Havranek, M.; Janata, M.; Kvasnicka, J.; Lednicky, D.; Marcisovsky, M.; Polak, I.; Popule, J.; Tomasek, L.; Tomasek, M.; Ruzicka, P.; Sicho, P.; Smolik, J.; Vrba, V.; Zalesak, J.; Belhorma, B.; Ghazlane, H.; Takeshita, T.; Uozumi, S.; Sauer, J.; Weber, S.; Zeitnitz, C.
2012-09-01
The energy resolution of a highly granular 1 m3 analogue scintillator-steel hadronic calorimeter is studied using charged pions with energies from 10 GeV to 80 GeV at the CERN SPS. The energy resolution for single hadrons is determined to be approximately 58%/√(E/GeV). This resolution is improved to approximately 45%/√(E/GeV) with software compensation techniques. These techniques take advantage of the event-by-event information about the substructure of hadronic showers which is provided by the imaging capabilities of the calorimeter. The energy reconstruction is improved either with corrections based on the local energy density or by applying a single correction factor to the event energy sum derived from a global measure of the shower energy density. The application of the compensation algorithms to Geant4 simulations yields resolution improvements comparable to those observed for real data.
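The local-energy-density correction can be caricatured as a per-hit reweighting (a toy sketch, not the collaboration's algorithm: the two-bin weighting, threshold, and weight values are invented for illustration):

```python
def compensated_energy(cell_energies, cell_volumes, rho_split=0.5,
                       w_dense=0.85, w_sparse=1.25):
    """Reweight each calorimeter hit by its local energy density
    (energy / volume): dense, electromagnetic-like deposits are weighted
    down and sparse, hadronic-like deposits up. This reduces the
    event-by-event fluctuation of the electromagnetic shower fraction,
    the dominant contribution to hadronic resolution."""
    total = 0.0
    for e, v in zip(cell_energies, cell_volumes):
        rho = e / v
        total += e * (w_dense if rho > rho_split else w_sparse)
    return total
```

Real implementations use continuous, energy-dependent weight functions tuned on simulation; the two-bin split here only illustrates the direction of the correction.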
NASA Astrophysics Data System (ADS)
Covert, Ashley; Jordan, Peter
2010-05-01
To study the effects of wildfire burn severity on runoff generation and soil erosion from high intensity rainfall, we constructed an effective yet simple rainfall simulator that was inexpensive, portable and easily operated by two people on steep, forested slopes in southern British Columbia, Canada. The entire apparatus, including simulator, pumps, hoses, collapsible water bladders and sample bottles, was designed to fit into a single full-sized pick-up truck. The three-legged simulator extended to approximately 3.3 metres above ground on steep slopes and used a single Spraying Systems 1/2HH-30WSQ nozzle which can easily be interchanged for other sized nozzles. Rainfall characteristics were measured using a digital camera which took images of the raindrops against a grid. Median drop size and velocity 5 cm above ground were measured; the drops were found to be 3/4 of the size of natural raindrops of that diameter class, and fell 7% faster than terminal velocity. The simulator was used for experiments on runoff and erosion on sites burned in 2007 by two wildfires in southern British Columbia. Simulations were repeated one and two years after the fires. Rainfall was simulated at an average rate of 67 mm hr-1 over a 1 m2 plot for 20 minutes. This rainfall rate is similar to the 100 year return period rainfall intensity for this duration at a nearby weather station. Simulations were conducted on five replicate 1 m2 plots in each experimental unit including high burn severity, moderate burn severity, unburned, and unburned with forest floor removed. During the simulation a sample was collected for 30 seconds every minute, with two additional samples until runoff ceased, resulting in 22 samples per simulation. Runoff, overland flow coefficient, infiltration and sediment yield were compared between treatments. Additional simulations were conducted immediately after a 2009 wildfire to test different mulch treatments.
Typical results showed that runoff on plots with high burn severity and with forest floor removed was similar, reaching on average a steady rate of about 60% of rainfall rate after about 7 minutes. Runoff on unburned plots with intact forest floor was much lower, typically less than 20% of rainfall rate. Sediment yield was greatest on plots with forest floor removed, followed by severely burned plots. Sediment yield on unburned and moderately burned plots was very low to zero. These results are consistent with qualitative observations made following several extreme rainfall events on recent burns in the region.
Monte-Carlo Event Generators for Jet Modification in d(p)-A and A-A Collisions
NASA Astrophysics Data System (ADS)
Kordell, Michael C., III
This work outlines methods to use jet simulations to study both initial and final state nuclear effects in heavy-ion collisions. To study the initial state of heavy-ion collisions, the production of jets and high momentum hadrons from jets, produced in deuteron (d)-Au collisions at the Relativistic Heavy Ion Collider (RHIC) and proton (p)-Pb collisions at the Large Hadron Collider (LHC), is studied as a function of centrality, a measure of the impact parameter of the collision. A modified version of the event generator PYTHIA, widely used to simulate p-p collisions, is used in conjunction with a nuclear Monte-Carlo event generator which simulates the locations of the nucleons within a large nucleus. It is demonstrated how events with a hard jet may be simulated, in such a way that the parton distribution function of the projectile is frozen during its interaction with the extended nucleus. Using this approach, it is demonstrated that the puzzling enhancement seen in peripheral events at RHIC and the LHC, as well as the suppression seen in central events at the LHC, are mainly due to mis-binning of central and semi-central events, containing a jet, as peripheral events. This occurs due to the suppression of soft particle production away from the jet, caused by the depletion of energy available in a nucleon of the deuteron (in d-Au at RHIC) or in the proton (in p-Pb at LHC), after the production of a hard jet. In conclusion, partonic correlations built out of simple energy conservation are responsible for such an effect, though these are sampled at the hard scale of jet production and, as such, represent smaller states. To study final state nuclear effects, the modification of hard jets in the Quark Gluon Plasma (QGP) is simulated using the MATTER event generator. Based on the higher twist formalism of energy loss, the MATTER event generator simulates the evolution of highly virtual partons through a medium.
These partons sampled from an underlying PYTHIA kernel undergo splitting through a combination of vacuum and medium induced emission. The momentum exchange with the medium is simulated via the jet transport coefficient q̂, which is assumed to scale with the entropy density at a given location in the medium. The entropy density is obtained from a relativistic viscous fluid dynamics simulation (VISH2+1D) in 2+1 space time dimensions. Results for jet and hadron observables are presented using an independent fragmentation model.
NASA Astrophysics Data System (ADS)
Zhuang, J.; Vere-Jones, D.; Ogata, Y.; Christophersen, A.; Savage, M. K.; Jackson, D. D.
2008-12-01
In this study we investigate the foreshock probabilities calculated from earthquake catalogs from Japan, Southern California and New Zealand. Unlike conventional studies on foreshocks, we use a probability-based declustering method to separate each catalog into stochastic versions of family trees, such that each event is classified as either having been triggered by a preceding event, or being a spontaneous event. The probabilities are determined from parameters that provide the best fit to the real catalog using a space-time epidemic-type aftershock sequence (ETAS) model. The model assumes that background and triggered earthquakes have the same magnitude-dependent triggering capability. A foreshock here is defined as a spontaneous event that has one or more larger descendants, and a triggered foreshock is a triggered event that has one or more larger descendants. The proportion of foreshocks in spontaneous events of each catalog is found to be lower than the proportion of triggered foreshocks in triggered events. One possibility is that this is due to different triggering productivity in spontaneous versus triggered events, i.e., a triggered event triggers more children than a spontaneous event of the same magnitude. To understand what causes the above differences between spontaneous and triggered events, we apply the same procedures to several synthetic catalogs simulated by using different models. The first simulation is done by using the ETAS model with parameters and spontaneous rate fitted from the JMA catalog. The second synthetic catalog is simulated by using an adjusted ETAS model that takes into account the triggering effect from events below the magnitude threshold. That is, we simulated the catalog with a low magnitude threshold with the original ETAS model, and then we removed the events smaller than a higher magnitude threshold.
The third model for simulation assumes that triggering behavior differs between spontaneous and triggered events. We repeat the fitting and reconstruction procedures on all of these simulated catalogs. The reconstruction results for the first synthetic catalog show neither the difference between spontaneous and triggered events nor the differences in foreshock probabilities. On the other hand, results from the synthetic catalogs simulated with the second and third models clearly reproduce such differences. In summary, our results imply that these differences may be caused by neglecting the triggering effect of events smaller than the cut-off magnitude, or by magnitude errors. For the purpose of forecasting seismicity, we can use a clustering model in which spontaneous events trigger child events differently from triggered events, to avoid over-predicting earthquake risk from foreshocks. To understand the physical implications of this study, further careful comparison is needed between real seismicity and the adjusted ETAS model, which takes into account the triggering effect of events below the cut-off magnitude.
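The foreshock definition above (a spontaneous event with a larger descendant) can be illustrated with a toy branching simulation: spontaneous events draw Gutenberg-Richter magnitudes, and each event triggers a Poisson number of children with magnitude-dependent productivity. A minimal sketch; the parameters `M0`, `B`, `K` and `ALPHA` are illustrative assumptions, not the fitted JMA values, and the full space-time ETAS structure is omitted:

```python
import math
import random

M0 = 4.0                 # magnitude cut-off (assumed)
B = 1.0                  # Gutenberg-Richter b-value (assumed)
K, ALPHA = 0.02, 0.8     # productivity parameters (assumed, subcritical)

def gr_magnitude(rng):
    """Draw a magnitude above M0 from the Gutenberg-Richter law."""
    return M0 + rng.expovariate(B * math.log(10.0))

def _poisson(rng, lam):
    # Knuth's inverse-transform Poisson sampler (adequate for small lam)
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def largest_descendant(rng, mag, depth=0):
    """Largest magnitude among all descendants of an event (-inf if none)."""
    if depth > 20:                     # guard against deep cascades
        return -math.inf
    best = -math.inf
    for _ in range(_poisson(rng, K * 10.0 ** (ALPHA * (mag - M0)))):
        child = gr_magnitude(rng)
        best = max(best, child, largest_descendant(rng, child, depth + 1))
    return best

def foreshock_fraction(n_events=5000, seed=1):
    """Fraction of spontaneous events that have a larger descendant."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_events):
        m = gr_magnitude(rng)
        if largest_descendant(rng, m) > m:
            hits += 1
    return hits / n_events
```

Running the same classification on catalogs simulated with different triggering rules for spontaneous versus triggered parents is what distinguishes the three models described above.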
SEE Transient Response of Crane Interpoint Single Output Point of Load DC-DC Converters
NASA Technical Reports Server (NTRS)
Sanders, Anthony B.; Chen, Dakai; Kim, Hak S.; Phan, Anthony M.
2011-01-01
This study was undertaken to determine the single-event effect and transient susceptibility of the Crane Interpoint Maximum Flexible Power (MFP) Single Output Point of Load DC/DC converters, examining transient interruptions in the output signal as well as destructive and non-destructive events induced by exposure to a heavy-ion beam.
Caffeine and sports performance.
Burke, Louise M
2008-12-01
Athletes are among the groups of people who are interested in the effects of caffeine on endurance and exercise capacity. Although many studies have investigated the effect of caffeine ingestion on exercise, not all are suited to draw conclusions regarding caffeine and sports performance. Characteristics of studies that can better explore the issues of athletes include the use of well-trained subjects, conditions that reflect actual practices in sport, and exercise protocols that simulate real-life events. There is a scarcity of field-based studies and investigations involving elite performers. Researchers are encouraged to use statistical analyses that consider the magnitude of changes, and to establish whether these are meaningful to the outcome of sport. The available literature that follows such guidelines suggests that performance benefits can be seen with moderate amounts (~3 mg·kg⁻¹ body mass) of caffeine. Furthermore, these benefits are likely to occur across a range of sports, including endurance events, stop-and-go events (e.g., team and racquet sports), and sports involving sustained high-intensity activity lasting from 1-60 min (e.g., swimming, rowing, and middle-distance running races). The direct effects on single events involving strength and power, such as lifts, throws, and sprints, are unclear. Further studies are needed to better elucidate the range of protocols (timing and amount of doses) that produce benefits and the range of sports to which these may apply. Individual responses, the politics of sport, and the effects of caffeine on other goals, such as sleep, hydration, and refuelling, also need to be considered.
Transition model for ricin-aptamer interactions with multiple pathways and energy barriers
NASA Astrophysics Data System (ADS)
Wang, Bin; Xu, Bingqian
2014-02-01
We develop a transition model to interpret single-molecule ricin-aptamer interactions with multiple unbinding pathways and energy barriers measured by atomic force microscopy dynamic force spectroscopy. Molecular simulations establish the relationship between binding conformations and the corresponding unbinding pathways. Each unbinding pathway follows a Bell-Evans multiple-barrier model. Markov-type transition matrices are developed to analyze the redistribution of unbinding events among the pathways under different loading rates. Our study provides detailed information about complex behaviors in ricin-aptamer unbinding events.
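The Markov-type redistribution of unbinding events among pathways can be sketched with a row-stochastic transition matrix; the matrix entries and initial pathway shares below are hypothetical placeholders, not the measured statistics:

```python
import numpy as np

# Hypothetical row-stochastic matrix: entry [i, j] is the probability that
# an unbinding event on pathway i appears on pathway j when the loading
# rate is stepped up.  Values are illustrative assumptions.
T = np.array([[0.80, 0.15, 0.05],
              [0.10, 0.80, 0.10],
              [0.05, 0.15, 0.80]])

p0 = np.array([0.60, 0.30, 0.10])   # initial share of events per pathway

def redistribute(p, steps=1):
    """Propagate pathway occupation through `steps` loading-rate increments."""
    for _ in range(steps):
        p = p @ T
    return p
```

Because each row of `T` sums to one, the total probability is conserved while the share of events shifts between pathways, which is the behavior the transition matrices are used to analyze.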
Rainfall simulations on a fire disturbed mediterranean area
NASA Astrophysics Data System (ADS)
Rulli, Maria Cristina; Bozzi, Silvia; Spada, Matteo; Bocchiola, Daniele; Rosso, Renzo
2006-08-01
Rainfall simulator experiments were carried out in the Liguria region, Italy, immediately after a forest fire in early August 2003, to evaluate the effects of forest fire on soil hydraulic properties, runoff and erosion. Two adjacent 30 m² plots were set up with common physiographic features and the same fire history, except for the fire of August 2003, which burned only one of them. Since both plots had previously been subject to the passage of fire in March 1997, one compares the hydrologic and sedimentologic response of an area burned in 2003 (B03) with that of an area burned 6 years before (B97). Each rainfall simulation consisted of a single 60 min application of rainfall with a constant intensity of about 76 mm h⁻¹. The results show runoff ratios, evaluated for different pre-event soil moisture conditions, ranging from 0% to 2% for the B97 plot and from 21% to 41% for B03. The runoff ratio for the recently burned plot was 60 times higher than for the plot burned six years before under wet conditions, and 20 times higher under very wet conditions. A large increase in sediment production was also measured in the B03 plot compared with the B97 plot. Suspended sediment yield from the B03 plot was more than two orders of magnitude higher than that from the B97 plot in all the simulated events. The high soil losses measured immediately after burning indicate that effective post-fire rehabilitation programs must be carried out to reduce soil erosion in recently burned areas. However, the results for the plot burned six years prior show that the hydrological properties of the soil recover after the transient post-fire modification.
Disaster Response Modeling Through Discrete-Event Simulation
NASA Technical Reports Server (NTRS)
Wang, Jeffrey; Gilmer, Graham
2012-01-01
Organizations today are required to plan against a rapidly changing, high-cost environment. This is especially true for first responders to disasters and other incidents, where critical decisions must be made in a timely manner to save lives and resources. Discrete-event simulations enable organizations to make better decisions by visualizing complex processes and the impact of proposed changes before they are implemented. A discrete-event simulation using Simio software has been developed to effectively analyze and quantify the imagery capabilities of domestic aviation resources conducting relief missions. This approach has helped synthesize large amounts of data to better visualize process flows, manage resources, and pinpoint capability gaps and shortfalls in disaster response scenarios. Simulation outputs and results have supported decision makers in the understanding of high risk locations, key resource placement, and the effectiveness of proposed improvements.
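The core mechanism of such a discrete-event simulation, a clock that jumps between scheduled events held in a priority queue, can be sketched independently of Simio. A minimal Python sketch with a hypothetical relief-mission event chain (the event names and durations are invented for illustration):

```python
import heapq
import itertools

_seq = itertools.count()   # tie-breaker so simultaneous events compare cleanly

def schedule(queue, time, name, action=None):
    heapq.heappush(queue, (time, next(_seq), name, action))

def run(queue, horizon=24.0):
    """Pop events in time order; an event's action may schedule follow-ons."""
    log = []
    while queue:
        time, _, name, action = heapq.heappop(queue)
        if time > horizon:
            break
        log.append((time, name))
        if action is not None:
            action(queue, time)
    return log

# Toy relief-mission flow (hypothetical times): launch -> imaging -> return.
def launch(queue, t):
    schedule(queue, t + 2.0, "imagery_complete", image)

def image(queue, t):
    schedule(queue, t + 1.5, "aircraft_returned")

q = []
schedule(q, 0.0, "aircraft_launched", launch)
timeline = run(q)
```

Real models attach resources, queues and statistics to these events; the time-ordered log is what lets analysts trace process flows and spot capability gaps.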
Validation of ground-motion simulations for historical events using SDoF systems
Galasso, C.; Zareian, F.; Iervolino, I.; Graves, R.W.
2012-01-01
The study presented in this paper is among the first in a series of studies toward the engineering validation of the hybrid broadband ground‐motion simulation methodology by Graves and Pitarka (2010). This paper provides a statistical comparison between seismic demands of single degree of freedom (SDoF) systems subjected to past events using simulations and actual recordings. A number of SDoF systems are selected considering the following: (1) 16 oscillation periods between 0.1 and 6 s; (2) elastic case and four nonlinearity levels, from mildly inelastic to severely inelastic systems; and (3) two hysteretic behaviors, in particular, nondegrading–nonevolutionary and degrading–evolutionary. Demand spectra are derived in terms of peak and cyclic response, as well as their statistics for four historical earthquakes: 1979 Mw 6.5 Imperial Valley, 1989 Mw 6.8 Loma Prieta, 1992 Mw 7.2 Landers, and 1994 Mw 6.7 Northridge.
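SDoF demand spectra of this kind are conventionally computed by step-by-step integration of the oscillator's equation of motion. A minimal sketch of the standard average-acceleration Newmark scheme for the elastic case follows (unit mass; the step excitation used in testing is illustrative, not one of the simulated or recorded ground motions):

```python
import math

def newmark_sdof(ag, dt, period, damping=0.05):
    """Peak elastic displacement of a unit-mass SDoF oscillator.

    Average-acceleration Newmark scheme (beta=1/4, gamma=1/2) for
    u'' + 2*zeta*wn*u' + wn^2*u = -ag(t), ag sampled at interval dt.
    """
    wn = 2.0 * math.pi / period
    k = wn * wn
    c = 2.0 * damping * wn
    beta, gamma = 0.25, 0.5
    A, Bc, Cc, D = 1/(beta*dt*dt), 1/(beta*dt), 1/(2*beta) - 1, gamma/(beta*dt)
    keff = A + c * D + k                      # effective stiffness (mass = 1)
    u, v = 0.0, 0.0
    a = -ag[0] - c * v - k * u                # acceleration at t = 0
    peak = abs(u)
    for p in (-g for g in ag[1:]):            # effective load each step
        rhs = (p + A*u + Bc*v + Cc*a
               + c * (D*u - (1 - gamma/beta)*v - dt*(1 - gamma/(2*beta))*a))
        u1 = rhs / keff
        v1 = D*(u1 - u) + (1 - gamma/beta)*v + dt*(1 - gamma/(2*beta))*a
        a1 = A*(u1 - u) - Bc*v - Cc*a
        u, v, a = u1, v1, a1
        peak = max(peak, abs(u))
    return peak
```

Repeating this over the 16 periods, and replacing the elastic restoring force with a hysteretic rule, yields the peak-demand spectra compared in the study.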
Multiple Sensing Application on Wireless Sensor Network Simulation using NS3
NASA Astrophysics Data System (ADS)
Kurniawan, I. F.; Bisma, R.
2018-01-01
Hardware advances make it possible to install multiple sensor devices on a single monitoring node, enabling users to acquire several kinds of data simultaneously. Constructing a multiple-sensing application in NS3 is a challenging task, since aspects such as wireless communication, packet transmission patterns, and the energy model must be taken into account. Although numerous types of monitoring data are available, this study considers only two: periodic and event-based data. Periodic sensing generates monitoring data at a configured interval, while event-based sensing transmits data when a certain predetermined condition is met. This study attempts to cover these aspects in NS3. Several simulations are performed with different numbers of nodes in an arbitrary communication scheme.
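The two traffic types can be sketched outside NS3 (which is C++-based) with a simple per-node generator; the interval, threshold, and reading distribution below are assumptions chosen only to illustrate the distinction:

```python
import random

def traffic(duration, period, threshold, seed=0):
    """Packet log for one node carrying the two data types: a periodic
    sample every `period` seconds, plus an event-based packet whenever a
    simulated reading exceeds `threshold` (all values are assumptions)."""
    rng = random.Random(seed)
    packets = []
    for t in range(duration):
        if t % period == 0:
            packets.append((t, "periodic"))
        if rng.gauss(20.0, 5.0) > threshold:   # stand-in sensor reading
            packets.append((t, "event"))
    return packets
```

Periodic traffic is deterministic and easy to budget for in the energy model, while event-based traffic is bursty, which is why both must be represented when sizing transmission schedules.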
DOE Office of Scientific and Technical Information (OSTI.GOV)
Childers, J. T.; Uram, T. D.; LeCompte, T. J.
As the LHC moves to higher energies and luminosity, the demand for computing resources increases accordingly and will soon outpace the growth of the Worldwide LHC Computing Grid. To meet this greater demand, event generation Monte Carlo was targeted for adaptation to run on Mira, the supercomputer at the Argonne Leadership Computing Facility. Alpgen is a Monte Carlo event generation application that is used by LHC experiments in the simulation of collisions that take place in the Large Hadron Collider. This paper details the process by which Alpgen was adapted from a single-processor serial application to a large-scale parallel application and the performance that was achieved.
Childers, J. T.; Uram, T. D.; LeCompte, T. J.; ...
2016-09-29
Criticality of Low-Energy Protons in Single-Event Effects Testing of Highly-Scaled Technologies
NASA Technical Reports Server (NTRS)
Pellish, Jonathan A.; Marshall, Paul W.; Rodbell, Kenneth P.; Gordon, Michael S.; LaBel, Kenneth A.; Schwank, James R.; Dodds, Nathaniel A.; Castaneda, Carlos M.; Berg, Melanie D.; Kim, Hak S.;
2014-01-01
We report low-energy proton and low-energy alpha-particle single-event effects (SEE) data on 32 nm silicon-on-insulator (SOI) complementary metal oxide semiconductor (CMOS) latches and static random access memory (SRAM) that demonstrate the criticality of using low-energy protons for SEE testing of highly scaled technologies. Low-energy protons produced a significantly higher fraction of multi-bit upsets relative to single-bit upsets than comparable alpha-particle data. This difference highlights the importance of performing hardness-assurance testing with protons whose energy distribution includes components below 2 MeV. The importance of low-energy protons to system-level single-event performance depends on the technology under investigation as well as the target radiation environment.
All-Atom Simulations Reveal How Single-Point Mutations Promote Serpin Misfolding
NASA Astrophysics Data System (ADS)
Wang, Fang; Orioli, Simone; Ianeselli, Alan; Spagnolli, Giovanni; a Beccara, Silvio; Gershenson, Anne; Faccioli, Pietro; Wintrode, Patrick L.
2018-05-01
Protein misfolding is implicated in many diseases, including the serpinopathies. For the canonical inhibitory serpin α1-antitrypsin (A1AT), mutations can result in protein deficiencies leading to lung disease, and misfolded mutants can accumulate in hepatocytes leading to liver disease. Using all-atom simulations based on the recently developed Bias Functional algorithm, we elucidate how wild-type A1AT folds and how the disease-associated S (Glu264Val) and Z (Glu342Lys) mutations lead to misfolding. The deleterious Z mutation disrupts folding at an early stage, while the relatively benign S mutant shows late-stage minor misfolding. A number of suppressor mutations ameliorate the effects of the Z mutation, and simulations on these mutants help to elucidate the relative roles of steric clashes and electrostatic interactions in Z misfolding. These results demonstrate a striking correlation between atomistic events and disease severity and shine light on the mechanisms driving chains away from their correct folding routes.
Radiation Effects in Advanced Multiple Gate and Silicon-on-Insulator Transistors
NASA Astrophysics Data System (ADS)
Simoen, Eddy; Gaillardin, Marc; Paillet, Philippe; Reed, Robert A.; Schrimpf, Ron D.; Alles, Michael L.; El-Mamouni, Farah; Fleetwood, Daniel M.; Griffoni, Alessio; Claeys, Cor
2013-06-01
The aim of this review paper is to describe in a comprehensive manner the current understanding of the radiation response of state-of-the-art Silicon-on-Insulator (SOI) and FinFET CMOS technologies. Total Ionizing Dose (TID) response, heavy-ion microdose effects and single-event effects (SEEs) are discussed. It is shown that a very high TID tolerance can be achieved by narrow-fin SOI FinFET architectures, while bulk FinFETs may exhibit a TID response similar to that of planar devices. Due to the vertical nature of FinFETs, a specific heavy-ion response can be obtained, whereby the angle of incidence becomes highly important with respect to the vertical sidewall gates. With respect to SEEs, the buried oxide in SOI FinFETs suppresses the diffusion tails from charge collection in the substrate compared to bulk FinFET devices. Channel lengths and fin widths are now comparable to, or smaller than, the dimensions of the region affected by the single ionizing ions or lasers used in testing. This gives rise to a high degree of sensitivity to individual device parameters and to source-drain shunting during ion-beam or laser-beam SEE testing. Simulations are used to illuminate the mechanisms observed in radiation testing, and the progress and needs of numerical modeling/simulation of the radiation response of advanced SOI and FinFET transistors are highlighted.
Casciano, Roman; Chulikavit, Maruit; Di Lorenzo, Giuseppe; Liu, Zhimei; Baladi, Jean-Francois; Wang, Xufang; Robertson, Justin; Garrison, Lou
2011-01-01
A recent indirect comparison study showed that sunitinib-refractory metastatic renal cell carcinoma (mRCC) patients treated with everolimus are expected to have improved overall survival outcomes compared to patients treated with sorafenib. This analysis examines the likely cost-effectiveness of everolimus versus sorafenib in this setting from a US payer perspective. A Markov model was developed to simulate a cohort of sunitinib-refractory mRCC patients and to estimate the cost per incremental life-years gained (LYG) and quality-adjusted life-years (QALYs) gained. Markov states included are stable disease without adverse events, stable disease with adverse events, disease progression, and death. Transition probabilities were estimated using a subset of the RECORD-1 patient population receiving everolimus after sunitinib, and a comparable population receiving sorafenib in a single-arm phase II study. Costs of antitumor therapies were based on wholesale acquisition cost. Health state costs accounted for physician visits, tests, adverse events, postprogression therapy, and end-of-life care. The model extrapolated beyond the trial time horizon for up to 6 years based on published trial data. Deterministic and probabilistic sensitivity analyses were conducted. The estimated gain over sorafenib treatment was 1.273 LYs (0.916 QALYs) at an incremental cost of $81,643. The deterministic analysis resulted in an incremental cost-effectiveness ratio (ICER) of $64,155/LYG ($89,160/QALY). The probabilistic sensitivity analysis demonstrated that results were highly consistent across simulations. As the ICER fell within the cost per QALY range for many other widely used oncology medicines, everolimus is projected to be a cost-effective treatment relative to sorafenib for sunitinib-refractory mRCC. Copyright © 2011 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
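The structure of such a Markov cohort model can be sketched in a few lines; the monthly transition probabilities and health-state utilities below are illustrative placeholders, not values derived from RECORD-1 or the sorafenib trial:

```python
import numpy as np

# Hypothetical monthly transition matrix over the four model states:
# stable without AE, stable with AE, progression, death (rows sum to 1).
P = np.array([[0.85, 0.05, 0.08, 0.02],
              [0.10, 0.78, 0.09, 0.03],
              [0.00, 0.00, 0.92, 0.08],
              [0.00, 0.00, 0.00, 1.00]])

UTILITY = np.array([0.80, 0.70, 0.55, 0.00])   # assumed utility weights

def cohort_outcomes(cycles=72, cycle_years=1.0 / 12.0):
    """Accumulate undiscounted life-years and QALYs over a 6-year horizon."""
    state = np.array([1.0, 0.0, 0.0, 0.0])     # cohort starts stable, no AE
    ly = qaly = 0.0
    for _ in range(cycles):
        ly += state[:3].sum() * cycle_years    # fraction alive this cycle
        qaly += float(state @ UTILITY) * cycle_years
        state = state @ P
    return ly, qaly
```

Running the same accumulation with each treatment's transition matrix and costs gives the incremental LYs, QALYs and cost from which the ICER is formed.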
Effects of complex life cycles on genetic diversity: cyclical parthenogenesis
Rouger, R; Reichel, K; Malrieu, F; Masson, J P; Stoeckel, S
2016-01-01
Neutral patterns of population genetic diversity in species with complex life cycles are difficult to anticipate. Cyclical parthenogenesis (CP), in which organisms undergo several rounds of clonal reproduction followed by a sexual event, is one such life cycle. Many species, including crop pests (aphids), human parasites (trematodes) or models used in evolutionary science (Daphnia), are cyclical parthenogens. It is therefore crucial to understand the impact of such a life cycle on neutral genetic diversity. In this paper, we describe distributions of genetic diversity under conditions of CP with various clonal phase lengths. Using a Markov chain model of CP for a single locus and individual-based simulations for two loci, our analysis first demonstrates that strong departures from full sexuality are observed after only a few generations of clonality. The convergence towards predictions made under conditions of full clonality during the clonal phase depends on the balance between mutations and genetic drift. Second, the sexual event of CP usually resets the genetic diversity at a single locus towards predictions made under full sexuality. However, this single recombination event is insufficient to reshuffle gametic phases towards full-sexuality predictions. Finally, for similar levels of clonality, CP and acyclic partial clonality (wherein a fixed proportion of individuals are clonally produced within each generation) differentially affect the distribution of genetic diversity. Overall, this work provides solid predictions of neutral genetic diversity that may serve as a null model in detecting the action of common evolutionary or demographic processes in cyclical parthenogens (for example, selection or bottlenecks). PMID:27436524
NASA Technical Reports Server (NTRS)
Chen, Dakai; Phan, Anthony; Kim, Hak; Swonger, James; Musil, Paul; LaBel, Kenneth
2013-01-01
We show examples of single event functional interrupt and destructive failure in modern POL devices. The increasing complexity and diversity of the design and process introduce hard SEE modes that are triggered by various mechanisms.
Radiation Characteristics of a 0.11 Micrometer Modified Commercial CMOS Process
NASA Technical Reports Server (NTRS)
Poivey, Christian; Kim, Hak; Berg, Melanie D.; Forney, Jim; Seidleck, Christina; Vilchis, Miguel A.; Phan, Anthony; Irwin, Tim; LaBel, Kenneth A.; Saigusa, Rajan K.;
2006-01-01
We present radiation data, Total Ionizing Dose and Single Event Effects, on the LSI Logic 0.11 micron commercial process and two modified versions of this process. Modified versions include a buried layer to guarantee Single Event Latchup immunity.
Evaluation of Precipitation Simulated by Seven SCMs against the ARM Observations at the SGP Site
NASA Technical Reports Server (NTRS)
Song, Hua; Lin, Wuyin; Lin, Yanluan; Wolf, Audrey B.; Neggers, Roel; Donner, Leo J.; Del Genio, Anthony D.; Liu, Yangang
2013-01-01
This study evaluates the performance of seven single-column models (SCMs) by comparing simulated surface precipitation with observations at the Atmospheric Radiation Measurement Program Southern Great Plains (SGP) site from January 1999 to December 2001. Results show that although most SCMs can reproduce the observed precipitation reasonably well, there are significant and interesting differences in their details. In the cold season, the model-observation differences in the frequency and mean intensity of rain events tend to compensate each other for most SCMs. In the warm season, most SCMs produce more rain events in daytime than in nighttime, whereas the observations have more rain events in nighttime. The mean intensities of rain events in these SCMs are much stronger in daytime, but weaker in nighttime, than the observations. The higher frequency of rain events during warm-season daytime in most SCMs is related to the fact that most SCMs produce a spurious precipitation peak in regimes with weak vertical motion but abundant moisture. The models also show distinct daytime versus nighttime biases in simulating significant rain events. In nighttime, all the SCMs have a lower frequency of moderate-to-strong rain events than the observations in both seasons. In daytime, most SCMs have a higher frequency of moderate-to-strong rain events than the observations, especially in the warm season. Further analysis reveals distinct meteorological backgrounds for large underestimation and overestimation events: the former occur in strongly ascending regimes with negative low-level horizontal heat and moisture advection, whereas the latter occur in weakly or moderately ascending regimes with positive low-level horizontal heat and moisture advection.
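The frequency/intensity decomposition used in this evaluation can be sketched directly; the wet-interval threshold is an assumed value:

```python
def event_stats(precip, wet=0.1):
    """Frequency and mean intensity of rain events in a precipitation series.

    precip: precipitation rate per interval (e.g. mm/h); an interval counts
    as a rain event when the rate reaches `wet` (an assumed threshold).
    """
    events = [p for p in precip if p >= wet]
    freq = len(events) / len(precip)
    mean_intensity = sum(events) / len(events) if events else 0.0
    return freq, mean_intensity
```

Since mean precipitation is approximately frequency times mean intensity, a model can match the observed mean while misrepresenting both factors, which is the cold-season compensation noted above.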
Pseudo-global warming controls on the intensity and morphology of extreme convective storm events
NASA Astrophysics Data System (ADS)
Trapp, R. J.
2015-12-01
This research seeks to answer the basic question of how current-day extreme convective storm events might be represented under future anthropogenic climate change. We adapt the "pseudo-global warming" (PGW) methodology employed by Lackmann (2013, 2015) and others, who have investigated flooding and tropical cyclone events under climate change. Here, we exploit coupled atmosphere-ocean GCM data contributed to the CMIP5 archive, and take the mean 3D atmospheric state simulated during May 1990-1999 and subtract it from that simulated during May 2090-2099. Such 3D changes in temperature, humidity, geopotential height, and winds are added to synoptic/meso-scale analyses (NAM-ANL) of specific events, and this modified atmospheric state is then used for initial and boundary conditions for real-data WRF model simulations of the events at high resolution. Comparison of an ensemble of these simulations with control (CTRL) simulations facilitates assessment of PGW effects. In contrast to the robust development of supercellular convection in our CTRL simulations, the combined effects of increased CIN and decreased forcing under PGW led to a failure of convection initiation in many of our ensemble members. Those members that had sufficient matching between the CIN and forcing tended to generate stronger convective updrafts than in the CTRL simulations, because of the relatively higher CAPE under PGW. And, the members with enhanced updrafts also tended to have enhanced vertical rotation. In fact, such mesocyclonic rotation and attendant supercellular morphology were even found in simulations that were driven with PGW-reduced environmental wind shear.
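The PGW perturbation itself is a simple field operation: the GCM climate-change delta (future mean minus current mean) is added to the event analysis before it supplies initial and boundary conditions. A minimal sketch with synthetic arrays; the grid, values, and the +4 K delta are illustrative, and the interpolation of GCM fields to the analysis grid is omitted:

```python
import numpy as np

def pgw_perturb(analysis, gcm_future, gcm_current):
    """Add the GCM climate-change delta to an event analysis field.

    All arrays are assumed to share one grid; in practice the monthly-mean
    GCM states are first interpolated to the analysis grid.
    """
    return analysis + (gcm_future - gcm_current)

# Illustrative 2D temperature fields (K); a uniform +4 K delta is assumed.
t_analysis = np.full((4, 4), 300.0)   # stand-in for the NAM-ANL event state
t_current = np.full((4, 4), 290.0)    # stand-in May 1990-1999 GCM mean
t_future = t_current + 4.0            # stand-in May 2090-2099 GCM mean
t_pgw = pgw_perturb(t_analysis, t_future, t_current)
```

Applying the same operation to humidity, geopotential height and winds keeps the event's synoptic pattern while imposing the projected climate change, which is what the CTRL-versus-PGW comparison isolates.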
Neutron Particle Effects on a Quad-Redundant Flight Control Computer
NASA Technical Reports Server (NTRS)
Eure, Kenneth; Belcastro, Celeste M.; Gray, W. Steven; Gonzalez, Oscar
2003-01-01
This paper describes a single-event upset experiment performed at the Los Alamos National Laboratory. A closed-loop control system consisting of a Quad-Redundant Flight Control Computer (FCC) and a B737 simulator was operated while the FCC was exposed to a neutron beam. The purpose of this test was to analyze the effects of neutron bombardment on avionics control systems operating at altitudes where neutron strikes are probable. The neutron energy spectrum produced at the Los Alamos National Laboratory is similar in shape to the spectrum of atmospheric neutrons but much more intense. The higher intensity results in accelerated life tests that are representative of the actual neutron radiation that a FCC may receive over a period of years.
Cosmic Chandlery with thermonuclear supernovae
Calder, Alan C.; Krueger, Brendan K.; Jackson, A. P.; ...
2017-05-30
Thermonuclear (Type Ia) supernovae are bright stellar explosions, the light curves of which can be calibrated to allow for use as "standard candles" for measuring cosmological distances. Contemporary research investigates how the brightness of an event may be influenced by properties of the progenitor system that follow from properties of the host galaxy such as composition and age. The goals are to better understand systematic effects and to assess the intrinsic scatter in the brightness, thereby reducing uncertainties in cosmological studies. We present the results from ensembles of simulations in the single-degenerate paradigm addressing the influence of age and metallicity on the brightness of an event and compare our results to observed variations of brightness that correlate with properties of the host galaxy. We also present results from "hybrid" progenitor models that incorporate recent advances in stellar evolution.
Run-up Variability due to Source Effects
NASA Astrophysics Data System (ADS)
Del Giudice, Tania; Zolezzi, Francesca; Traverso, Chiara; Valfrè, Giulio; Poggi, Pamela; Parker, Eric J.
2010-05-01
This paper investigates the variability of tsunami run-up at a specific location due to uncertainty in earthquake source parameters. It is important to quantify this 'inter-event' variability for probabilistic assessments of tsunami hazard. In principle, this aspect of variability could be studied by comparing field observations at a single location from a number of tsunamigenic events caused by the same source. As such an extensive dataset does not exist, we decided to study the inter-event variability through numerical modelling. We attempt to answer the question: 'What is the potential variability of tsunami wave run-up at a specific site, for a given magnitude earthquake occurring at a known location?' The uncertainty is expected to arise from the lack of knowledge regarding the specific details of the fault rupture 'source' parameters. The following steps were followed: the statistical distributions of the main earthquake source parameters affecting the tsunami height were established by studying fault plane solutions of known earthquakes; a case study based on a possible tsunami impact on the Egyptian coast was set up and simulated, varying the geometrical parameters of the source; simulation results were analyzed to derive relationships between run-up height and source parameters; and, using the derived relationships, a Monte Carlo simulation was performed to create the dataset needed to investigate the inter-event variability of the run-up height along the coast. Given the distribution of source parameters and their variability, we studied how this variability propagates to the run-up height, using the Cornell 'Multi-grid coupled Tsunami Model' (COMCOT). The case study was based on the large thrust faulting offshore the south-western Greek coast, thought to have been responsible for the infamous 1303 tsunami.
Numerical modelling of the event was used to assess the impact on the North African coast. The effects of uncertainty in fault parameters were assessed by perturbing the base model and observing the variation in wave height along the coast. The tsunami wave run-up was computed at 4020 locations along the Egyptian coast between longitudes 28.7 E and 33.8 E. To assess the effects of fault parameter uncertainty, input model parameters were varied and the effects on run-up analyzed. The simulations show that, for a given point, there are linear relationships between run-up and both fault dislocation and rupture length. A superposition analysis shows that a linear combination of the effects of the different source parameters leads to a good approximation of the simulated results. This relationship is then used as the basis for a Monte Carlo simulation, performed for 1600 scenarios at each of the 4020 points along the coast. The coefficient of variation (the ratio between the standard deviation and the average of the run-up heights along the coast) ranges between 0.14 and 3.11, with an average value along the coast of 0.67. The coefficient of variation of normalized run-up has been compared with the standard deviation of the spectral-acceleration attenuation laws used in probabilistic seismic hazard assessment studies. These values have a similar meaning, and the uncertainty in the two cases is similar. The 'rule of thumb' relationship between mean and sigma can be expressed as μ + σ ≈ 2μ, i.e., σ ≈ μ. The implication is that the uncertainty in run-up estimation should give a range of values within approximately two times the average. This uncertainty should be considered in tsunami hazard analysis, such as inundation and risk maps, evacuation plans and the other related steps.
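The Monte Carlo step can be sketched as follows: sample source parameters, map them to run-up through a fitted linear relationship, and compute the coefficient of variation. The linear coefficients and parameter distributions below are hypothetical placeholders, not the values derived in the study:

```python
import random
import statistics

def runup(dislocation, length, a=0.8, b=2e-6, c=0.5):
    """Hypothetical linear model R = a*D + b*L + c, following the finding
    that run-up at a point varies linearly with dislocation and length."""
    return a * dislocation + b * length + c

def runup_cv(n=1600, seed=0):
    """Coefficient of variation of run-up over sampled source parameters."""
    rng = random.Random(seed)
    samples = [
        runup(rng.lognormvariate(1.0, 0.4),   # fault dislocation (m), assumed
              rng.uniform(80e3, 160e3))       # rupture length (m), assumed
        for _ in range(n)
    ]
    return statistics.stdev(samples) / statistics.mean(samples)
```

Repeating this at every coastal point, with the location-specific linear coefficients, produces the along-coast distribution of the coefficient of variation reported above.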
Mixed response and time-to-event endpoints for multistage single-arm phase II design.
Lai, Xin; Zee, Benny Chung-Ying
2015-06-04
The objective of phase II cancer clinical trials is to determine if a treatment has sufficient activity to warrant further study. The efficiency of a conventional phase II trial design has been the object of considerable debate, particularly when the study regimen is characteristically cytostatic. At the time of development of a phase II cancer trial, we accumulated clinical experience regarding the time to progression (TTP) for similar classes of drugs and for standard therapy. By considering the time to event (TTE) in addition to the tumor response endpoint, a mixed-endpoint phase II design may increase the efficiency and ability of selecting promising cytotoxic and cytostatic agents for further development. We proposed a single-arm phase II trial design by extending the Zee multinomial method to fully use mixed endpoints with tumor response and the TTE. In this design, the dependence between the probability of response and the TTE outcome is modeled through a Gaussian copula. Given the type I and type II errors and the hypothesis as defined by the response rate (RR) and median TTE, such as median TTP, the decision rules for a two-stage phase II trial design can be generated. We demonstrated through simulation that the proposed design has a smaller expected sample size and higher early stopping probability under the null hypothesis than designs based on a single-response endpoint or a single TTE endpoint. The proposed design is more efficient for screening new cytotoxic or cytostatic agents and less likely to miss an effective agent than the alternative single-arm design.
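The Gaussian-copula coupling of a binary response and an exponential time-to-event can be sketched as follows; the response rate, median TTE and correlation are illustrative inputs, not the operating characteristics of the published design:

```python
import math
import random

def sample_mixed(n, p_resp, median_tte, rho, seed=0):
    """Draw correlated (response, time-to-event) pairs via a Gaussian copula.

    Two standard normals with correlation rho are transformed marginally:
    one is thresholded to a binary tumour response with rate p_resp, the
    other mapped to an exponential TTE with the given median.
    """
    rng = random.Random(seed)
    lam = math.log(2.0) / median_tte
    cut = _norm_ppf(1.0 - p_resp)          # response iff z1 exceeds this
    pairs = []
    for _ in range(n):
        z1 = rng.gauss(0.0, 1.0)
        z2 = rho * z1 + math.sqrt(1.0 - rho * rho) * rng.gauss(0.0, 1.0)
        u = min(_norm_cdf(z2), 1.0 - 1e-12)
        tte = -math.log(1.0 - u) / lam     # inverse-CDF exponential draw
        pairs.append((z1 > cut, tte))
    return pairs

def _norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def _norm_ppf(p, lo=-10.0, hi=10.0):
    # bisection inverse of the normal CDF; adequate for a sketch
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if _norm_cdf(mid) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

With a positive rho, responders tend to have longer event times, so simulating trial operating characteristics from such draws captures the dependence between the two endpoints that a single-endpoint design ignores.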
NASA Astrophysics Data System (ADS)
Lawrence, Thomas; Long, Antony J.; Gehrels, W. Roland; Jackson, Luke P.; Smith, David E.
2016-11-01
The most significant climate cooling of the Holocene is centred on 8.2 kyr BP (the '8.2 event'). Its cause is widely attributed to an abrupt slowdown of the Atlantic Meridional Overturning Circulation (AMOC) associated with the sudden drainage of Laurentide proglacial Lakes Agassiz and Ojibway, but model simulations have difficulty reproducing the event with a single-pulse scenario of freshwater input. Several lines of evidence point to multiple episodes of freshwater release from the decaying Laurentide Ice Sheet (LIS) between ∼8900 and ∼8200 cal yr BP, yet the precise number, timing and magnitude of these events - critical constraints for AMOC simulations - are far from resolved. Here we present a high-resolution relative sea level (RSL) record for the period 8800 to 7800 cal yr BP developed from estuarine and salt-marsh deposits in SW Scotland. We find that RSL rose abruptly in three steps by 0.35 m, 0.7 m and 0.4 m (mean) at 8760-8640, 8595-8465 and 8323-8218 cal yr BP, respectively. The timing of these RSL steps correlates closely with short-lived events expressed in North Atlantic proxy climate and oceanographic records, providing evidence of at least three distinct episodes of enhanced meltwater discharge from the decaying LIS prior to the 8.2 event. Our observations can be used to test the fidelity of both climate and ice-sheet models in simulating abrupt change during the early Holocene.
NASA Astrophysics Data System (ADS)
Caldwell, Douglas Wyche
Commercial microcontrollers--monolithic integrated circuits containing microprocessor, memory and various peripheral functions--such as are used in industrial, automotive and military applications, present spacecraft avionics system designers an appealing mix of higher performance and lower power together with faster system-development time and lower unit costs. However, these parts are not radiation-hardened for application in the space environment and Single-Event Effects (SEE) caused by high-energy, ionizing radiation present a significant challenge. Mitigating these effects with techniques which require minimal additional support logic, and thereby preserve the high functional density of these devices, can allow their benefits to be realized. This dissertation uses fault-tolerance to mitigate the transient errors and occasional latchups that non-hardened microcontrollers can experience in the space radiation environment. Space systems requirements and the historical use of fault-tolerant computers in spacecraft provide context. Space radiation and its effects in semiconductors define the fault environment. A reference architecture is presented which uses two or three microcontrollers with a combination of hardware and software voting techniques to mitigate SEE. A prototypical spacecraft function (an inertial measurement unit) is used to illustrate the techniques and to explore how real application requirements impact the fault-tolerance approach. Low-cost approaches which leverage features of existing commercial microcontrollers are analyzed. A high-speed serial bus is used for voting among redundant devices and a novel wire-OR output voting scheme exploits the bidirectional controls of I/O pins. A hardware testbed and prototype software were constructed to evaluate two- and three-processor configurations. Simulated Single-Event Upsets (SEUs) were injected at high rates and the response of the system monitored. 
The resulting statistics were used to evaluate technical effectiveness. Fault-recovery probabilities (coverages) higher than 99.99% were experimentally demonstrated. The greater-than-thousand-fold reduction in observed effects provides performance comparable with the SEE tolerance of tested radiation-hardened devices. Technical results were combined with cost data to assess the cost-effectiveness of the techniques. A three-processor system was found to be only marginally more effective than a two-device system at detecting and recovering from faults, while consuming substantially more resources, suggesting that simpler configurations are generally more cost-effective.
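The two- versus three-device trade-off can be illustrated with a minimal software sketch. This is not the dissertation's wire-OR hardware scheme; the function name and the exact two-vs-three-device semantics shown are assumptions:

```python
def vote(readings):
    """Majority vote across redundant processor outputs (integers).

    A sketch of the voting idea only: two devices can merely detect
    disagreement (returning one value plus an agreement flag), while
    three devices outvote a single faulted value bit by bit."""
    if len(readings) == 2:
        a, b = readings
        return a, a == b                         # (value, agreement flag)
    a, b, c = readings
    majority = (a & b) | (a & c) | (b & c)       # per-bit 2-of-3 vote
    return majority, a == b == c
```

The per-bit 2-of-3 expression corrects any single corrupted word, whereas the duplex case can only flag the fault, mirroring the finding that triplication adds detection-and-recovery capability at a substantial resource cost.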
Brooks, John M; Chapman, Cole G; Schroeder, Mary C
2018-06-01
Patient-centred care requires evidence of treatment effects across many outcomes. Outcomes can be beneficial (e.g. increased survival or cure rates) or detrimental (e.g. adverse events, pain associated with treatment, treatment costs, time required for treatment). Treatment effects may also be heterogeneous across outcomes and across patients. Randomized controlled trials are usually insufficient to supply evidence across outcomes. Observational data analysis is an alternative, with the caveat that the treatments observed are choices. Real-world treatment choice often involves complex assessment of expected effects across the array of outcomes. Failure to account for this complexity when interpreting treatment effect estimates could lead to clinical and policy mistakes. Our objective was to assess the properties of treatment effect estimates based on choice when treatments have heterogeneous effects on both beneficial and detrimental outcomes across patients. Simulation methods were used to highlight the sensitivity of treatment effect estimates to the distributions of treatment effects across patients across outcomes. Scenarios with alternative correlations between benefit and detriment treatment effects across patients were used. Regression and instrumental variable estimators were applied to the simulated data for both outcomes. True treatment effect parameters are sensitive to the relationships of treatment effectiveness across outcomes in each study population. In each simulation scenario, treatment effect estimate interpretations for each outcome are aligned with results shown previously in single outcome models, but these estimates vary across simulated populations with the correlations of treatment effects across patients across outcomes. If estimator assumptions are valid, estimates across outcomes can be used to assess the optimality of treatment rates in a study population. 
However, because true treatment effect parameters are sensitive to correlations of treatment effects across outcomes, decision makers should be cautious about generalizing estimates to other populations.
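The core sensitivity can be illustrated with a toy simulation in which treatment choice depends on patient-specific benefit and detriment effects. All distributions, parameters, and names are invented for illustration, not taken from the paper:

```python
import random

def simulate(n=20000, rho=-0.5, seed=7):
    """Toy population in which each patient has a beneficial-outcome effect b
    and a correlated detrimental-outcome effect d, and treatment is chosen
    when b exceeds d. The point is only that the treated subset is selected,
    so the treated-group average effect differs from the population-average
    effect; the sign and size of the gap move with the correlation rho."""
    rng = random.Random(seed)
    pop, treated = [], []
    for _ in range(n):
        b = rng.gauss(1.0, 1.0)                       # benefit effect
        d = rho * (b - 1.0) + rng.gauss(0.5, 1.0)     # correlated detriment
        pop.append(b)
        if b > d:                                     # choice based on net effect
            treated.append(b)
    avg = lambda xs: sum(xs) / len(xs)
    return avg(pop), avg(treated)
```

Re-running with different `rho` values shows the treated-vs-population gap changing with the benefit-detriment correlation, which is the mechanism behind the paper's warning about generalizing estimates across populations.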
Single Event Effects: Space and Atmospheric Environments
NASA Technical Reports Server (NTRS)
Barth, Janet L.
2003-01-01
The paper discusses the following: 1. Sun-Earth connections. 2. Heavy ions: galactic cosmic rays; solar particle events. 3. Protons: solar particle events; trapped. 4. Atmospheric neutrons. 5. Summary.
NASA Technical Reports Server (NTRS)
Zoutendyk, J. A.; Smith, L. S.; Soli, G. A.; Lo, R. Y.
1987-01-01
SEU modeling has been performed for a CMOS static RAM containing 1-micron-channel-length transistors fabricated in a p-well epilayer process, using both circuit-simulation and numerical-simulation techniques. The modeling results have been experimentally verified with the aid of heavy-ion beams from a three-stage tandem Van de Graaff accelerator. Experimental evidence for a novel SEU mode in an ON n-channel device is presented.
An Event-Based Approach to Design a Teamwork Training Scenario and Assessment Tool in Surgery.
Nguyen, Ngan; Watson, William D; Dominguez, Edward
2016-01-01
Simulation is a technique recommended for teaching and measuring teamwork, but few published methodologies are available on how best to design simulation for teamwork training in surgery and health care in general. The purpose of this article is to describe a general methodology, called the event-based approach to training (EBAT), to guide the design of simulation for teamwork training, and to discuss its application to surgery. The EBAT methodology draws on the science of training by systematically introducing training exercise events that are linked to training requirements (i.e., competencies being trained and learning objectives) and performance assessment. Of the 4 teamwork competencies endorsed by the Agency for Healthcare Research and Quality and the Department of Defense, "communication" was chosen to be the focus of our training efforts. A total of 5 learning objectives were defined based on 5 validated teamwork and communication techniques. Diagnostic laparoscopy was chosen as the clinical context to frame the training scenario, and 29 targeted knowledge, skills, and attitudes (KSAs) were defined based on a review of the published literature on patient safety and input from subject matter experts. Critical events included those that correspond to a specific phase in the normal flow of a surgical procedure as well as clinical events that may occur when performing the operation. As with the targeted KSAs, targeted responses to the critical events were developed from the existing literature and input from content experts. Finally, a 29-item EBAT-derived checklist was created to assess communication performance. Like any instructional tool, simulation is only effective if it is designed and implemented appropriately.
It is recognized that the effectiveness of simulation depends on whether (1) it is built upon a theoretical framework, (2) it uses preplanned structured exercises or events to allow learners the opportunity to exhibit the targeted KSAs, (3) it assesses performance, and (4) it provides formative and constructive feedback to bridge the gap between the learners' KSAs and the targeted KSAs. The EBAT methodology guides the design of simulation that incorporates these 4 features and, thus, enhances training effectiveness with simulation. Copyright © 2015 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Warburton, Thomas Karl
2017-01-01
The Deep Underground Neutrino Experiment (DUNE) is a next-generation neutrino experiment which will be built at the Sanford Underground Research Facility (SURF) and will receive a wide-band neutrino beam from Fermilab, 1300 km away. At this baseline DUNE will be able to study many of the properties of neutrino mixing, including the neutrino mass hierarchy and the value of the CP-violating complex phase (δ_CP). DUNE will utilise Liquid Argon Time Projection Chamber (LArTPC) technology, and the Far Detector (FD) will consist of four modules, each containing 17.1 kt of LAr with a fiducial mass of around 10 kt. Each of these FD modules represents around an order of magnitude increase in size when compared to existing LArTPC experiments. The 35 ton detector is the first DUNE prototype for the single-phase LAr design of the FD. There were two running periods, one from November 2013 to February 2014, and a second from November 2015 to March 2016. During the second running period, a system of TPCs was installed and cosmic-ray data were collected. A method of particle identification was developed using simulations, though this was not applied to the data due to the higher-than-expected noise level. A new method of determining the interaction time of a track, using the effects of longitudinal diffusion, was developed using the cosmic-ray data. A camera system was also installed in the detector for monitoring purposes and to look for high-voltage breakdowns. Simulations concerning the muon-induced background rate to nucleon decay are performed, following the incorporation of the MUon Simulations UNderground (MUSUN) generator into the DUNE software framework. A series of cuts based on Monte Carlo truth information is developed, designed to reject simulated background events whilst preserving simulated signal events in the n → K⁺ + e⁻ decay channel.
No background events survive the application of these cuts in a sample of 2 × 10⁹ muons, representing 401.6 years of detector live time. This corresponds to an annual background rate of < 0.44 events·Mt⁻¹·year⁻¹ at 90% confidence, using a fiducial mass of 13.8 kt.
An iterative matching and locating technique for borehole microseismic monitoring
NASA Astrophysics Data System (ADS)
Chen, H.; Meng, X.; Niu, F.; Tang, Y.
2016-12-01
Microseismic monitoring has been proven to be an effective and valuable technology for imaging hydraulic fracture geometry. The success of hydraulic fracturing monitoring relies on the detection and characterization (i.e., location and focal mechanism estimation) of as many induced microseismic events as possible. All the events are important for quantifying the stimulated reservoir volume (SRV) and characterizing the newly created fracture network. Detecting and locating low-magnitude events, however, is notoriously difficult, particularly in a noisy production environment. Here we propose an iterative matching and locating technique (iMLT) to obtain a maximum detection of small events and the best determination of their locations from continuous data recorded by a single-azimuth downhole geophone array. Because the downhole array is located at a single azimuth, regular M&L using P-wave cross-correlation alone is unable to resolve the location of a matched event relative to the template event. We therefore introduce the polarization direction into the matching, which significantly improves the lateral resolution of the M&L method, based on numerical simulations with synthetic data. Our synthetic tests further indicate that the inclusion of S-wave cross-correlation data can help better constrain the focal depth of the matched events. We apply this method to a dataset recorded during a hydraulic fracturing treatment of a pilot horizontal well within the shale play in southwest China. Our approach yields a more than fourfold increase in the number of located events compared with the original event catalog from traditional downhole processing.
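The matched-filter core of M&L-style detection can be sketched with plain normalized cross-correlation. The polarization and S-wave terms the authors add are omitted; this is an illustrative sketch only:

```python
def ncc(template, trace):
    """Normalized cross-correlation of a short template waveform against a
    longer continuous trace, returning one coefficient per lag. A matched
    event would be declared wherever the coefficient exceeds a detection
    threshold. Pure-Python for clarity, not speed."""
    m = len(template)
    t_mean = sum(template) / m
    t0 = [x - t_mean for x in template]
    t_norm = sum(x * x for x in t0) ** 0.5
    coeffs = []
    for lag in range(len(trace) - m + 1):
        w = trace[lag:lag + m]
        w_mean = sum(w) / m
        w0 = [x - w_mean for x in w]
        w_norm = sum(x * x for x in w0) ** 0.5 or 1e-12  # guard flat windows
        coeffs.append(sum(a * b for a, b in zip(t0, w0)) / (t_norm * w_norm))
    return coeffs
```

A coefficient of 1.0 marks an exact (scaled) copy of the template; real detections use a lower threshold, and the paper's contribution is to supplement this time-domain similarity with polarization direction so that single-azimuth geometry does not leave the relative location unresolved.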
The Effects of Cognitive Readiness in a Surface Warfare Simulation
ERIC Educational Resources Information Center
Ayala, Donna
2008-01-01
This study investigated the effects of cognitive readiness in a Navy simulated environment, the simulation being the Multi-Mission Team Trainer. The research question that drove this study was: will simulations increase cognitive readiness? One of the tasks of Navy sailors is to deal with unpredictable events. Unpredictability in the military is…
NASA Technical Reports Server (NTRS)
D'Souza, Christopher; Milenkovich, Zoran; Wilson, Zachary; Huich, David; Bendle, John; Kibler, Angela
2011-01-01
The Space Operations Simulation Center (SOSC) at the Lockheed Martin (LM) Waterton Campus in Littleton, Colorado is a dynamic test environment focused on Autonomous Rendezvous and Docking (AR&D) development testing and risk reduction activities. The SOSC supports multiple program pursuits and accommodates testing Guidance, Navigation, and Control (GN&C) algorithms for relative navigation, hardware testing and characterization, as well as software and test process development. The SOSC consists of a high bay (60 meters long by 15.2 meters wide by 15.2 meters tall) with dual six degree-of-freedom (6DOF) motion simulators and a single fixed base 6DOF robot. The large testing area (maximum sensor-to-target effective range of 60 meters) allows for large-scale, flight-like simulations of proximity maneuvers and docking events. The facility also has two apertures for access to external extended-range outdoor target test operations. In addition, the facility contains four Mission Operations Centers (MOCs) with connectivity to dual high bay control rooms and a data/video interface room. The high bay is rated at Class 300,000 cleanliness (maximum particles ≥ 0.5 μm per m³) and includes orbital lighting simulation capabilities.
Performance of the SWEEP model affected by estimates of threshold friction velocity
USDA-ARS?s Scientific Manuscript database
The Wind Erosion Prediction System (WEPS) is a process-based model and needs to be verified under a broad range of climatic, soil, and management conditions. Occasional failure of the WEPS erosion submodel (Single-event Wind Erosion Evaluation Program or SWEEP) to simulate erosion in the Columbia Pl...
Radiation Effects of Commercial Resistive Random Access Memories
NASA Technical Reports Server (NTRS)
Chen, Dakai; LaBel, Kenneth A.; Berg, Melanie; Wilcox, Edward; Kim, Hak; Phan, Anthony; Figueiredo, Marco; Buchner, Stephen; Khachatrian, Ani; Roche, Nicolas
2014-01-01
We present results for the single-event effect response of commercial production-level resistive random access memories. We found that the resistive memory arrays are immune to heavy ion-induced upsets. However, the devices were susceptible to single-event functional interrupts, due to upsets from the control circuits. The intrinsic radiation tolerant nature of resistive memory makes the technology an attractive consideration for future space applications.
Single-Event Effects in Silicon Carbide Power Devices
NASA Technical Reports Server (NTRS)
Lauenstein, Jean-Marie; Casey, Megan C.; LaBel, Kenneth A.; Ikpe, Stanley; Topper, Alyson D.; Wilcox, Edward P.; Kim, Hak; Phan, Anthony M.
2015-01-01
This report summarizes the NASA Electronic Parts and Packaging Program Silicon Carbide Power Device Subtask efforts in FY15. Benefits of SiC are described and example NASA Programs and Projects desiring this technology are given. The current status of the radiation tolerance of silicon carbide power devices is given, and paths forward in the effort to develop heavy-ion single-event-effect-hardened devices are indicated.
Single event effects in pulse width modulation controllers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Penzin, S.H.; Crain, W.R.; Crawford, K.B.
1996-12-01
SEE testing was performed on pulse width modulation (PWM) controllers, which are commonly used in switching-mode power supply systems. The devices are designed using both Set-Reset (SR) flip-flops and Toggle (T) flip-flops, which are vulnerable to single event upset (SEU) in a radiation environment. Depending on the implementation of the different devices, the effect can be significant in spaceflight hardware.
Single Event Effect Hardware Trojans with Remote Activation
2017-03-01
Quintana, Paul A.; McCollum, John; Hill, William A. (Microsemi Corporation, San Jose...)
...kinetically as in the SDI approach. These high-energy directed-energy weapons have been studied and developed largely for the purpose of remote sensing and... In space-qualified semiconductors, the use of SEE-sensitive circuits may represent a latent and remotely triggered hardware Trojan which would be...
Di Marino, Daniele; Oteri, Francesco; Morozzo Della Rocca, Blasco; Chillemi, Giovanni; Falconi, Mattia
2010-12-01
Molecular dynamics simulations of the wild-type bovine ADP/ATP mitochondrial carrier, and of the single Ala113Pro and double Ala113Pro/Val180Met mutants, embedded in a lipid bilayer, have been carried out for 30 ns to shed light on the structural-dynamical changes induced by the Val180Met mutation, which restores carrier function in the Ala113Pro pathologic mutant. Principal component analysis indicates that, for the three systems, the protein dynamics is mainly characterized by the motion of the matrix loops and of the odd-numbered helices having a conserved proline in their central region. Analysis of the motions shows a different behaviour of the single pathological mutant with respect to the other two systems. The single mutation induces a regularization and rigidity of the H3 helix, lost upon the introduction of the second mutation. This is directly correlated to the salt-bridge distribution involving residues Arg79, Asp134 and Arg234, hypothesized to interact with the substrate. Indeed, in the wild-type simulation two stable inter-helix salt bridges, crucial for substrate binding, are present over almost all the simulation time. In line with the impaired ADP transport, one salt interaction is lost in the single-mutant trajectory but reappears in the double-mutant simulation, where a salt-bridge network matching the wild type is restored. Other important structural-dynamical properties, such as trans-membrane helix mobility analyzed via principal component analysis, are similar for the wild type and double mutant but differ for the single mutant, providing a mechanistic explanation for their different functional properties. Copyright © 2010 Elsevier Inc. All rights reserved.
Laner, Monika; Horta, Bruno A C; Hünenberger, Philippe H
2015-02-01
The occurrence of long-timescale motions in glycerol-1-monopalmitate (GMP) lipid bilayers is investigated based on previously reported 600 ns molecular dynamics simulations of a 2×8×8 GMP bilayer patch in the temperature range 302-338 K, performed at three different hydration levels, or in the presence of the cosolutes methanol or trehalose at three different concentrations. The types of long-timescale motions considered are: (i) the possible phase transitions; (ii) the precession of the relative collective tilt-angle of the two leaflets in the gel phase; (iii) the trans-gauche isomerization of the dihedral angles within the lipid aliphatic tails; and (iv) the flipping of single lipids across the two leaflets. The results provide a picture of GMP bilayers involving a rich spectrum of events occurring on a wide range of timescales, from the 100-ps range isomerization of single dihedral angles, via the 100-ns range of tilt precession motions, to the multi-μs range of phase transitions and lipid-flipping events. Copyright © 2014 Elsevier Inc. All rights reserved.
Electron-induced single event upsets in 28 nm and 45 nm bulk SRAMs
Trippe, J. M.; Reed, R. A.; Austin, R. A.; ...
2015-12-01
In this study, we present experimental evidence of single electron-induced upsets in commercial 28 nm and 45 nm CMOS SRAMs from a monoenergetic electron beam. Upsets were observed in both technology nodes when the SRAM was operated in a low power state. The experimental cross section depends strongly on both bias and technology node feature size, consistent with previous work in which SRAMs were irradiated with low energy muons and protons. Accompanying simulations demonstrate that δ-rays produced by the primary electrons are responsible for the observed upsets. Additional simulations predict the on-orbit event rates for various Earth and Jovian environments for a set of sensitive volumes representative of current technology nodes. The electron contribution to the total upset rate for Earth environments is significant for critical charges as high as 0.2 fC. This value is comparable to that of sub-22 nm bulk SRAMs. Similarly, for the Jovian environment, the electron-induced upset rate is larger than the proton-induced upset rate for critical charges as high as 0.3 fC.
Simulation of quantity and quality of storm runoff for urban catchments in Fresno, California
Guay, J.R.; Smith, P.E.
1988-01-01
Rainfall-runoff models were developed for a multiple-dwelling residential catchment (2 applications), a single-dwelling residential catchment, and a commercial catchment in Fresno, California, using the U.S. Geological Survey Distributed Routing Rainfall-Runoff Model (DR3M-II). A runoff-quality model was also developed for the commercial catchment using the Survey's Multiple-Event Urban Runoff Quality model (DR3M-qual). The purpose of this study was (1) to demonstrate the capabilities of the two models for use in designing storm drains, estimating the frequency of storm runoff loads, and evaluating the effectiveness of street sweeping on an urban drainage catchment; and (2) to determine the simulation accuracies of these models. Simulation errors of the two models were summarized as the median absolute deviation in percent (mad) between measured and simulated values. Calibration and verification mad errors for runoff volumes and peak discharges ranged from 14 to 20%. The estimated annual storm-runoff loads, in pounds per acre of effective impervious area, that could occur once every hundred years at the commercial catchment were 95 for dissolved solids, 1.6 for dissolved nitrite plus nitrate, 0.31 for total recoverable lead, and 120 for suspended sediment. Calibration and verification mad errors for these constituents ranged from 11 to 54%. (USGS)
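The error statistic quoted above can be sketched as follows. The report does not give its exact formula, so this implementation (percent deviation relative to the measured value, then the median) is an assumption:

```python
def mad_percent(measured, simulated):
    """Median absolute deviation, in percent, between measured and simulated
    values: the summary error statistic used to compare model runs.
    Illustrative implementation; the study's exact definition may differ."""
    devs = sorted(abs(s - m) / m * 100.0 for m, s in zip(measured, simulated))
    n = len(devs)
    mid = n // 2
    return devs[mid] if n % 2 else (devs[mid - 1] + devs[mid]) / 2.0
```

Using the median rather than the mean keeps a single badly simulated storm from dominating the summary, which suits event-based calibration records.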
DOE Office of Scientific and Technical Information (OSTI.GOV)
Elliott, Elizabeth J.; Yu, Sungduk; Kooperman, Gabriel J.
The sensitivities of simulated mesoscale convective systems (MCSs) in the central U.S. to microphysics and grid configuration are evaluated here in a global climate model (GCM) that also permits global-scale feedbacks and variability. Since conventional GCMs do not simulate MCSs, studying their sensitivities in a global framework useful for climate change simulations has not previously been possible. To date, MCS sensitivity experiments have relied on controlled cloud resolving model (CRM) studies with limited domains, which avoid internal variability and neglect feedbacks between local convection and larger-scale dynamics. However, recent work with superparameterized (SP) GCMs has shown that eastward propagating MCS-like events are captured when embedded CRMs replace convective parameterizations. This study uses a SP version of the Community Atmosphere Model version 5 (SP-CAM5) to evaluate MCS sensitivities, applying an objective empirical orthogonal function algorithm to identify MCS-like events, and harmonizing composite storms to account for seasonal and spatial heterogeneity. A five-summer control simulation is used to assess the magnitude of internal and interannual variability relative to 10 sensitivity experiments with varied CRM parameters, including ice fall speed, one-moment and two-moment microphysics, and grid spacing. MCS sensitivities were found to be subtle with respect to internal variability, and indicate that ensembles of over 100 storms may be necessary to detect robust differences in SP-GCMs. Furthermore, these results emphasize that the properties of MCSs can vary widely across individual events, and improving their representation in global simulations with significant internal variability may require comparison to long (multidecadal) time series of observed events rather than single season field campaigns.
Velleux, Mark L; Julien, Pierre Y; Rojas-Sanchez, Rosalia; Clements, William H; England, John F
2006-11-15
The transport and toxicity of metals at the California Gulch, Colorado mine-impacted watershed were simulated with a spatially distributed watershed model. Using a database of observations for the period 1984-2004, hydrology, sediment transport, and metals transport were simulated for a June 2003 calibration event and a September 2003 validation event. Simulated flow volumes were within approximately 10% of observed conditions. Observed ranges of total suspended solids, cadmium, copper, and zinc concentrations were also successfully simulated. The model was then used to simulate the potential impacts of a 1-in-100-year rainfall event. Driven by large flows and corresponding soil and sediment erosion for the 1-in-100-year event, estimated solids and metals export from the watershed is 10,000 metric tons for solids, 215 kg for Cd, 520 kg for Cu, and 15,300 kg for Zn. As expressed by the cumulative criterion unit (CCU) index, metals concentrations far exceed toxic-effects thresholds, suggesting a high probability of toxic effects downstream of the gulch. More detailed Zn source analyses suggest that much of the Zn exported from the gulch originates from slag piles adjacent to the lower gulch floodplain and an old mining site located near the head of the lower gulch.
Thorlund, Kristian; Imberger, Georgina; Walsh, Michael; Chu, Rong; Gluud, Christian; Wetterslev, Jørn; Guyatt, Gordon; Devereaux, Philip J.; Thabane, Lehana
2011-01-01
Background Meta-analyses including a limited number of patients and events are prone to yield overestimated intervention effect estimates. While many assume bias is the cause of overestimation, theoretical considerations suggest that random error may be an equal or more frequent cause. The independent impact of random error on meta-analyzed intervention effects has not previously been explored. It has been suggested that surpassing the optimal information size (i.e., the required meta-analysis sample size) provides sufficient protection against overestimation due to random error, but this claim has not yet been validated. Methods We simulated a comprehensive array of meta-analysis scenarios where no intervention effect existed (i.e., relative risk reduction (RRR) = 0%) or where a small but possibly unimportant effect existed (RRR = 10%). We constructed different scenarios by varying the control group risk, the degree of heterogeneity, and the distribution of trial sample sizes. For each scenario, we calculated the probability of observing overestimates of RRR>20% and RRR>30% for each cumulative 500 patients and 50 events. We calculated the cumulative number of patients and events required to reduce the probability of overestimation of intervention effect to 10%, 5%, and 1%. We calculated the optimal information size for each of the simulated scenarios and explored whether meta-analyses that surpassed their optimal information size had sufficient protection against overestimation of intervention effects due to random error. Results The risk of overestimation of intervention effects was usually high when the number of patients and events was small and this risk decreased exponentially over time as the number of patients and events increased. The number of patients and events required to limit the risk of overestimation depended considerably on the underlying simulation settings. 
Surpassing the optimal information size generally provided sufficient protection against overestimation. Conclusions Random errors are a frequent cause of overestimation of intervention effects in meta-analyses. Surpassing the optimal information size will provide sufficient protection against overestimation. PMID:22028777
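The kind of scenario simulated can be sketched as follows, with a crude pooled risk-ratio estimator and invented parameters; the paper's scenarios additionally vary heterogeneity and the distribution of trial sizes:

```python
import random

def prob_overestimate(n_trials=20, n_per_arm=50, p_control=0.2,
                      true_rrr=0.0, threshold=0.2, reps=1000, seed=3):
    """Probability that a pooled relative-risk-reduction (RRR) estimate
    exceeds `threshold` when the true RRR is `true_rrr`. Pooling here simply
    sums events across trials; this is a deliberately crude sketch of the
    random-error mechanism, not the paper's full simulation design."""
    rng = random.Random(seed)
    p_treat = p_control * (1.0 - true_rrr)
    n_arm = n_trials * n_per_arm
    hits = 0
    for _ in range(reps):
        events_ctrl = sum(rng.random() < p_control for _ in range(n_arm))
        events_trt = sum(rng.random() < p_treat for _ in range(n_arm))
        rrr_hat = 1.0 - events_trt / max(events_ctrl, 1)   # estimated RRR
        if rrr_hat > threshold:
            hits += 1
    return hits / reps
```

With only a handful of patients and events, chance alone produces apparent RRRs above 20% under the null quite often; as accumulated patients and events grow past the optimal information size, that probability decays toward zero, which is the protective effect the paper quantifies.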
Coblentz, W K; Muck, R E
2012-11-01
The frustrations of forage producers attempting to conserve high-quality alfalfa (Medicago sativa L.) silage during periods of unstable or inclement weather are widely known. Our objectives for this series of studies were to (1) assess indicators of ensilability, such as pH, buffering capacity, water-soluble carbohydrates (WSC), and starch for wilting alfalfa forages receiving no rainfall or damaged by simulated or natural rainfall events; (2) use these data as inputs to calculate the threshold moisture concentration that would prohibit a clostridially dominated fermentation; and (3) further evaluate the effects of rain damage or no rain damage on measures of forage nutritive value. Rainfall events were applied to wilting forages by both simulated and natural methods over multiple studies distributed across 4 independent forage harvests. Generally, simulated rainfall was applied to alfalfa under controlled conditions in which forages were relatively wet at the time of application, and subsequently were dried to final moisture endpoints under near-ideal conditions within a constant temperature/humidity environmental chamber, thereby limiting postwetting wilting time to ≤21 h. As a result, indicators of ensilability, as well as measures of nutritive value, changed only marginally as a result of treatment. Consistently, reductions in concentrations of WSC and starch occurred, but changes in WSC were relatively modest, and postwetting concentrations of WSC may have been buoyed by hydrolysis of starch. When forages were subjected to natural rainfall events followed by prolonged exposure under field conditions, indicators of ensilability were much less desirable. In one study in which alfalfa received 49.3 mm of natural rainfall over a prolonged (8-d) field-exposure period, fresh pH increased from 6.48 to 7.43 within all forages exposed to these extended, moist wilting conditions. Furthermore, sharp reductions were observed in buffering capacity (410 vs.
337 meq/kg of DM), WSC (6.13 vs. 2.90%), starch (2.28 vs. 0.45%), and clostridially dominated fermentation (62.7 vs. 59.4%). Based on these experiments, the potential for good fermentation is affected only minimally by single rainfall events applied to relatively wet forages, provided these events are followed by rapid dehydration; however, attaining acceptable silage fermentations with forages subjected to prolonged exposure under poor drying conditions is likely to be far more problematic. Copyright © 2012 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
Analyzing Single-Event Gate Ruptures In Power MOSFET's
NASA Technical Reports Server (NTRS)
Zoutendyk, John A.
1993-01-01
Susceptibilities of power metal-oxide/semiconductor field-effect transistors (MOSFET's) to single-event gate rupture were analyzed by exposing devices to beams of energetic bromine ions while applying appropriate bias voltages to the source, gate, and drain terminals and measuring the current flowing into or out of each terminal.
LUXSim: A component-centric approach to low-background simulations
Akerib, D. S.; Bai, X.; Bedikian, S.; ...
2012-02-13
Geant4 has been used throughout the nuclear and high-energy physics community to simulate energy depositions in various detectors and materials. These simulations have mostly been run with a source beam outside the detector. In the case of low-background physics, however, a primary concern is the effect on the detector from radioactivity inherent in the detector parts themselves. From this standpoint, there is no single source or beam, but rather a collection of sources with potentially complicated spatial extent. LUXSim is a simulation framework used by the LUX collaboration that takes a component-centric approach to event generation and recording. A new set of classes allows for multiple radioactive sources to be set within any number of components at run time, with the entire collection of sources handled within a single simulation run. Various levels of information can also be recorded from the individual components, with these record levels also being set at runtime. This flexibility in both source generation and information recording is possible without the need to recompile, reducing the complexity of code management and the proliferation of versions. Within the code itself, casting geometry objects within this new set of classes rather than as the default Geant4 classes automatically extends this flexibility to every individual component. No additional work is required on the part of the developer, reducing development time and increasing confidence in the results. Here, we describe the guiding principles behind LUXSim, detail some of its unique classes and methods, and give examples of usage.
NASA Technical Reports Server (NTRS)
Scheick, Leif
2014-01-01
Recent testing of the EPC1000 series eGaN FETs has shown sensitivity to destructive Single Event Effects (SEE). These effects are most likely due to failure of the very thin gate structure in the HEMT architecture. EPC has recently changed the doping of the substrate to improve both the performance and the SEE response. This testing compares the SEE response of both devices.
Decline in Radiation Hardened Microcircuit Infrastructure
NASA Technical Reports Server (NTRS)
LaBel, Kenneth A.
2015-01-01
Two areas of radiation hardened microcircuit infrastructure will be discussed: 1) the availability and performance of radiation hardened microcircuits, and 2) the access to radiation test facilities primarily for proton single event effects (SEE) testing. Other areas that are not discussed but remain a concern include the challenge of maintaining radiation effects tool access for assurance purposes, and the access to radiation test facilities primarily for heavy ion single event effects (SEE) testing. Status and implications will be discussed for each area.
Diagnosis of NMOS DRAM functional performance as affected by a picosecond dye laser
NASA Technical Reports Server (NTRS)
Kim, Q.; Schwartz, H. R.; Edmonds, L. D.; Zoutendyk, J. A.
1992-01-01
A picosecond pulsed dye laser beam at selected wavelengths was successfully used to simulate heavy-ion single-event effects (SEEs) in n-channel MOS (NMOS) DRAMs. A DRAM was used to develop the test technique because bit-mapping capability and previous heavy-ion upset data were available. The present analysis is the first to establish such a correlation between laser and heavy-ion data for devices, such as the NMOS DRAM, where charge collection is dominated by long-range diffusion, which is controlled by carrier density at remote distances from a depletion region. In the latter case, penetration depth is an important parameter and is included in the present analysis. A single-pulse picosecond dye laser beam (1.5 microns diameter) focused onto a single cell component can upset a single memory cell; clusters of memory cell upsets (multiple errors) were observed when the laser energy was increased above the threshold energy. The multiple errors were analyzed as a function of the bias voltage and total energy of a single pulse. A diffusion model to distinguish the multiple upsets from the laser-induced charge agreed well with previously reported heavy-ion data.
Supporting Beacon and Event-Driven Messages in Vehicular Platoons through Token-Based Strategies
Balador, Ali; Uhlemann, Elisabeth; Calafate, Carlos T; Cano, Juan-Carlos
2018-03-23
Timely and reliable inter-vehicle communications is a critical requirement to support traffic safety applications, such as vehicle platooning. Furthermore, low-delay communications allow the platoon to react quickly to unexpected events. In this scope, having a predictable and highly effective medium access control (MAC) method is of utmost importance. However, the currently available IEEE 802.11p technology is unable to adequately address these challenges. In this paper, we propose a MAC method especially adapted to platoons, able to transmit beacons within the required time constraints, but with a higher reliability level than IEEE 802.11p, while concurrently enabling efficient dissemination of event-driven messages. The protocol circulates the token within the platoon not in a round-robin fashion, but based on beacon data age, i.e., the time that has passed since the previous collection of status information, thereby automatically offering repeated beacon transmission opportunities for increased reliability. In addition, we propose three different methods for supporting event-driven messages co-existing with beacons. Analysis and simulation results in single and multi-hop scenarios showed that, by providing non-competitive channel access and frequent retransmission opportunities, our protocol can offer beacon delivery within one beacon generation interval while fulfilling the requirements on low-delay dissemination of event-driven messages for traffic safety applications. PMID: 29570676
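The data-age token rule described in this abstract can be sketched in a few lines: the token is granted to the platoon member whose last beacon is oldest, so vehicles whose beacons were lost keep an old timestamp and are automatically revisited. A minimal illustrative sketch; the function names and the timestamp dictionary are hypothetical, not from the paper:

```python
# Hypothetical sketch of token circulation by beacon data age: the token goes
# to the platoon member whose status information is oldest, not round-robin.

def next_token_holder(last_beacon_time):
    """Return the vehicle id whose last beacon is oldest (smallest timestamp)."""
    return min(last_beacon_time, key=last_beacon_time.get)

def pass_token(last_beacon_time, now):
    """Grant the token; the holder transmits a fresh beacon, resetting its age."""
    holder = next_token_holder(last_beacon_time)
    last_beacon_time[holder] = now  # beacon transmitted: data age resets
    return holder
```

Because a failed beacon leaves its timestamp unchanged, repeated token passes naturally revisit that vehicle, giving the repeated transmission opportunities the abstract describes.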
Hybrid stochastic and deterministic simulations of calcium blips.
Rüdiger, S; Shuai, J W; Huisinga, W; Nagaiah, C; Warnecke, G; Parker, I; Falcke, M
2007-09-15
Intracellular calcium release is a prime example of the role of stochastic effects in cellular systems. Recent models consist of deterministic reaction-diffusion equations coupled to stochastic transitions of calcium channels. The resulting dynamics spans multiple time and spatial scales, which complicates far-reaching computer simulations. In this article, we introduce a novel hybrid scheme that is especially tailored to accurately trace events with essential stochastic variations, while deterministic concentration variables are efficiently and accurately traced at the same time. We use finite elements to efficiently resolve the extreme spatial gradients of concentration variables close to a channel. We describe the algorithmic approach and we demonstrate its efficiency compared to conventional methods. Our single-channel model matches experimental data and results in intriguing dynamics if calcium is used as the charge carrier. Random openings of the channel accumulate in bursts of calcium blips that may be central to the understanding of cellular calcium dynamics.
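The hybrid idea above, deterministic concentration dynamics coupled to stochastic channel gating, can be illustrated with a toy one-channel model: a two-state Markov channel updated stochastically each timestep, feeding an explicit Euler step for the calcium concentration. All rate constants and names below are invented for illustration; the paper's actual scheme uses finite elements and reaction-diffusion equations:

```python
import math
import random

# Toy two-state channel coupled to a deterministic concentration ODE, in the
# spirit of the hybrid stochastic/deterministic scheme described above.
# All rate constants here are made-up illustrative numbers.

def hybrid_step(c, is_open, dt, rng, k_open=50.0, k_close=5.0, j_in=10.0, k_out=2.0):
    """Advance one timestep: stochastic channel gating, then an explicit
    Euler update of the deterministic calcium concentration c."""
    rate = k_close if is_open else k_open
    if rng.random() < 1.0 - math.exp(-rate * dt):   # P(state flip within dt)
        is_open = not is_open
    flux = (j_in if is_open else 0.0) - k_out * c   # release minus clearance
    return c + dt * flux, is_open
```

Random channel openings drive brief rises in c (blips), while the concentration itself evolves deterministically between gating events.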
Single-event effects in avionics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Normand, E.
1996-04-01
The occurrence of single-event upset (SEU) in aircraft electronics has evolved from a series of interesting anecdotal incidents to accepted fact. A study completed in 1992 demonstrated that SEUs are real, that the measured in-flight rates correlate with the atmospheric neutron flux, and that the rates can be calculated using laboratory SEU data. Once avionics SEU was shown to be an actual effect, it had to be dealt with in avionics designs. The major concern is in random access memories (RAMs), both static (SRAMs) and dynamic (DRAMs), because these microelectronic devices contain the largest number of bits, but other parts, such as microprocessors, are also potentially susceptible to upset. In addition, other single-event effects (SEEs), specifically latch-up and burnout, can also be induced by atmospheric neutrons.
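The abstract notes that in-flight upset rates correlate with the atmospheric neutron flux and can be calculated from laboratory SEU data. A common first-order estimate simply multiplies the flux by a measured per-bit upset cross-section and the bit count; the numbers below are placeholders for illustration, not values from the study:

```python
# First-order SEU rate estimate: flux * per-bit cross-section * bit count.
# All numeric values below are illustrative placeholders, not from the study.

def seu_rate(neutron_flux_cm2_s, cross_section_cm2_per_bit, n_bits):
    """Upsets per second for a memory of n_bits under the given flux."""
    return neutron_flux_cm2_s * cross_section_cm2_per_bit * n_bits

# Hypothetical avionics scenario: 6000 n/cm^2/h flux, 1e-14 cm^2/bit, 4 Mibit SRAM.
rate = seu_rate(6000.0 / 3600.0, 1e-14, 4 * 2**20)  # upsets per second
upsets_per_hour = rate * 3600.0
```

Real predictions integrate the cross-section over the neutron energy spectrum; this product form is only the leading-order version of that calculation.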
Quantum Corrections to the 'Atomistic' MOSFET Simulations
NASA Technical Reports Server (NTRS)
Asenov, Asen; Slavcheva, G.; Kaya, S.; Balasubramaniam, R.
2000-01-01
We have introduced quantum mechanical corrections into our 3D 'atomistic' MOSFET simulator in a simple and efficient manner using the density gradient formalism. In comparison with classical simulations, we have studied the effect of the quantum mechanical corrections on the simulation of random-dopant-induced threshold voltage fluctuations, the effect of single-charge trapping on interface states, and the effect of oxide thickness fluctuations in decanano MOSFETs with ultrathin gate oxides. The introduction of quantum corrections enhances the threshold voltage fluctuations but does not significantly affect the amplitude of the random telegraph noise associated with single-carrier trapping. The importance of the quantum corrections for proper simulation of oxide thickness fluctuation effects has also been demonstrated.
Erosion of tungsten armor after multiple intense transient events in ITER
NASA Astrophysics Data System (ADS)
Bazylev, B. N.; Janeschitz, G.; Landman, I. S.; Pestchanyi, S. E.
2005-03-01
Macroscopic erosion by melt motion is the dominating damage mechanism for tungsten armour under high-heat loads with energy deposition W > 1 MJ/m² and τ > 0.1 ms. For ITER divertor armour, the results of a fluid dynamics simulation of the melt motion erosion after repetitive, stochastically varying plasma heat loads of consecutive disruptions interspaced by ELMs are presented. The heat loads for particular single transient events are numerically simulated using the two-dimensional MHD code FOREV-2D. The whole melt motion is calculated by the fluid dynamics code MEMOS-1.5D. In addition, for the ITER dome, the melt motion erosion of tungsten armour caused by the lateral radiation impact from the plasma shield at the disruption and ELM heat loads is estimated.
Joint independent component analysis for simultaneous EEG-fMRI: principle and simulation.
Moosmann, Matthias; Eichele, Tom; Nordby, Helge; Hugdahl, Kenneth; Calhoun, Vince D
2008-03-01
An optimized scheme for the fusion of electroencephalography and event-related potentials with functional magnetic resonance imaging (BOLD-fMRI) data should simultaneously assess all available electrophysiologic and hemodynamic information in a common data space. In doing so, it should be possible to identify features of latent neural sources whose trial-to-trial dynamics are jointly reflected in both modalities. We present a joint independent component analysis (jICA) model for analysis of simultaneous single-trial EEG-fMRI measurements from multiple subjects. We outline the general idea underlying the jICA approach and present results from simulated data under realistic noise conditions. Our results indicate that this approach is a feasible and physiologically plausible data-driven way to achieve spatiotemporal mapping of event-related responses in the human brain.
SEPEM: A tool for statistical modeling the solar energetic particle environment
NASA Astrophysics Data System (ADS)
Crosby, Norma; Heynderickx, Daniel; Jiggens, Piers; Aran, Angels; Sanahuja, Blai; Truscott, Pete; Lei, Fan; Jacobs, Carla; Poedts, Stefaan; Gabriel, Stephen; Sandberg, Ingmar; Glover, Alexi; Hilgers, Alain
2015-07-01
Solar energetic particle (SEP) events are a serious radiation hazard for spacecraft as well as a severe health risk to humans traveling in space. Indeed, accurate modeling of the SEP environment constitutes a priority requirement for astrophysics and solar system missions and for human exploration in space. The European Space Agency's Solar Energetic Particle Environment Modelling (SEPEM) application server is a World Wide Web interface to a complete set of cross-calibrated data ranging from 1973 to 2013 as well as new SEP engineering models and tools. Both statistical and physical modeling techniques have been included, in order to cover the environment not only at 1 AU but also in the inner heliosphere ranging from 0.2 AU to 1.6 AU using a newly developed physics-based shock-and-particle model to simulate particle flux profiles of gradual SEP events. With SEPEM, SEP peak flux and integrated fluence statistics can be studied, as well as durations of high SEP flux periods. Furthermore, effects tools are also included to allow calculation of single event upset rate and radiation doses for a variety of engineering scenarios.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Higginson, Drew P.
2017-08-12
Here, we describe and justify a full-angle scattering (FAS) method to faithfully reproduce the accumulated differential angular Rutherford scattering probability distribution function (pdf) of particles in a plasma. The FAS method splits the scattering events into two regions. At small angles it is described by cumulative scattering events resulting, via the central limit theorem, in a Gaussian-like pdf; at larger angles it is described by single-event scatters and retains a pdf that follows the form of the Rutherford differential cross-section. The FAS method is verified using discrete Monte-Carlo scattering simulations run at small timesteps to include each individual scattering event. We identify the FAS regime of interest as where the ratio of temporal/spatial scale-of-interest to slowing-down time/length is from 10⁻³ to 0.3–0.7; the upper limit corresponds to a Coulomb logarithm of 20–2, respectively. Two test problems, high-velocity interpenetrating plasma flows and keV-temperature ion equilibration, are used to highlight systems where including FAS is important to capture relevant physics.
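The two-region split described above can be caricatured in a few lines: each timestep applies a Gaussian cumulative small-angle deflection, and with small probability a single hard scatter drawn from a Rutherford-like 1/θ³ tail between cutoff angles. This is only a schematic sketch under those simplifying assumptions (the tail form, cutoffs, and parameter names are ours), not the FAS implementation itself:

```python
import random

# Schematic split of small-angle (cumulative, Gaussian) and large-angle
# (single-event, Rutherford-like) scattering. The 1/theta^3 tail and all
# parameters are simplifying assumptions for illustration only.

def sample_large_angle(theta_min, theta_max, rng):
    """Inverse-CDF sample from p(theta) ~ theta**-3 on [theta_min, theta_max]."""
    u = rng.random()
    inv2 = theta_min**-2 - u * (theta_min**-2 - theta_max**-2)
    return inv2**-0.5

def scatter_step(theta_rms, p_large, theta_min, theta_max, rng):
    """One timestep of deflection: Gaussian cumulative small-angle kick, plus,
    with probability p_large, a single hard Rutherford-like scatter."""
    dtheta = rng.gauss(0.0, theta_rms)          # central-limit small-angle part
    if rng.random() < p_large:                  # rare single-event scatter
        dtheta += rng.choice([-1.0, 1.0]) * sample_large_angle(theta_min, theta_max, rng)
    return dtheta
```

The inverse-CDF step follows from integrating p(θ) ∝ θ⁻³ between the cutoffs; most samples land near θ_min, as a Rutherford-like tail requires.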
Modeling four occurred debris flow events in the Dolomites area (North-Eastern Italian Alps)
NASA Astrophysics Data System (ADS)
Boreggio, Mauro; Gregoretti, Carlo; Degetto, Massimo; Bernard, Martino
2016-04-01
Four debris flow events that occurred in the Dolomites area (North-Eastern Italian Alps) are modeled by back-analysis. The four events are those that occurred at Rio Lazer (Trento) on the 4th of November 1966, at Fiames (Belluno) on the 5th of July 2006, at Rovina di Cancia (Belluno) on the 18th of July 2009 and at Rio Val Molinara (Trento) on the 15th of August 2010. In all the events, runoff entrained sediments present in natural channels and formed a solid-liquid wave that routed downstream. The first event concerns the routing of a debris flow over an inhabited fan; the second, the deviation of a debris flow from its usual path due to an obstruction, with the excavation of a channel in the scree and downstream spreading in a wood; the third, the routing of a debris flow in a channel ending in a reservoir, its overtopping and final spreading in the inhabited area; and the fourth, the routing of a debris flow along the main channel downstream of the initiation area until spreading just upstream of a village. All four debris flows are simulated by modeling the runoff that entrained sediment, to determine the solid-liquid hydrograph. The routing of the solid-liquid hydrograph is simulated by a bi-phase cell model based on the kinematic approach. The comparison between simulated and measured erosion and deposition depths is satisfactory, and nearly the same parameters for computing erosion and deposition were used for all four events. The maps of erosion and deposition depths are obtained by comparing the results of post-event surveys with the pre-event DEM. The post-event surveys were conducted using different instruments (LiDAR and GPS) or a combination of photos and single-point depth measurements (in this last case the deposition/erosion depths can be obtained by means of stereoscopy techniques).
Ground-motion signature of dynamic ruptures on rough faults
NASA Astrophysics Data System (ADS)
Mai, P. Martin; Galis, Martin; Thingbaijam, Kiran K. S.; Vyas, Jagdish C.
2016-04-01
Natural earthquakes occur on faults characterized by large-scale segmentation and small-scale roughness. This multi-scale geometrical complexity controls the dynamic rupture process, and hence strongly affects the radiated seismic waves and near-field shaking. For a fault system with given segmentation, the question arises as to the conditions for producing large-magnitude multi-segment ruptures, as opposed to smaller single-segment events. Similarly, for variable degrees of roughness, ruptures may be arrested prematurely or may break the entire fault. In addition, fault roughness induces rupture incoherence that determines the level of high-frequency radiation. Using HPC-enabled dynamic-rupture simulations, we generate physically self-consistent rough-fault earthquake scenarios (M~6.8) and their associated near-source seismic radiation. Because these computations are too expensive to be conducted routinely for simulation-based seismic hazard assessment, we strive to develop an effective pseudo-dynamic source characterization that produces (almost) the same ground-motion characteristics. Therefore, we examine how variable degrees of fault roughness affect rupture properties and the seismic wavefield, and develop a planar-fault kinematic source representation that emulates the observed dynamic behaviour. We propose an effective workflow for improved pseudo-dynamic source modelling that incorporates rough-fault effects and the associated high-frequency radiation in broadband ground-motion computation for simulation-based seismic hazard assessment.
NASA Technical Reports Server (NTRS)
Stassinopoulos, E. G.; Brucker, G. J.; Calvel, P.; Baiget, A.; Peyrotte, C.; Gaillard, R.
1992-01-01
The transport, energy loss, and charge production of heavy ions in the sensitive regions of IRF 150 power MOSFETs are described. The dependence and variation of transport parameters with ion type and energy relative to the requirements for single event burnout in this part type are discussed. Test data taken with this power MOSFET are used together with analyses by means of a computer code of the ion energy loss and charge production in the device to establish criteria for burnout and parameters for space predictions. These parameters are then used in an application to predict burnout rates in a geostationary orbit for power converters operating in a dynamic mode. Comparisons of rates for different geometries in simulating SEU (single event upset) sensitive volumes are presented.
Parallel discrete-event simulation of FCFS stochastic queueing networks
NASA Technical Reports Server (NTRS)
Nicol, David M.
1988-01-01
Physical systems are inherently parallel. Intuition suggests that simulations of these systems may be amenable to parallel execution. The parallel execution of a discrete-event simulation requires careful synchronization of processes in order to ensure the execution's correctness; this synchronization can degrade performance. Largely negative results were recently reported in a study which used a well-known synchronization method on queueing network simulations. Discussed here is a synchronization method (appointments), which has proven itself to be effective on simulations of FCFS queueing networks. The key concept behind appointments is the provision of lookahead. Lookahead is a prediction of a processor's future behavior, based on an analysis of the processor's simulation state. It is shown how lookahead can be computed for FCFS queueing network simulations; performance data are given that demonstrate the method's effectiveness under moderate to heavy loads; and performance tradeoffs between the quality of lookahead and the cost of computing it are discussed.
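The appointment idea, promising a lower bound on when a processor's next event can occur, can be sketched for a single FCFS server: if every service time is at least s_min, a busy server cannot emit a departure before the in-service job's start time plus s_min, and an idle one not before now plus s_min. The helper below is an illustrative simplification under those assumptions, not the paper's algorithm:

```python
# Lookahead sketch for an FCFS server with service times bounded below by
# s_min. An "appointment" is the earliest time this server could produce its
# next departure event; other processors may safely simulate up to that time.
# This is a simplifying illustration, not the paper's implementation.

def appointment(now, busy_since, s_min):
    """Earliest possible next-departure time for one FCFS server.

    busy_since is the start time of the job currently in service, or None
    if the server is idle (a job must arrive and then be served)."""
    if busy_since is None:                 # idle: no departure before now + s_min
        return now + s_min
    return max(now, busy_since + s_min)    # busy: job cannot finish earlier
```

The larger s_min is relative to the event density, the more lookahead each appointment provides, matching the quality-versus-cost tradeoff the abstract discusses.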
Piggot, Thomas J; Sessions, Richard B; Burston, Steven G
2012-02-28
GroEL, along with its coprotein GroES, is essential for ensuring the correct folding of unfolded or newly synthesized proteins in bacteria. GroEL is a complex, allosteric molecule, composed of two heptameric rings stacked back to back, that undergoes large structural changes during its reaction cycle. These structural changes are driven by the cooperative binding and subsequent hydrolysis of ATP, by GroEL. Despite numerous previous studies, the precise mechanisms of allosteric communication and the associated structural changes remain elusive. In this paper, we describe a series of all-atom, unbiased, molecular dynamics simulations over relatively long (50-100 ns) time scales of a single, isolated GroEL subunit and also a heptameric GroEL ring, in the presence and absence of ATP. Combined with results from a distance restraint-biased simulation of the single ring, the atomistic details of the earliest stages of ATP-driven structural changes within this complex molecule are illuminated. Our results are in broad agreement with previous modeling studies of isolated subunits and with a coarse-grained, forcing simulation of the single ring. These are the first reported all-atom simulations of the GroEL single-ring complex and provide a unique insight into the role of charged residues K80, K277, R284, R285, and E388 at the subunit interface in transmission of the allosteric signal. These simulations also demonstrate the feasibility of performing all-atom simulations of very large systems on sufficiently long time scales on typical high performance computing facilities to show the origins of the earliest events in biologically relevant processes.
Ottsen, Christina Lundsgaard; Berntsen, Dorthe
2015-12-01
Mental time travel is the ability to remember past events and imagine future events. Here, 124 Middle Easterners and 128 Scandinavians generated important past and future events. These different societies present a unique opportunity to examine effects of culture. Findings indicate stronger influence of normative schemas and greater use of mental time travel to teach, inform and direct behaviour in the Middle East compared with Scandinavia. The Middle Easterners generated more events that corresponded to their cultural life script and that contained religious words, whereas the Scandinavians reported events with a more positive mood impact. Effects of gender were mainly found in the Middle East. Main effects of time orientation largely replicated recent findings showing that simulation of future and past events are not necessarily parallel processes. In accordance with the notion that future simulations rely on schema-based construction, important future events showed a higher overlap with life script events than past events in both cultures. In general, cross-cultural discrepancies were larger in future compared with past events. Notably, the high focus in the Middle East on sharing future events to give cultural guidance is consistent with the increased adherence to normative scripts found in this culture. Copyright © 2015 Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Z; Gao, M
Purpose: Monte Carlo (MC) simulation plays an important role for the proton Pencil Beam Scanning (PBS) technique. However, MC simulation demands high computing power and is limited to the few large proton centers that can afford a computer cluster. We study the feasibility of utilizing cloud computing in the MC simulation of PBS beams. Methods: A GATE/GEANT4-based MC simulation software was installed on a commercial cloud computing virtual machine (Linux 64-bit, Amazon EC2). Single-spot Integral Depth Dose (IDD) curves and in-air transverse profiles were used to tune the source parameters to simulate an IBA machine. With the use of the StarCluster software developed at MIT, a Linux cluster with 2–100 nodes can be conveniently launched in the cloud. A proton PBS plan was then exported to the cloud where the MC simulation was run. Results: The simulated PBS plan has a field size of 10×10 cm², 20 cm range, 10 cm modulation, and contains over 10,000 beam spots. EC2 instance type m1.medium was selected considering the CPU/memory requirements, and 40 instances were used to form a Linux cluster. To minimize cost, the master node was created as an on-demand instance and the worker nodes were created as spot instances. The hourly cost for the 40-node cluster was $0.63 and the projected cost for a 100-node cluster was $1.41. Ten million events were simulated to plot the PDD and profile, with each job containing 500k events. The simulation completed within 1 hour and an overall statistical uncertainty of < 2% was achieved. Good agreement between MC simulation and measurement was observed. Conclusion: Cloud computing is a cost-effective and easy-to-maintain platform to run proton PBS MC simulation. When proton MC packages such as GATE and TOPAS are combined with cloud computing, it will greatly facilitate the pursuit of PBS MC studies, especially for newly established proton centers or individual researchers.
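The job-splitting arithmetic in the abstract (ten million events split into 500k-event jobs, $0.63/h for the 40-node cluster) reduces to a couple of one-liners; the helper names below are hypothetical, but the numbers come from the study:

```python
import math

# Back-of-envelope job splitting and cost for a cloud MC run; helper names
# are illustrative, the numeric inputs are the ones reported in the abstract.

def split_jobs(total_events, events_per_job):
    """Number of independent MC jobs needed to cover total_events."""
    return math.ceil(total_events / events_per_job)

def cluster_cost(hourly_rate, hours):
    """Total cluster cost for a run of the given duration."""
    return hourly_rate * hours

jobs = split_jobs(10_000_000, 500_000)   # 10M events in 500k-event jobs -> 20 jobs
cost = cluster_cost(0.63, 1.0)           # the 40-node run finished within 1 hour
```

Because each 500k-event job is statistically independent, the jobs parallelize trivially across worker nodes, which is what makes spot instances attractive here.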
NASA Astrophysics Data System (ADS)
Gonzalez Lazo, Eduardo; Cruz Inclán, Carlos M.; Rodríguez Rodríguez, Arturo; Guzmán Martínez, Fernando; Abreu Alfonso, Yamiel; Piñera Hernández, Ibrahin; Leyva Fabelo, Antonio
2017-09-01
A first approach to evaluating the influence of point defects such as vacancies on atom displacement threshold energy values Td in BaTiO3 is attempted. For this purpose, Molecular Dynamics (MD) methods were applied, based on previous Td calculations for an ideal tetragonal crystalline structure. This is an important issue in achieving more realistic simulations of radiation damage effects in BaTiO3 ceramic materials, including irradiated samples suffering severe radiation damage due to high-fluence exposures. In addition to the above-mentioned atom displacement events supported by a single primary knock-on atom (PKA), a new mechanism was introduced: the simultaneous excitation of two close primary knock-on atoms in BaTiO3, which might take place under high-flux irradiation. Therefore, two different BaTiO3 Td MD calculation trials were accomplished. Firstly, single PKA excitations were considered in a defective BaTiO3 tetragonal crystalline structure, consisting of a 2×2×2 BaTiO3 perovskite-like supercell containing vacancies on Ba and O atomic positions under the requirements of electrical charge balance. Alternatively, double PKA excitations in a perfect BaTiO3 tetragonal unit cell were also simulated. On this basis, the corresponding PKA defect formation probability functions were calculated along principal crystal directions and compared with those we previously calculated and reported for an ideal BaTiO3 tetragonal crystal structure. As a general result, the present calculations show a reduction of Td values in comparison with those calculated for single PKA excitation in an ideal BaTiO3 crystal structure.
Revealing the Effects of Nanoscale Membrane Curvature on Lipid Mobility.
Kabbani, Abir Maarouf; Woodward, Xinxin; Kelly, Christopher V
2017-10-18
Recent advances in nanoengineering and super-resolution microscopy have enabled new capabilities for creating and observing membrane curvature. However, the effects of curvature on single-lipid diffusion have yet to be revealed. The simulations presented here describe the capabilities of varying experimental methods for revealing the effects of nanoscale curvature on single-molecule mobility. Traditionally, lipid mobility is revealed through fluorescence recovery after photobleaching (FRAP), fluorescence correlation spectroscopy (FCS), and single particle tracking (SPT). However, these techniques vary greatly in their ability to detect the effects of nanoscale curvature on lipid behavior. Both FRAP and FCS depend on diffraction-limited illumination and detection. A simulation of FRAP shows minimal effects on lipid diffusion due to a 50 nm radius membrane bud. Throughout the stages of the budding process, FRAP detected minimal changes in lipid recovery time due to the curvature versus flat membrane. Simulated FCS demonstrated small effects due to a 50 nm radius membrane bud that were more apparent with curvature-dependent lipid mobility changes. However, SPT achieves a sub-diffraction-limited resolution of membrane budding and lipid mobility through the identification of single-lipid positions with ≤15 nm spatial and ≤20 ms temporal resolution. By mapping the single-lipid step lengths to locations on the membrane, the effects of membrane topography and curvature could be correlated to the effective membrane viscosity. Single-fluorophore localization techniques, such as SPT, can detect membrane curvature and its effects on lipid behavior. These simulations and discussion provide a guideline for optimizing the experimental procedures in revealing the effects of curvature on lipid mobility and effective local membrane viscosity.
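The SPT analysis described above maps single-lipid step lengths to mobility; in two dimensions the simplest estimator is D ≈ ⟨Δr²⟩/(4Δt) over single steps. A minimal sketch of that estimator, ignoring localization error and motion blur, which real SPT analysis must handle:

```python
# Minimal 2D diffusion-coefficient estimate from a single-particle track,
# using the mean squared single-step displacement: D = <dr^2> / (4 dt).
# Illustrative only: real SPT must correct for localization error and blur.

def msd_diffusion_2d(positions, dt):
    """Estimate D from positions [(x, y), ...] sampled at uniform spacing dt."""
    steps = [(x2 - x1) ** 2 + (y2 - y1) ** 2
             for (x1, y1), (x2, y2) in zip(positions, positions[1:])]
    return sum(steps) / (len(steps) * 4.0 * dt)
```

Binning such per-step estimates by position on the membrane is what lets SPT correlate local curvature with effective viscosity, as the abstract describes.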
Meson exchange current (MEC) models in neutrino interaction generators
DOE Office of Scientific and Technical Information (OSTI.GOV)
Katori, Teppei
2015-05-15
Understanding of the so-called 2 particle-2 hole (2p-2h) effect is an urgent program in neutrino interaction physics for current and future oscillation experiments. Such processes are believed to be responsible for the event excesses observed by recent neutrino experiments. The 2p-2h effect is dominated by the meson exchange current (MEC), and is accompanied by a 2-nucleon emission from the primary vertex, instead of a single nucleon emission from the charged-current quasi-elastic (CCQE) interaction. Current and future high resolution experiments can potentially nail down this effect. For this reason, there are worldwide efforts to model and implement this process in neutrino interaction simulations. In these proceedings, I would like to describe how this channel is modeled in neutrino interaction generators.
NASA Astrophysics Data System (ADS)
Charbonnier, S. J.; Gertisser, R.
2009-10-01
We present Titan2D simulations of two well-characterized block-and-ash flow (BAF) events of the 2006 eruption of Merapi (Java, Indonesia) that affected the Gendol valley on the volcano’s southern flank and adjacent, densely populated interfluve (non-valley) areas: (1) a single dome-collapse event to the south that generated one of the smaller, post-June 14 flows and (2) a sustained, multiple dome-collapse event, also directed to the south, that produced the largest flows of the 2006 eruption, emplaced in the afternoon of June 14. Using spatially varying bed friction angles, Titan2D is capable of reproducing the paths, velocities, runout distances, areas covered and volumes deposited by these flows over highly complex topography. The model results provide the basis for estimating the areas and levels of hazards associated with BAFs generated during relatively short as well as prolonged dome-collapse periods and guidance during future eruptive crises at Merapi.
Particle Identification in Nuclear Emulsion by Measuring Multiple Coulomb Scattering
NASA Astrophysics Data System (ADS)
Than Tint, Khin; Nakazawa, Kazuma; Yoshida, Junya; Kyaw Soe, Myint; Mishina, Akihiro; Kinbara, Shinji; Itoh, Hiroki; Endo, Yoko; Kobayashi, Hidetaka; E07 Collaboration
2014-09-01
We are developing particle identification techniques for singly charged particles such as Xi, proton, K and π by measuring multiple Coulomb scattering in nuclear emulsion. Nuclear emulsion is the best three-dimensional detector for double strangeness (S = -2) nuclear systems. We expect to accumulate about 10000 Xi-minus stop events which produce double lambda hypernuclei in the J-PARC E07 emulsion counter hybrid experiment. The purpose of this particle identification (PID) in nuclear emulsion is to purify Xi-minus stop events, which give information about the production probability of double hypernuclei and the branching ratios of decay modes. The amount of scattering, parameterized as the angular distribution and the second difference, is inversely proportional to the momentum of the particle. We produced several thousand charged-particle tracks of various types in a nuclear emulsion stack via Geant4 simulation. In this talk, PID using several methods of measuring multiple scattering will be discussed by comparing the simulation data with real Xi-minus stop events from the KEK-E373 experiment.
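The inverse relation between scattering and momentum invoked above is conventionally expressed through the Highland formula for the RMS multiple-scattering angle. The sketch below is a generic illustration of that relation (not the collaboration's code); the β, momentum, and thickness values are assumptions chosen only to show the scaling:

```python
import math

def highland_theta0(p_mev, beta, x_over_X0, z=1):
    """RMS projected multiple-scattering angle (radians), Highland formula.
    p_mev: momentum in MeV/c; beta: v/c; x_over_X0: path length in
    radiation lengths; z: particle charge number."""
    return (13.6 / (beta * p_mev)) * z * math.sqrt(x_over_X0) * (
        1.0 + 0.038 * math.log(x_over_X0))

# Illustrative values: halving the momentum doubles the scattering angle,
# which is what makes momentum (and hence PID) recoverable from emulsion.
t1 = highland_theta0(500.0, 0.9, 0.1)
t2 = highland_theta0(250.0, 0.9, 0.1)
```

In practice the angular distribution and second difference measured along a track play the role of θ0 here.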
NASA Technical Reports Server (NTRS)
Slassi-Sennou, S. A.; Boggs, S. E.; Feffer, P. T.; Lin, R. P.
1997-01-01
Pulse Shape Discrimination (PSD) for background reduction will be used in the INTErnational Gamma Ray Astrophysics Laboratory (INTEGRAL) imaging spectrometer (SPI) to improve the sensitivity from 200 keV to 2 MeV. The observation of significant astrophysical gamma ray lines in this energy range is expected, where the dominant component of the background is β⁻ decay in the Ge detectors due to the activation of Ge nuclei by cosmic rays. The sensitivity of the SPI will be improved by rejecting β⁻ decay events while retaining photon events. The PSD technique will distinguish between single and multiple site events. Simulation results of PSD for INTEGRAL-type Ge detectors using a numerical model for pulse shape generation are presented. The model was shown to agree with the experimental results for a narrow inner bore closed end cylindrical detector. Using PSD, a sensitivity improvement factor of the order of 2.4 at 0.8 MeV is expected.
Using HFire for spatial modeling of fire in shrublands
Seth H. Peterson; Marco E. Morais; Jean M. Carlson; Philip E. Dennison; Dar A. Roberts; Max A. Moritz; David R. Weise
2009-01-01
An efficient raster fire-spread model named HFire is introduced. HFire can simulate single-fire events or long-term fire regimes, using the same fire-spread algorithm. This paper describes the HFire algorithm, benchmarks the model using a standard set of tests developed for FARSITE, and compares historical and predicted fire spread perimeters for three southern...
Quantifying the Precipitation Loss of Radiation Belt Electrons During a Rapid Dropout Event
NASA Astrophysics Data System (ADS)
Pham, K. H.; Tu, W.; Xiang, Z.
2017-10-01
Relativistic electron flux in the radiation belt can drop by orders of magnitude within the timespan of hours. In this study, we used a drift-diffusion model that includes azimuthal drift and pitch angle diffusion of electrons to simulate the low-altitude electron distribution observed by POES/MetOp satellites for the rapid radiation belt electron dropout event of 1 May 2013. The event shows fast dropout of MeV energy electrons at L > 4 over a few hours, observed by the Van Allen Probes mission. By simulating the electron distributions observed by multiple POES satellites, we resolve the precipitation loss with both high spatial and temporal resolution across a range of energies. We estimate the pitch angle diffusion coefficients as a function of energy, pitch angle, and L-shell and calculate the corresponding electron lifetimes during the event. The simulation results show fast electron precipitation loss at L > 4 during the electron dropout, with estimated electron lifetimes on the order of half an hour for MeV energies. The electron loss rate shows strong energy dependence, with faster loss at higher energies, which suggests that this dropout event is dominated by a fast, localized scattering process that preferentially removes higher-energy electrons. The improved temporal and spatial resolution of electron precipitation rates provided by multiple low-altitude observations can resolve fast-varying electron loss during rapid electron dropouts (over a few hours), which occur too fast for a single low-altitude satellite. The capability of estimating fast-varying electron lifetimes during rapid dropout events is an important step in improving radiation belt model accuracy.
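The "electron lifetime" language above refers to an exponential flux decay j(t) = j0·exp(-t/τ). A minimal sketch of how τ can be recovered from a decaying flux time series (synthetic values, not the POES/Van Allen Probes data) is:

```python
import numpy as np

# Hedged sketch: fit a lifetime tau to j(t) = j0 * exp(-t / tau).
# tau_true ~ half an hour, matching the order of magnitude in the study;
# all numbers are synthetic stand-ins for observed fluxes.
rng = np.random.default_rng(1)
tau_true = 0.5                       # hours
t = np.linspace(0.0, 3.0, 50)        # hours
j = 1e4 * np.exp(-t / tau_true) * rng.lognormal(0.0, 0.02, t.size)

# Log-linear least squares: ln j = ln j0 - t / tau, so tau = -1 / slope.
slope, intercept = np.polyfit(t, np.log(j), 1)
tau_est = -1.0 / slope
```

Repeating such fits per energy and L-shell is one way the energy dependence of the loss rate becomes visible.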
Effects of a simulated agricultural runoff event on sediment toxicity in a managed backwater wetland
USDA-ARS?s Scientific Manuscript database
permethrin (both cis and trans isomers), on 10-day sediment toxicity to Hyalella azteca in a managed natural backwater wetland after a simulated agricultural runoff event. Sediment samples were collected at 10, 40, 100, 300, and 500 m from inflow 13 days prior to amendment and 1, 5, 12, 22, and 36 ...
Measurements and simulations of water transport in maize plants
NASA Astrophysics Data System (ADS)
Heinlein, Florian; Klein, Christian; Thieme, Christoph; Priesack, Eckart
2017-04-01
In Central Europe climate change will become manifest in the increase of extreme weather events like flash floods, heat waves and summer droughts, and in a shift of precipitation towards the winter months. Regional water availability will therefore alter, which has an effect on future crop growth, water use efficiency and yields. To better estimate these effects, accurate model descriptions of transpiration and other parts of the water balance are important. In this study, we determined the transpiration of four maize plants on a field of the research station Scheyern (about 40 km north of Munich) by means of sap flow measurement devices (ICQ International Pty Ltd, Australia) using the Heat-Ratio-Method: two temperature probes, 0.5 cm above and below a heater, detect a heat pulse and its speed, which facilitates the calculation of sap flow. Additionally, high resolution changes of stem diameters were measured with dendrometers (DD-S, Ecomatik). The field was also situated next to an eddy covariance station which provided latent heat fluxes from the soil-plant system. We also performed terrestrial laser scans of the respective plants to extract the plant architectures. These structures serve as input for our mechanistic transpiration model simulating the water transport within the plant. This model, which has already been successfully applied to single Fagus sylvatica L. trees, was adapted to agricultural plants such as maize. The basic principle of this model is to solve a 1-D Richards equation along the graph of the single plants. A comparison between the simulations and the measurements is presented and discussed.
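The heat-ratio method mentioned above infers heat-pulse velocity from the ratio of the temperature rises at the two probes. A minimal sketch of the standard Burgess-style formulation (the thermal diffusivity and probe spacing below are assumed typical values, not the study's calibration) is:

```python
import math

# Heat-ratio method sketch: heat-pulse velocity from the ratio of
# temperature rises measured above (v1) and below (v2) the heater.
# k (sapwood thermal diffusivity, cm^2/s) and x (probe distance from the
# heater, cm) are illustrative assumptions.
def heat_pulse_velocity_cm_hr(v1, v2, k=0.0025, x=0.5):
    """Heat-pulse velocity in cm/hr; positive values mean upward flow."""
    return (k / x) * math.log(v1 / v2) * 3600.0

v_up = heat_pulse_velocity_cm_hr(1.2, 1.0)   # upward flow: v1 > v2
v_zero = heat_pulse_velocity_cm_hr(1.0, 1.0) # equal rises: no net flow
```

The symmetric probe placement is what makes zero flow correspond exactly to a ratio of one.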
Synthetic earthquake catalogs simulating seismic activity in the Corinth Gulf, Greece, fault system
NASA Astrophysics Data System (ADS)
Console, Rodolfo; Carluccio, Roberto; Papadimitriou, Eleftheria; Karakostas, Vassilis
2015-01-01
The characteristic earthquake hypothesis is the basis of time-dependent modeling of earthquake recurrence on major faults. However, the characteristic earthquake hypothesis is not strongly supported by observational data. Few fault segments have long historical or paleoseismic records of individually dated ruptures, and when data and parameter uncertainties are allowed for, the form of the recurrence distribution is difficult to establish. This is the case, for instance, of the Corinth Gulf Fault System (CGFS), for which documents about strong earthquakes exist for at least 2000 years, although they can be considered complete for M ≥ 6.0 only for the latest 300 years, during which only a few characteristic earthquakes are reported for individual fault segments. The use of a physics-based earthquake simulator has allowed the production of catalogs lasting 100,000 years and containing more than 500,000 events of magnitude ≥ 4.0. The main features of our simulation algorithm are (1) an average slip rate released by earthquakes for every single segment in the investigated fault system, (2) heuristic procedures for rupture growth and arrest, leading to a self-organized earthquake magnitude distribution, (3) the interaction between earthquake sources, and (4) the effect of minor earthquakes in redistributing stress. The application of our simulation algorithm to the CGFS has shown realistic features in the time, space, and magnitude behavior of the seismicity. These features include long-term periodicity of strong earthquakes, short-term clustering of both strong and smaller events, and a realistic earthquake magnitude distribution departing from the Gutenberg-Richter distribution in the higher-magnitude range.
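For reference, the Gutenberg-Richter distribution that the simulated catalog is compared against can be sampled directly by inverse-CDF. The sketch below (not the authors' simulator; b = 1.0 is an assumed value) draws a catalog of the stated size above the M ≥ 4.0 completeness threshold:

```python
import numpy as np

# Gutenberg-Richter sampling sketch: N(>=M) ~ 10^(-b*M), truncated at
# m_min = 4.0. Inverting the CDF gives M = m_min - log10(u) / b for
# uniform u in (0, 1). b = 1.0 is an illustrative assumption.
rng = np.random.default_rng(42)
b, m_min = 1.0, 4.0
u = rng.random(500_000)              # catalog size matching the abstract
mags = m_min - np.log10(u) / b

# One unit of magnitude should reduce the exceedance count by ~10^b.
n_ge5 = np.count_nonzero(mags >= 5.0)
n_ge6 = np.count_nonzero(mags >= 6.0)
```

A physics-based simulator's departure from this law at high magnitudes shows up as an excess or deficit relative to this reference ratio.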
Impact of dust deposition on the albedo of Vatnajökull ice cap, Iceland
NASA Astrophysics Data System (ADS)
Wittmann, Monika; Dorothea Groot Zwaaftink, Christine; Steffensen Schmidt, Louise; Guðmundsson, Sverrir; Pálsson, Finnur; Arnalds, Olafur; Björnsson, Helgi; Thorsteinsson, Throstur; Stohl, Andreas
2017-03-01
Deposition of small amounts of airborne dust on glaciers causes positive radiative forcing and enhanced melting due to the reduction of surface albedo. To study the effects of dust deposition on the mass balance of Brúarjökull, an outlet glacier of the largest ice cap in Iceland, Vatnajökull, a study of dust deposition events in the year 2012 was carried out. The dust-mobilisation module FLEXDUST was used to calculate spatio-temporally resolved dust emissions from Iceland and the dispersion model FLEXPART was used to simulate atmospheric dust dispersion and deposition. We used albedo measurements at two automatic weather stations on Brúarjökull to evaluate the dust impacts. Both stations are situated in the accumulation area of the glacier, but the lower station is close to the equilibrium line. For this site ( ˜ 1210 m a.s.l.), the dispersion model produced 10 major dust deposition events and a total annual deposition of 20.5 g m-2. At the station located higher on the glacier ( ˜ 1525 m a.s.l.), the model produced nine dust events, with one single event causing ˜ 5 g m-2 of dust deposition and a total deposition of ˜ 10 g m-2 yr-1. The main dust source was found to be the Dyngjusandur floodplain north of Vatnajökull; northerly winds prevailed 80 % of the time at the lower station when dust events occurred. In all of the simulated dust events, a corresponding albedo drop was observed at the weather stations. The influence of the dust on the albedo was estimated using the regional climate model HIRHAM5 to simulate the albedo of a clean glacier surface without dust. By comparing the measured albedo to the modelled albedo, we determine the influence of dust events on the snow albedo and the surface energy balance. We estimate that the dust deposition caused an additional 1.1 m w.e. (water equivalent) of snowmelt (or 42 % of the 2.8 m w.e. total melt) compared to a hypothetical clean glacier surface at the lower station, and 0.6 m w.e. more melt (or 38 % of the 1.6 m w.e. melt in total) at the station located further upglacier. Our findings show that dust has a strong influence on the mass balance of glaciers in Iceland.
Influence of ionotropic receptor location on their dynamics at glutamatergic synapses.
Allam, Sushmita L; Bouteiller, Jean-Marie C; Hu, Eric; Greget, Renaud; Ambert, Nicolas; Bischoff, Serge; Baudry, Michel; Berger, Theodore W
2012-01-01
In this paper we study the effects of the location of ionotropic receptors, especially AMPA and NMDA receptors, on their function at excitatory glutamatergic synapses. Since few computational models allow evaluation of the influence of receptor location on state transitions and receptor dynamics, we present an elaborate computational model of a glutamatergic synapse that takes into account detailed parametric models of ionotropic receptors along with glutamate diffusion within the synaptic cleft. Our simulation results underscore the importance of the widespread distribution of AMPA receptors, which is required to avoid massive desensitization of these receptors following a single glutamate release event, while NMDA receptor location is potentially optimal relative to the glutamate release site, thus emphasizing the contribution of location-dependent effects of the two major ionotropic receptors to synaptic efficacy.
NASA Astrophysics Data System (ADS)
Chen, R. M.; Diggins, Z. J.; Mahatme, N. N.; Wang, L.; Zhang, E. X.; Chen, Y. P.; Zhang, H.; Liu, Y. N.; Narasimham, B.; Witulski, A. F.; Bhuva, B. L.; Fleetwood, D. M.
2017-08-01
The single-event sensitivity of bulk 40-nm sequential circuits is investigated as a function of temperature and supply voltage. An overall increase in SEU cross section versus temperature is observed at relatively high supply voltages. However, at low supply voltages, there is a threshold temperature beyond which the SEU cross section decreases with further increases in temperature. Single-event transient induced errors in flip-flops also increase versus temperature at relatively high supply voltages and are more sensitive to temperature variation than those caused by single-event upsets.
Can discrete event simulation be of use in modelling major depression?
Le Lay, Agathe; Despiegel, Nicolas; François, Clément; Duru, Gérard
2006-01-01
Background Depression is among the major contributors to worldwide disease burden and adequate modelling requires a framework designed to depict real world disease progression as well as its economic implications as closely as possible. Objectives In light of the specific characteristics associated with depression (multiple episodes at varying intervals, impact of disease history on course of illness, sociodemographic factors), our aim was to clarify to what extent "Discrete Event Simulation" (DES) models provide methodological benefits in depicting disease evolution. Methods We conducted a comprehensive review of published Markov models in depression and identified potential limits to their methodology. A model based on DES principles was developed to investigate the benefits and drawbacks of this simulation method compared with Markov modelling techniques. Results The major drawback to Markov models is that they may not be suitable for tracking patients' disease history properly, unless the analyst defines multiple health states, which may lead to intractable situations. They are also too rigid to take into consideration multiple patient-specific sociodemographic characteristics in a single model. To do so would also require defining multiple health states, which would render the analysis entirely too complex. We show that DES resolves these weaknesses and that its flexibility allows patients with differing attributes to move from one event to another in sequential order while simultaneously taking into account important risk factors such as age, gender, disease history and patients' attitude towards treatment, together with any disease-related events (adverse events, suicide attempts, etc.). Conclusion DES modelling appears to be an accurate, flexible and comprehensive means of depicting disease progression compared with conventional simulation methodologies. Its use in analysing recurrent and chronic diseases appears particularly useful compared with Markov processes.
PMID:17147790
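The core DES idea described above, patients with individual attributes moving between time-stamped events in sequential order, can be sketched in a few lines. The disease parameters below are invented placeholders, not values from the review:

```python
import heapq
import random

# Minimal discrete-event-simulation (DES) sketch: each patient carries
# state (here, episode count) that alters the timing of future events,
# which is exactly what a memoryless Markov chain cannot easily track.
random.seed(0)

def simulate_patient(horizon_years=10.0):
    """Return the number of depressive episodes for one simulated patient."""
    # Event queue of (time, kind) tuples, popped in time order.
    events = [(random.expovariate(1.0 / 2.0), "episode")]  # first episode
    episodes = 0
    while events:
        t, kind = heapq.heappop(events)
        if t > horizon_years:
            break
        if kind == "episode":
            episodes += 1
            # Disease history matters: relapse hazard grows with each episode.
            gap = random.expovariate(episodes / 3.0)
            heapq.heappush(events, (t + gap, "episode"))
    return episodes

counts = [simulate_patient() for _ in range(1000)]
```

A Markov formulation of the same history-dependence would need a separate health state per episode count, which is the intractability the authors describe.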
Single-Event Rapid Word Collection Workshops: Efficient, Effective, Empowering
ERIC Educational Resources Information Center
Boerger, Brenda H.; Stutzman, Verna
2018-01-01
In this paper we describe single-event Rapid Word Collection (RWC) workshop results in 12 languages, and compare these results to fieldwork lexicons collected by other means. We show that this methodology of collecting words by semantic domain through community engagement yields more words in less time than conventional collection methods.…
NASA Astrophysics Data System (ADS)
Poursartip, B.
2015-12-01
Seismic hazard assessment to predict the behavior of infrastructure subjected to earthquakes relies on numerical simulation of ground motion, because analytical solutions for seismic waves are limited to only a few simple geometries. Recent advances in numerical methods and computer architectures make it ever more practical to reliably and quickly obtain the near-surface response to seismic events. The key motivation stems from the need to assess the performance of sensitive components of the civil infrastructure (nuclear power plants, bridges, lifelines, etc.) when subjected to realistic scenarios of seismic events. We discuss an integrated approach that deploys best-practice tools for simulating seismic events in arbitrarily heterogeneous formations, while also accounting for topography. Specifically, we describe an explicit forward wave solver based on a hybrid formulation that couples a single-field formulation for the computational domain with an unsplit mixed-field formulation for the Perfectly-Matched-Layers (PMLs and/or M-PMLs) used to limit the computational domain. Due to the material heterogeneity and the contrasting discretization needs it imposes, an adaptive time solver is adopted. We use a Runge-Kutta-Fehlberg time-marching scheme that optimally adjusts the time step so that the local truncation error stays below a predefined tolerance. We use spectral elements for spatial discretization, and the Domain Reduction Method together with the double-couple method to allow for the efficient prescription of the input seismic motion. Of particular interest to this development is the study of the effects idealized topographic features have on the surface motion when compared against motion results that are based on a flat-surface assumption. 
We discuss the components of the integrated approach we followed, and report the results of parametric studies in two and three dimensions, for various idealized topographic features, which show motion amplification that depends, as expected, on the relation between the topographic feature's characteristics and the dominant wavelength. Lastly, we report results involving three-dimensional simulations.
Numerical Simulations of the 1991 Limón Tsunami, Costa Rica Caribbean Coast
NASA Astrophysics Data System (ADS)
Chacón-Barrantes, Silvia; Zamora, Natalia
2017-08-01
The second largest recorded tsunami along the Caribbean margin of Central America occurred 25 years ago. On April 22nd, 1991, an earthquake with magnitude Mw 7.6 ruptured along the thrust faults that form the North Panamá Deformed Belt (NPDB). The earthquake triggered a tsunami that affected the Caribbean coasts of Costa Rica and Panamá within a few minutes, causing two casualties. These are the only deaths caused by a tsunami in Costa Rica. Coseismic uplift up to 1.6 m and runup values larger than 2 m were measured at some coastal sites. Here, we consider three solutions for the seismic source as initial conditions to model the tsunami, each considering a single rupture plane. We performed numerical modeling of the tsunami propagation and runup using the NEOWAVE numerical model (Yamazaki et al. in Int J Numer Methods Fluids 67:2081-2107, 2010, doi: 10.1002/fld.2485 ) on a system of nested grids from the entire Caribbean Sea to Limón city. The modeled surface deformation and tsunami runup agreed with the measured data at most of the coastal sites, with one preferred model that fits the field data. The model results are useful for determining how the 1991 tsunami could have affected regions where tsunami records were not preserved and for simulating the effects of the coastal surface deformations as a buffer to the tsunami. We also performed tsunami modeling to simulate the consequences if a similar event with a larger magnitude of Mw 7.9 were to occur offshore the southern Costa Rican Caribbean coast. Such an event would generate maximum wave heights of more than 5 m, showing that Limón and northwestern Panamá coastal areas are exposed to moderate-to-large tsunamis. These simulations considering historical events and maximum credible scenarios can be useful for hazard assessment and also as part of studies leading to tsunami evacuation maps and mitigation plans, even though that is not the scope of this paper.
Using Adjoint Methods to Improve 3-D Velocity Models of Southern California
NASA Astrophysics Data System (ADS)
Liu, Q.; Tape, C.; Maggi, A.; Tromp, J.
2006-12-01
We use adjoint methods popular in climate and ocean dynamics to calculate Fréchet derivatives for tomographic inversions in southern California. The Fréchet derivative of an objective function χ(m), where m denotes the Earth model, may be written in the generic form δχ = ∫ K_m(x) δln m(x) d³x, where δln m = δm/m denotes the relative model perturbation. For illustrative purposes, we construct the 3-D finite-frequency banana-doughnut kernel K_m, corresponding to the misfit of a single traveltime measurement, by simultaneously computing the 'adjoint' wave field s† forward in time and reconstructing the regular wave field s backward in time. The adjoint wave field is produced by using the time-reversed velocity at the receiver as a fictitious source, while the regular wave field is reconstructed on the fly by propagating the last frame of the wave field saved by a previous forward simulation backward in time. The approach is based upon the spectral-element method, and only two simulations are needed to produce density, shear-wave, and compressional-wave sensitivity kernels. This method is applied to the SCEC southern California velocity model. Various density, shear-wave, and compressional-wave sensitivity kernels are presented for different phases in the seismograms. We also generate 'event' kernels for Pnl, S and surface waves, which are the Fréchet kernels of misfit functions that measure the P, S or surface wave traveltime residuals at all the receivers simultaneously for one particular event. Effectively, an event kernel is a sum of weighted Fréchet kernels, with weights determined by the associated traveltime anomalies. By the nature of the 3-D simulation, every event kernel is also computed based upon just two simulations, i.e., its construction costs the same amount of computation time as an individual banana-doughnut kernel. 
One can think of the sum of the event kernels for all available earthquakes, called the 'misfit' kernel, as a graphical representation of the gradient of the misfit function. With the capability of computing both the value of the misfit function and its gradient, which assimilates the traveltime anomalies, we are ready to use a non-linear conjugate gradient algorithm to iteratively improve velocity models of southern California.
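The "event kernel as a weighted sum of Fréchet kernels" statement above is a simple linear-algebra fact, illustrated below with random stand-in kernels on a 1-D grid (the actual banana-doughnut kernels live on a 3-D spectral-element mesh):

```python
import numpy as np

# Sketch: an event kernel is the sum of per-measurement Frechet kernels
# K_i(x) weighted by the traveltime anomalies dT_i. The kernels here are
# random placeholders, not output of a wave-equation solver.
rng = np.random.default_rng(7)
n_receivers, nx = 5, 100
K = rng.normal(size=(n_receivers, nx))   # K_i(x) sampled on a 1-D grid
dT = rng.normal(size=n_receivers)        # traveltime anomalies at receivers

# Weighted sum over receivers, vectorized:
event_kernel = (dT[:, None] * K).sum(axis=0)

# Same thing receiver-by-receiver, to make the linearity explicit:
manual = sum(dT[i] * K[i] for i in range(n_receivers))
```

Summing event kernels over all earthquakes in the same way yields the misfit kernel, i.e. the gradient used by the conjugate gradient iteration.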
Evaluation of commercial ADC radiation tolerance for accelerator experiments
Chen, K.; Chen, H.; Kierstead, J.; ...
2015-08-17
Electronic components used in high energy physics experiments are subjected to a radiation background composed of high energy hadrons, mesons and photons. These particles can induce permanent and transient effects that affect normal device operation. Ionizing dose and displacement damage can cause chronic damage which disables the device permanently. Transient effects, or single event effects, are in general recoverable, with time intervals that depend on the nature of the failure. The magnitude of these effects is technology dependent, with feature size being one of the key parameters. Analog to digital converters are components that are frequently used in detector front-end electronics, generally placed as close as possible to the sensing elements to maximize signal fidelity. We report on radiation effects tests conducted on 17 commercially available analog to digital converters and extensive single event effect measurements on specific twelve- and fourteen-bit ADCs that presented high tolerance to ionizing dose. We discuss mitigation strategies for single event effects (SEE) for their use in the Large Hadron Collider environment.
Elliott, Elizabeth J.; Yu, Sungduk; Kooperman, Gabriel J.; ...
2016-05-01
The sensitivities of simulated mesoscale convective systems (MCSs) in the central U.S. to microphysics and grid configuration are evaluated here in a global climate model (GCM) that also permits global-scale feedbacks and variability. Since conventional GCMs do not simulate MCSs, studying their sensitivities in a global framework useful for climate change simulations has not previously been possible. To date, MCS sensitivity experiments have relied on controlled cloud resolving model (CRM) studies with limited domains, which avoid internal variability and neglect feedbacks between local convection and larger-scale dynamics. However, recent work with superparameterized (SP) GCMs has shown that eastward propagating MCS-like events are captured when embedded CRMs replace convective parameterizations. This study uses a SP version of the Community Atmosphere Model version 5 (SP-CAM5) to evaluate MCS sensitivities, applying an objective empirical orthogonal function algorithm to identify MCS-like events, and harmonizing composite storms to account for seasonal and spatial heterogeneity. A five-summer control simulation is used to assess the magnitude of internal and interannual variability relative to 10 sensitivity experiments with varied CRM parameters, including ice fall speed, one-moment and two-moment microphysics, and grid spacing. MCS sensitivities were found to be subtle with respect to internal variability, and indicate that ensembles of over 100 storms may be necessary to detect robust differences in SP-GCMs. Furthermore, these results emphasize that the properties of MCSs can vary widely across individual events, and improving their representation in global simulations with significant internal variability may require comparison to long (multidecadal) time series of observed events rather than single season field campaigns.
Kuss, O
2015-03-30
Meta-analyses with rare events, especially those that include studies with no event in one ('single-zero') or even both ('double-zero') treatment arms, are still a statistical challenge. In the case of double-zero studies, researchers generally delete these studies or use continuity corrections to avoid them. A number of arguments against both options have been given, and statistical methods that use the information from double-zero studies without continuity corrections have been proposed. In this paper, we collect them and compare them by simulation. This simulation study tries to mirror real-life situations as closely as possible by deriving true underlying parameters from empirical data on actually performed meta-analyses. It is shown that, for each of the commonly encountered effect estimators, valid statistical methods are available that use the information from double-zero studies without continuity corrections. Interestingly, all of them are truly random effects models, so even the current standard method for very sparse data recommended by the Cochrane Collaboration, the Yusuf-Peto odds ratio, can be improved upon. For actual analysis, we recommend using beta-binomial regression methods to arrive at summary estimates for the odds ratio, the relative risk, or the risk difference. Methods that ignore information from double-zero studies or use continuity corrections should no longer be used. We illustrate the situation with an example where the original analysis ignores 35 double-zero studies, and a superior analysis discovers a clinically relevant advantage of off-pump surgery in coronary artery bypass grafting. Copyright © 2014 John Wiley & Sons, Ltd.
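A small illustration of why the beta-binomial model needs no continuity correction: a zero-event arm receives a proper, nonzero likelihood. The sketch below implements the beta-binomial pmf from first principles (the sample size and Beta parameters are invented, not from the paper):

```python
from math import exp, lgamma

# Beta-binomial pmf: P(k | n, a, b) = C(n,k) * B(k+a, n-k+b) / B(a, b),
# computed in log space via lgamma for numerical stability.
def log_beta(a, b):
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def betabinom_pmf(k, n, a, b):
    log_comb = lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)
    return exp(log_comb + log_beta(k + a, n - k + b) - log_beta(a, b))

# A double-zero arm of 50 patients still contributes likelihood mass:
p_zero = betabinom_pmf(0, 50, 0.5, 20.0)
# Sanity check: the pmf sums to one over all possible event counts.
total = sum(betabinom_pmf(k, 50, 0.5, 20.0) for k in range(51))
```

Since P(k = 0) is strictly between 0 and 1, zero-event studies shift the fitted parameters rather than being discarded or padded with 0.5-event corrections.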
Real, Ruben G. L.; Kotchoubey, Boris; Kübler, Andrea
2014-01-01
This study aimed at evaluating the performance of the Studentized Continuous Wavelet Transform (t-CWT) as a method for the extraction and assessment of event-related brain potentials (ERP) in data from a single subject. Sensitivity, specificity, positive (PPV) and negative predictive values (NPV) of the t-CWT were assessed and compared to a variety of competing procedures using simulated EEG data at six low signal-to-noise ratios. Results show that the t-CWT combines high sensitivity and specificity with favorable PPV and NPV. Applying the t-CWT to authentic EEG data obtained from 14 healthy participants confirmed its high sensitivity. The t-CWT may thus be well suited for the assessment of weak ERPs in single-subject settings. PMID:25309308
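The studentization at the heart of a t-CWT-style statistic can be sketched as follows. This is a simplified single-scale illustration on synthetic data with a plain Ricker wavelet, not the authors' full multi-scale implementation:

```python
import numpy as np

def ricker(n_points, width):
    """Ricker (Mexican hat) wavelet sampled on n_points."""
    t = np.linspace(-4 * width, 4 * width, n_points)
    return (1.0 - (t / width) ** 2) * np.exp(-0.5 * (t / width) ** 2)

def t_cwt_stat(trials, width=5, n_points=31):
    """One-sample t statistic of a wavelet coefficient across trials.
    trials: (n_trials, n_samples) array of single-trial EEG epochs."""
    w = ricker(n_points, width)
    coeffs = np.array([np.convolve(tr, w, mode="same") for tr in trials])
    mean = coeffs.mean(axis=0)
    sem = coeffs.std(axis=0, ddof=1) / np.sqrt(len(trials))
    return mean / np.maximum(sem, 1e-12)

rng = np.random.default_rng(0)
erp = np.exp(-0.5 * ((np.arange(200) - 100) / 10.0) ** 2)  # ERP bump at 100
trials = erp + rng.normal(0.0, 1.0, size=(100, 200))       # low-SNR epochs
t = t_cwt_stat(trials)
peak = int(np.argmax(np.abs(t)))
print(peak)  # lands near sample 100, where the simulated ERP peaks
```

The t map concentrates statistical evidence at latencies where the wavelet matches a reproducible deflection, which is what gives the method its single-subject sensitivity.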
Liang, Zheng; Li, Yajiao; Li, Peng; Jiang, Chunbo
2018-01-01
Excessive phosphorus (P) contributes to eutrophication by degrading water quality and limiting human use of water resources. Identifying economical and convenient methods to control soluble reactive phosphorus (SRP) pollution in urban runoff is the key point of rainwater management strategies. Through three series of tests involving influencing factors, continuous operation and intermittent operation, this study explored the purification effects of bioretention tanks under different experimental conditions: nine intermittent tests and a single-field continuous test with three groups of different fillers (fly ash mixed with sand, blast furnace slag, and soil), and eight intermittent tests with a single filler (blast furnace slag mixed with sand). Among the three filler combinations studied, fly ash mixed with sand achieved the best pollution reduction efficiency. The setting of the submerged zone exerted minimal influence on the P removal of the three filler combinations. An extension of the dry period slightly promoted the P purification effect. The combination of fly ash mixed with sand demonstrated a positive purification effect on SRP during both short- and long-term simulated rainfall. Blast furnace slag also presented a positive purification effect in the short term, although its continuous purification effect on SRP was poor in the long term. The purification ability of soil in both the short and long terms was weak. Under intermittent operation across different seasons, SRP removal was unstable and effluent concentration processes differed. The purification effect of the bioretention system on SRP was predicted through partial least squares (PLS) regression modeling. The event mean concentration removal of SRP was positively related to the adsorption capacity of the filler and the rainfall interval time, and negatively related to the submerged zone, influent concentration and volume. PMID:29742120
Abuja, P M; Albertini, R; Esterbauer, H
1997-06-01
Kinetic simulation can help obtain deeper insight into the molecular mechanisms of complex processes, such as lipid peroxidation (LPO) in low-density lipoprotein (LDL). We have previously set up a single-compartment model of this process, initiated with radicals generated externally at a constant rate, to show the interplay of radical scavenging and chain propagation. Here we focus on the initiating events, substituting the constant rate of initiation (Ri) with redox cycling of Cu2+ and Cu+. Our simulation reveals that early events in copper-mediated LDL oxidation include (1) the reduction of Cu2+ by tocopherol (TocOH), which generates the tocopheroxyl radical (TocO.); (2) the fate of TocO., which is either recycled or recombines with the lipid peroxyl radical (LOO.); and (3) the reoxidation of Cu+ by lipid hydroperoxide, which results in alkoxyl radical (LO.) formation. TocO., LOO., and LO. can thus be regarded as primordial radicals, and the sum of their formation rates is the total rate of initiation, Ri. As information about these initiating events cannot be obtained experimentally, the whole model was validated experimentally by comparing LDL oxidation in the presence and absence of bathocuproine against the behavior predicted by simulation. Simulation predicts that Ri decreases by 2 orders of magnitude during the lag time. This has important consequences for the estimation of oxidation resistance in copper-mediated LDL oxidation: after consumption of tocopherol, even small amounts of antioxidants may prolong the lag phase for a considerable time.
NASA Astrophysics Data System (ADS)
Surti, S.; Karp, J. S.
2018-03-01
The advent of silicon photomultipliers (SiPMs) has introduced the possibility of increased detector performance in commercial whole-body PET scanners. The primary advantage of these photodetectors is the ability to couple a single SiPM channel directly to a single pixel of PET scintillator that is typically 4 mm wide (one-to-one coupled detector design). We performed simulation studies to evaluate the impact of three different event positioning algorithms in such detectors: (i) a weighted energy centroid positioning (Anger logic), (ii) identifying the crystal with maximum energy deposition (1st max crystal), and (iii) identifying the crystal with the second highest energy deposition (2nd max crystal). Detector simulations performed with LSO crystals indicate reduced positioning errors when using the 2nd max crystal positioning algorithm. These studies are performed over a range of crystal cross-sections varying from 1 × 1 mm2 to 4 × 4 mm2 as well as crystal thickness of 1 cm to 3 cm. System simulations were performed for a whole-body PET scanner (85 cm ring diameter) with a long axial FOV (70 cm long) and show an improvement in reconstructed spatial resolution for a point source when using the 2nd max crystal positioning algorithm. Finally, we observe a 30-40% gain in contrast recovery coefficient values for 1 and 0.5 cm diameter spheres when using the 2nd max crystal positioning algorithm compared to the 1st max crystal positioning algorithm. These results show that there is an advantage to implementing the 2nd max crystal positioning algorithm in a new generation of PET scanners using one-to-one coupled detector design with lutetium based crystals, including LSO, LYSO or scintillators that have similar density and effective atomic number as LSO.
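The three positioning rules compared in the study can be sketched on a toy one-dimensional array of per-crystal energy deposits; the values below are invented to mimic inter-crystal scatter, where the photon deposits most of its energy in a neighbour of the crystal it first entered:

```python
import numpy as np

deposits = np.array([0.0, 30.0, 340.0, 120.0, 21.0])  # hypothetical event, keV

def anger_logic(e):
    """Energy-weighted centroid (continuous position, in crystal units)."""
    idx = np.arange(len(e))
    return float(np.sum(idx * e) / np.sum(e))

def first_max_crystal(e):
    """Crystal with the largest energy deposit."""
    return int(np.argmax(e))

def second_max_crystal(e):
    """Crystal with the second-largest deposit -- per the study, often
    closer to the true entrance crystal when Compton scatter moves the
    energy maximum into a neighbour."""
    return int(np.argsort(e)[-2])

print(round(anger_logic(deposits), 2),
      first_max_crystal(deposits),
      second_max_crystal(deposits))  # 2.26 2 3
```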
Green, Linda E; Dinh, Tuan A; Hinds, David A; Walser, Bryan L; Allman, Richard
2014-04-01
Tamoxifen therapy reduces the risk of breast cancer but increases the risk of serious adverse events including endometrial cancer and thromboembolic events. The cost effectiveness of using a commercially available breast cancer risk assessment test (BREVAGen™) to inform the decision of which women should undergo chemoprevention by tamoxifen was modeled in a simulated population of women who had undergone biopsies but had no diagnosis of cancer. A continuous time, discrete event, mathematical model was used to simulate a population of white women aged 40-69 years, who were at elevated risk for breast cancer because of a history of benign breast biopsy. Women were assessed for clinical risk of breast cancer using the Gail model and for genetic risk using a panel of seven common single nucleotide polymorphisms. We evaluated the cost effectiveness of using genetic risk together with clinical risk, instead of clinical risk alone, to determine eligibility for 5 years of tamoxifen therapy. In addition to breast cancer, the simulation included health states of endometrial cancer, pulmonary embolism, deep-vein thrombosis, stroke, and cataract. Estimates of costs in 2012 US dollars were based on Medicare reimbursement rates reported in the literature and utilities for modeled health states were calculated as an average of utilities reported in the literature. A 50-year time horizon was used to observe lifetime effects including survival benefits. For those women at intermediate risk of developing breast cancer (1.2-1.66 % 5-year risk), the incremental cost-effectiveness ratio for the combined genetic and clinical risk assessment strategy over the clinical risk assessment-only strategy was US$47,000, US$44,000, and US$65,000 per quality-adjusted life-year gained, for women aged 40-49, 50-59, and 60-69 years, respectively (assuming a price of US$945 for genetic testing). 
Results were sensitive to assumptions about patient adherence, utility of life while taking tamoxifen, and cost of genetic testing. From the US payer's perspective, the combined genetic and clinical risk assessment strategy may be a moderately cost-effective alternative to using clinical risk alone to guide chemoprevention recommendations for women at intermediate risk of developing breast cancer.
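The headline figures are incremental cost-effectiveness ratios, which reduce to a simple quotient; the sketch below uses invented per-patient totals chosen only to reproduce the order of magnitude quoted for ages 40-49:

```python
def icer(cost_new, qaly_new, cost_old, qaly_old):
    """Incremental cost per quality-adjusted life-year gained."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

# Hypothetical per-patient discounted totals for the two strategies:
combined = {"cost": 12_940.0, "qaly": 14.22}   # genetic + clinical risk
clinical = {"cost": 12_000.0, "qaly": 14.20}   # clinical risk alone

ratio = icer(combined["cost"], combined["qaly"],
             clinical["cost"], clinical["qaly"])
print(round(ratio))  # ~47,000 USD per QALY gained
```

Because the QALY difference in the denominator is small, the ratio is sensitive to assumptions such as adherence and utility while on tamoxifen, as the abstract notes.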
Solar energetic particle propagation of solar flare events in the 24th solar cycle.
NASA Astrophysics Data System (ADS)
Paluk, P.; Khumlumlert, T.; Kanlayaprasit, N.; Aiemsa-ad, N.
2017-09-01
The Sun is currently in its 24th solar cycle. The peak of a solar cycle corresponds to a maximum in solar activity, one form of which is the solar flare: a violent explosion in the solar atmosphere that releases high-energy ions from the Sun into the interplanetary medium. Solar energetic particles, or solar cosmic rays, have important effects at the Earth, such as disrupting radio communications. We analyze the particle transport of the solar flare events of August 9, 2011, January 27, 2012, and November 3, 2013, in the 24th solar cycle. The particle data for each solar flare were obtained from the SIS instrument on the ACE spacecraft. We simulate particle transport with the equations of Ruffolo (1995, 1998), solving the transport equation numerically by finite differences. We determine the injection duration from the Sun to the Earth by fitting a piecewise linear function to match the simulation results to the spacecraft particle data. These solar flare events occurred on the western side of the Sun, at N18W68, N33W85, and S12W16. We found that the mean free path is roughly constant for a single event, implying that the interplanetary scattering is approximately energy independent, although the level of scattering varies with time. The injection duration decreases with increasing energy. The results at the highest and lowest energies varied more, because of space-environment effects and the small number of detected counts. The high mean free path of the high-energy particles indicates efficient transport of particles along the varying magnetic field lines. Despite their violence, these solar flares did not appreciably disturb the Earth's magnetic field, with Kp index below 3.
NASA Technical Reports Server (NTRS)
Horst, Richard L.; Mahaffey, David L.; Munson, Robert C.
1989-01-01
The present Phase 2 small business innovation research study was designed to address issues related to scalp-recorded event-related potential (ERP) indices of mental workload and to transition this technology from the laboratory to cockpit simulator environments for use as a systems engineering tool. The project involved five main tasks: (1) Two laboratory studies confirmed the generality of the ERP indices of workload obtained in the Phase 1 study and revealed two additional ERP components related to workload. (2) A task analysis of flight scenarios and pilot tasks in the Advanced Concepts Flight Simulator (ACFS) defined cockpit events (i.e., displays, messages, alarms) that would be expected to elicit ERPs related to workload. (3) Software was developed to support ERP data analysis. An existing ARD-proprietary package of ERP data analysis routines was upgraded, new graphics routines were developed to enhance interactive data analysis, and routines were developed to compare alternative single-trial analysis techniques using simulated ERP data. (4) Working in conjunction with NASA Langley research scientists and simulator engineers, preparations were made for an ACFS validation study of ERP measures of workload. (5) A design specification was developed for a general purpose, computerized, workload assessment system that can function in simulators such as the ACFS.
Revealing the Effects of Nanoscale Membrane Curvature on Lipid Mobility
Kabbani, Abir Maarouf; Woodward, Xinxin
2017-01-01
Recent advances in nanoengineering and super-resolution microscopy have enabled new capabilities for creating and observing membrane curvature. However, the effects of curvature on single-lipid diffusion have yet to be revealed. The simulations presented here describe the capabilities of varying experimental methods for revealing the effects of nanoscale curvature on single-molecule mobility. Traditionally, lipid mobility is revealed through fluorescence recovery after photobleaching (FRAP), fluorescence correlation spectroscopy (FCS), and single particle tracking (SPT). However, these techniques vary greatly in their ability to detect the effects of nanoscale curvature on lipid behavior, since FRAP and FCS depend on diffraction-limited illumination and detection. A simulation of FRAP shows minimal effects on lipid diffusion due to a 50 nm radius membrane bud: throughout the stages of the budding process, FRAP detected minimal changes in lipid recovery time due to curvature relative to a flat membrane. Simulated FCS demonstrated small effects due to a 50 nm radius membrane bud that were more apparent with curvature-dependent lipid mobility changes. However, SPT achieves a sub-diffraction-limited resolution of membrane budding and lipid mobility through the identification of single-lipid positions with ≤15 nm spatial and ≤20 ms temporal resolution. By mapping the single-lipid step lengths to locations on the membrane, the effects of membrane topography and curvature could be correlated to the effective membrane viscosity. Single-fluorophore localization techniques, such as SPT, can detect membrane curvature and its effects on lipid behavior. These simulations and discussion provide a guideline for optimizing experimental procedures for revealing the effects of curvature on lipid mobility and effective local membrane viscosity. PMID:29057801
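The SPT analysis described above rests on relating step lengths to local mobility. A minimal sketch, with assumed values for the diffusion coefficient and the 20 ms frame time, recovers D from the mean squared step length (which equals 4*D*dt for 2-D Brownian motion):

```python
import numpy as np

rng = np.random.default_rng(1)
D_true = 0.5          # um^2/s, assumed for illustration
dt = 0.020            # s per frame (the ~20 ms temporal resolution quoted)
n_steps = 100_000

sigma = np.sqrt(2 * D_true * dt)              # per-axis step std deviation
steps = rng.normal(0.0, sigma, size=(n_steps, 2))
msd = np.mean(np.sum(steps**2, axis=1))       # mean squared step length
D_est = msd / (4 * dt)                        # invert <r^2> = 4 D dt
print(round(D_est, 2))  # close to 0.5
```

Binning such step lengths by membrane location is what lets SPT map position-dependent effective viscosity on curved topography.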
2010-01-01
gross vehicle response; and the effects of blast mitigation material, restraint system, and seat design on the loads developed on the members of an...occupant. A Blast Event Simulation sysTem (BEST) has been developed for facilitating the easy use of the LS-DYNA solvers for conducting a...et al, 1999] for modeling blast events. In this paper the Eulerian solver of LS-DYNA is employed for simulating the soil – explosive – air
Cheng, Ji; Pullenayegum, Eleanor; Marshall, John K; Thabane, Lehana
2016-01-01
Objectives: There is no consensus on whether studies with no observed events in either the treatment or the control arm, the so-called both-armed zero-event studies, should be included in a meta-analysis of randomised controlled trials (RCTs). Current analytic approaches handle them differently depending on the choice of effect measure and the authors' discretion. Our objective is to evaluate, through a simulation study, the impact of including or excluding both-armed zero-event (BA0E) studies in meta-analyses of RCTs with rare outcome events. Method: We simulated 2500 data sets for different scenarios, varying the baseline event rate, the treatment effect, the number of patients per trial, and the between-study variance. We evaluated the performance of commonly used pooling methods in classical meta-analysis (Peto, Mantel-Haenszel with fixed-effects and random-effects models, and inverse variance with fixed-effects and random-effects models) using bias, root mean square error, length of the 95% CI and coverage. Results: The overall performance of including or excluding BA0E studies varied with the magnitude of the true treatment effect. Including BA0E studies introduced very little bias, decreased the mean square error, narrowed the 95% CI and increased the coverage when no true treatment effect existed. However, when a true treatment effect existed, excluding BA0E studies led to smaller bias than including them. Among all evaluated methods, the Peto method excluding BA0E studies gave the least biased results when a true treatment effect existed. Conclusions: We recommend including BA0E studies when treatment effects are unlikely, but excluding them when there is a decisive treatment effect.
Providing results of including and excluding BA0E studies to assess the robustness of the pooled estimated effect is a sensible way to communicate the results of a meta-analysis when the treatment effects are unclear. PMID:27531725
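A small sketch of the Peto one-step method referenced above makes the BA0E issue concrete: a double-zero study contributes zero to both the observed-minus-expected sum and the hypergeometric variance sum, so it is excluded by construction. The study counts below are invented:

```python
import math

def peto_pooled_or(tables):
    """Peto one-step pooled OR; tables are (events_trt, n_trt, events_ctl, n_ctl)."""
    sum_oe, sum_v = 0.0, 0.0
    for a, n1, c, n2 in tables:
        n = n1 + n2
        events = a + c
        if events == 0 or events == n:
            continue  # no information under the Peto model (O-E = V = 0)
        expected = events * n1 / n
        v = events * (n - events) * n1 * n2 / (n**2 * (n - 1))
        sum_oe += a - expected
        sum_v += v
    return math.exp(sum_oe / sum_v)

# Two informative studies plus one double-zero study:
studies = [(1, 100, 4, 100), (0, 50, 2, 50), (0, 80, 0, 80)]
print(round(peto_pooled_or(studies), 3))
```

Dropping the double-zero study leaves the estimate unchanged, which is exactly the behavior whose consequences the simulation study quantifies.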
Radiation induced leakage due to stochastic charge trapping in isolation layers of nanoscale MOSFETs
NASA Astrophysics Data System (ADS)
Zebrev, G. I.; Gorbunov, M. S.; Pershenkov, V. S.
2008-03-01
The sensitivity of sub-100 nm devices to microdose effects, which can be considered an intermediate case between cumulative total dose effects and single event errors, is investigated. A detailed study of radiation-induced leakage due to stochastic charge trapping in irradiated planar and nonplanar devices is developed. The influence of high-k insulators on the reliability of nanoscale ICs is discussed. Low critical values of trapped charge demonstrate a high sensitivity to single event effects.
Update on parts SEE susceptibility from heavy ions. [Single Event Effects]
NASA Technical Reports Server (NTRS)
Nichols, D. K.; Smith, L. S.; Schwartz, H. R.; Soli, G.; Watson, K.; Koga, R.; Crain, W. R.; Crawford, K. B.; Hansel, S. J.; Lau, D. D.
1991-01-01
JPL and the Aerospace Corporation have collected a fourth set of heavy ion single event effects (SEE) test data. Trends in SEE susceptibility (including soft errors and latchup) for state-of-the-art parts are displayed. All data are conveniently divided into two tables: one for MOS devices, and one for a shorter list of recently tested bipolar devices. In addition, a new table of data for latchup tests only (invariably CMOS processes) is given.
NASA Astrophysics Data System (ADS)
Liang, Guoying; Shen, Jie; Zhang, Jie; Zhong, Haowen; Cui, Xiaojun; Yan, Sha; Zhang, Xiaofu; Yu, Xiao; Le, Xiaoyun
2017-10-01
Improving the antifatigue performance of silicon substrates is very important for the development of the semiconductor industry. The cracking behavior of silicon under intense pulsed ion beam irradiation was studied by numerical simulation in order to understand the mechanism of the surface peeling observed experimentally. Using molecular dynamics simulation based on the Stillinger-Weber potential, the effect of tension on crack growth and propagation in single crystal silicon was investigated. Simulation results reveal that the stress-strain curves of single crystal silicon at a constant strain rate can be divided into three stages, unlike the stress-strain curves of metals; that different tensile load velocities produce different crack formation speeds in single crystal silicon; and that the layered stress distribution drives crack formation. It is concluded that crack growth and propagation are sensitive to the strain rate, tensile load velocity, and stress distribution in single crystal silicon.
Ndindjock, Roger; Gedeon, Jude; Mendis, Shanthi; Paccaud, Fred; Bovet, Pascal
2011-04-01
To assess the prevalence of cardiovascular (CV) risk factors in Seychelles, a middle-income African country, and compare the cost-effectiveness of single-risk-factor management (treating individuals with arterial blood pressure ≥ 140/90 mmHg and/or total serum cholesterol ≥ 6.2 mmol/l) with that of management based on total CV risk (treating individuals with a total CV risk ≥ 10% or ≥ 20%). CV risk factor prevalence and a CV risk prediction chart for Africa were used to estimate the 10-year risk of suffering a fatal or non-fatal CV event among individuals aged 40-64 years. These figures were used to compare single-risk-factor management with total risk management in terms of the number of people requiring treatment to avert one CV event and the number of events potentially averted over 10 years. Treatment for patients with high total CV risk (≥ 20%) was assumed to consist of a fixed-dose combination of several drugs (polypill). Cost analyses were limited to medication. A total CV risk of ≥ 10% and ≥ 20% was found among 10.8% and 5.1% of individuals, respectively. With single-risk-factor management, 60% of adults would need to be treated and 157 cardiovascular events per 100000 population would be averted per year, as opposed to 5% of adults and 92 events with total CV risk management. Management based on high total CV risk optimizes the balance between the number requiring treatment and the number of CV events averted. Total CV risk management is much more cost-effective than single-risk-factor management. These findings are relevant for all countries, but especially for those economically and demographically similar to Seychelles.
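The treated-versus-averted trade-off reported above can be checked with back-of-envelope arithmetic using the abstract's own figures:

```python
def treated_per_event(frac_treated, events_averted_per_100k):
    """People treated per CV event averted per year, per 100,000 adults."""
    treated = frac_treated * 100_000
    return treated / events_averted_per_100k

single_factor = treated_per_event(0.60, 157)   # treat 60%, avert 157/yr
total_risk = treated_per_event(0.05, 92)       # treat 5%, avert 92/yr
print(round(single_factor), round(total_risk))  # ~382 vs ~54
```

Roughly seven times fewer people are treated per event averted under total-risk management, which is the core of the cost-effectiveness argument.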
Modeling and experimental verification of single event upsets
NASA Technical Reports Server (NTRS)
Fogarty, T. N.; Attia, J. O.; Kumar, A. A.; Tang, T. S.; Lindner, J. S.
1993-01-01
The research performed and the results obtained at the Laboratory for Radiation Studies, Prairie View A&M University, and at Texas A&I University on the problem of single event upsets are reviewed, including the various schemes employed to limit them and their effects on reliability and fault tolerance at the systems level, such as in robotic systems.
ERIC Educational Resources Information Center
Elhai, Jon D.; Engdahl, Ryan M.; Palmieri, Patrick A.; Naifeh, James A.; Schweinle, Amy; Jacobs, Gerard A.
2009-01-01
The authors examined the effects of a methodological manipulation on the Posttraumatic Stress Disorder (PTSD) Checklist's factor structure: specifically, whether respondents were instructed to reference a single worst traumatic event when rating PTSD symptoms. Nonclinical, trauma-exposed participants were randomly assigned to 1 of 2 PTSD…
Isoform-level gene expression patterns in single-cell RNA-sequencing data.
Vu, Trung Nghia; Wills, Quin F; Kalari, Krishna R; Niu, Nifang; Wang, Liewei; Pawitan, Yudi; Rantalainen, Mattias
2018-02-27
RNA sequencing of single cells enables characterization of transcriptional heterogeneity in seemingly homogeneous cell populations. Single-cell sequencing has been applied in a wide range of research fields. However, few studies have focused on characterization of isoform-level expression patterns at the single-cell level. In this study we propose and apply a novel method, ISOform-Patterns (ISOP), based on mixture modeling, to characterize the expression patterns of isoform pairs from the same gene in single-cell isoform-level expression data. We define six principal patterns of isoform expression relationships and describe a method for differential-pattern analysis. We demonstrate ISOP through analysis of single-cell RNA-sequencing data from a breast cancer cell line, with replication in three independent datasets. We assigned pattern types to each of 16,562 isoform pairs from 4,929 genes. Among those, 26% of the discovered patterns were significant (p<0.05), while the remaining patterns are possibly effects of transcriptional bursting, drop-out and stochastic biological heterogeneity. Furthermore, 32% of genes discovered through differential-pattern analysis were not detected by differential-expression analysis. The effects of drop-out events, mean expression level, and properties of the expression distribution on the performance of ISOP were also investigated through simulated datasets. To conclude, ISOP provides a novel approach for characterization of isoform-level preference, commitment and heterogeneity in single-cell RNA-sequencing data. The ISOP method has been implemented as an R package and is available at https://github.com/nghiavtr/ISOP under a GPL-3 license. mattias.rantalainen@ki.se. Supplementary data are available at Bioinformatics online.
Discrete event simulation tool for analysis of qualitative models of continuous processing systems
NASA Technical Reports Server (NTRS)
Malin, Jane T. (Inventor); Basham, Bryan D. (Inventor); Harris, Richard A. (Inventor)
1990-01-01
An artificial intelligence design and qualitative modeling tool is disclosed for creating computer models and simulating continuous activities, functions, and/or behavior using developed discrete event techniques. Conveniently, the tool is organized in four modules: library design module, model construction module, simulation module, and experimentation and analysis. The library design module supports the building of library knowledge including component classes and elements pertinent to a particular domain of continuous activities, functions, and behavior being modeled. The continuous behavior is defined discretely with respect to invocation statements, effect statements, and time delays. The functionality of the components is defined in terms of variable cluster instances, independent processes, and modes, further defined in terms of mode transition processes and mode dependent processes. Model construction utilizes the hierarchy of libraries and connects them with appropriate relations. The simulation executes a specialized initialization routine and executes events in a manner that includes selective inherency of characteristics through a time and event schema until the event queue in the simulator is emptied. The experimentation and analysis module supports analysis through the generation of appropriate log files and graphics developments and includes the ability of log file comparisons.
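The simulation module's "execute events until the event queue is emptied" loop is the core of any discrete event simulator. A minimal sketch of that general technique (not the patented tool itself) looks like:

```python
import heapq

class Simulator:
    def __init__(self):
        self._queue = []   # (time, seq, action) entries
        self._seq = 0      # tie-breaker keeps same-time ordering deterministic
        self.now = 0.0
        self.log = []

    def schedule(self, delay, action):
        """Schedule an action to fire after a time delay."""
        heapq.heappush(self._queue, (self.now + delay, self._seq, action))
        self._seq += 1

    def run(self):
        """Execute events in time order until the event queue is emptied."""
        while self._queue:
            self.now, _, action = heapq.heappop(self._queue)
            action(self)

# Example: a two-mode component; each mode transition is an event whose
# effect is to log the new mode and schedule the next transition.
def turn_on(sim):
    sim.log.append((sim.now, "on"))
    sim.schedule(3.0, turn_off)

def turn_off(sim):
    sim.log.append((sim.now, "off"))

sim = Simulator()
sim.schedule(1.0, turn_on)
sim.run()
print(sim.log)  # [(1.0, 'on'), (4.0, 'off')]
```

Continuous behavior is made discrete in exactly this way: invocation statements become scheduled actions, effect statements mutate state, and time delays set the event timestamps.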
McCutcheon, Vivia V; Heath, Andrew C; Nelson, Elliot C; Bucholz, Kathleen K; Madden, Pamela A F; Martin, Nicholas G
2010-02-01
Individuals who experience one type of trauma often experience other types, yet few studies have examined the clustering of trauma. This study examines the clustering of traumatic events and associations of trauma with risk for single and co-occurring major depressive disorder (MDD) and panic attack for 20 years after first trauma. Lifetime histories of MDD, panic attack, and traumatic events were obtained from participants in an Australian twin sample. Latent class analysis was used to derive trauma classes based on each respondent's trauma history. Associations of the resulting classes and of parental alcohol problems and familial effects with risk for a first onset of single and co-occurring MDD and panic attack were examined from the year of first trauma to 20 years later. Traumatic events clustered into three distinct classes characterized by endorsement of little or no trauma, primarily nonassaultive, and primarily assaultive events. Individuals in the assaultive class were characterized by a younger age at first trauma, a greater number of traumatic events, and high rates of parental alcohol problems. Members of the assaultive trauma class had the strongest and most enduring risk for single and co-occurring lifetime MDD and panic attack. Assaultive trauma outweighed associations of familial effects and nonassaultive trauma with risk for 10 years following first trauma.
Semb, Olof; Strömsten, Lotta M J; Sundbom, Elisabet; Fransson, Per; Henningsson, Mikael
2011-08-01
To increase understanding of post-victimization symptom development, the present study investigated the role of shame- and guilt-proneness and event-related shame and guilt as potential risk factors. 35 individuals (M age = 31.7 yr.; 48.5% women), recently victimized by a single event of severe violent crime, were assessed regarding shame- and guilt-proneness, event-related shame and guilt, and post-victimization symptoms. The mediating role of event-related shame was investigated with structural equation modeling (SEM), using bootstrapping. The guilt measures were unrelated to each other and to post-victimization symptoms. The shame measures were highly intercorrelated and were both positively correlated to more severe post-victimization symptom levels. Event-related shame as mediator between shame-proneness and post-victimization symptoms was demonstrated by prevalent significant indirect effects. Both shame measures are potent risk factors for distress after victimization, whereby part of the effect of shame-proneness on post-victimization symptoms is explained by event-related shame.
NASA Astrophysics Data System (ADS)
Franzoni, G.; Norkus, A.; Pol, A. A.; Srimanobhas, N.; Walker, J.
2017-10-01
Physics analysis at the Compact Muon Solenoid requires both the production of simulated events and the processing of the data collected by the experiment. Since the end of LHC Run-I in 2012, CMS has produced over 20 billion simulated events, from 75 thousand processing requests organised in one hundred different campaigns. These campaigns emulate different configurations of collision events, the detector, and LHC running conditions. In the same time span, sixteen data processing campaigns have taken place to reconstruct different portions of the Run-I and Run-II data with ever improving algorithms and calibrations. The scale and complexity of event simulation and processing, and the requirement that multiple campaigns proceed in parallel, demand that comprehensive, frequently updated and easily accessible monitoring be made available. The monitoring must serve both the analysts, who want to know which datasets will become available and when, and the central production teams in charge of submitting, prioritizing, and running the requests across the distributed computing infrastructure. The Production Monitoring Platform (pMp), a web-based service, was developed in 2015 to address those needs. It aggregates information from the multiple services used to define, organize, and run the processing requests. Information is updated hourly using a dedicated elastic database, and the monitoring provides multiple configurable views to assess the status of single datasets as well as entire production campaigns. This contribution describes the pMp development, the evolution of its functionalities, and one and a half years of operational experience.
Radiation Effects on DC-DC Converters
NASA Technical Reports Server (NTRS)
Zhang, De-Xin; AbdulMazid, M. D.; Attia, John O.; Kankam, Mark D. (Technical Monitor)
2001-01-01
In this work, several DC-DC converters were designed and built. The converters are Buck Buck-Boost, Cuk, Flyback, and full-bridge zero-voltage switched. The total ionizing dose radiation and single event effects on the converters were investigated. The experimental results for the TID effects tests show that the voltages of the Buck Buck-Boost, Cuk, and Flyback converters increase as total dose increased when using power MOSFET IRF250 as a switching transistor. The change in output voltage with total dose is highest for the Buck converter and the lowest for Flyback converter. The trend of increase in output voltages with total dose in the present work agrees with those of the literature. The trends of the experimental results also agree with those obtained from PSPICE simulation. For the full-bridge zero-voltage switch converter, it was observed that the dc-dc converter with IRF250 power MOSFET did not show a significant change of output voltage with total dose. In addition, for the dc-dc converter with FSF254R4 radiation-hardened power MOSFET, the output voltage did not change significantly with total dose. The experimental results were confirmed by PSPICE simulation that showed that FB-ZVS converter with IRF250 power MOSFET's was not affected with the increase in total ionizing dose. Single Event Effects (SEE) radiation tests were performed on FB-ZVS converters. It was observed that the FB-ZVS converter with the IRF250 power MOSFET, when the device was irradiated with Krypton ion with ion-energy of 150 MeV and LET of 41.3 MeV-square cm/mg, the output voltage increased with the increase in fluence. However, for Krypton with ion-energy of 600 MeV and LET of 33.65 MeV-square cm/mg, and two out of four transistors of the converter were permanently damaged. 
The DC-DC converter with FSF254R4 radiation-hardened power MOSFETs did not show a significant change in output voltage with fluence when irradiated by krypton ions of 1.20 GeV energy and LET of 25.97 MeV·cm²/mg. This is likely because the device is radiation hardened.
Multistage Monte Carlo simulation of jet modification in a static medium
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cao, S.; Park, C.; Barbieri, R. A.
2017-08-22
In this work, the modification of hard jets in an extended static medium held at a fixed temperature is studied using three different Monte Carlo event generators: linear Boltzmann transport (LBT), modular all twist transverse-scattering elastic-drag and radiation (MATTER), and modular algorithm for relativistic treatment of heavy-ion interactions (MARTINI). Each event generator contains a different set of assumptions regarding the energy and virtuality of the partons within a jet versus the energy scale of the medium and, hence, applies to a different epoch in the space-time history of the jet evolution. Here modeling is developed where a jet may sequentially transition from one generator to the next, on a parton-by-parton level, providing a detailed simulation of the space-time evolution of medium-modified jets over a much broader dynamic range than has been attempted previously in a single calculation. Comparisons are carried out for different observables sensitive to jet quenching, including the parton fragmentation function and the azimuthal distribution of jet energy around the jet axis. The effect of varying the boundary between different generators is studied and a theoretically motivated criterion for the location of this boundary is proposed. Lastly, the importance of such an approach with coupled generators to the modeling of jet quenching is discussed.
Drop impact into a deep pool: vortex shedding and jet formation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Agbaglah, G.; Thoraval, M. -J.; Thoroddsen, S. T.
2015-02-01
One of the simplest splashing scenarios results from the impact of a single drop on a deep pool. The traditional understanding of this process is that the impact generates an axisymmetric sheet-like jet that later breaks up into secondary droplets. Recently it was shown that even this simplest of scenarios is more complicated than expected because multiple jets can be generated from a single impact event and there are transitions in the multiplicity of jets as the experimental parameters are varied. Here, we use experiments and numerical simulations of a single drop impacting on a deep pool to examine the transition from impacts that produce a single jet to those that produce two jets. Using high-speed X-ray imaging methods we show that vortex separation within the drop leads to the formation of a second jet long after the formation of the ejecta sheet. Using numerical simulations we develop a phase diagram for this transition and show that the capillary number is the most appropriate order parameter for the transition.
Reliability Design for Neutron Induced Single-Event Burnout of IGBT
NASA Astrophysics Data System (ADS)
Shoji, Tomoyuki; Nishida, Shuichi; Ohnishi, Toyokazu; Fujikawa, Touma; Nose, Noboru; Hamada, Kimimori; Ishiko, Masayasu
Single-event burnout (SEB) caused by cosmic-ray neutrons leads to catastrophic failures in insulated gate bipolar transistors (IGBTs). It was found experimentally that the neutron-induced SEB failure rate increases as a function of the applied collector voltage. Moreover, the failure rate increases sharply when the applied collector voltage exceeds a certain threshold value (the SEB cutoff voltage). In this paper, transient device simulation results indicate that impact ionization at the n-drift/n+ buffer boundary is a crucially important factor in the turning-on of the parasitic pnp transistor, and eventually latch-up of the parasitic thyristor causes SEB. In addition, the device-parameter dependence of the SEB cutoff voltage was analytically derived from the latch-up condition of the parasitic thyristor. As a result, it was confirmed that reducing the current gain of the parasitic transistor, for example by increasing the n-drift region thickness d, is effective in increasing the SEB cutoff voltage. Furthermore, 'white' neutron-irradiation experiments demonstrated that suppressing the inherent parasitic thyristor action improves the SEB cutoff voltage. It was confirmed that current-gain optimization of the parasitic transistor is a crucial factor in establishing a highly reliable design against chance failures.
A path-level exact parallelization strategy for sequential simulation
NASA Astrophysics Data System (ADS)
Peredo, Oscar F.; Baeza, Daniel; Ortiz, Julián M.; Herrero, José R.
2018-01-01
Sequential Simulation is a well-known method in geostatistical modelling. Following the Bayesian approach for simulation of conditionally dependent random events, the Sequential Indicator Simulation (SIS) method draws simulated values for K categories (categorical case) or classes defined by K different thresholds (continuous case). Similarly, the Sequential Gaussian Simulation (SGS) method draws simulated values from a multivariate Gaussian field. In this work, a path-level approach to parallelizing SIS and SGS is presented. A first stage re-arranges the simulation path, followed by a second stage of parallel simulation of non-conflicting nodes. A key advantage of the proposed parallelization method is that it generates realizations identical to those of the original non-parallelized methods. Case studies are presented using two sequential simulation codes from GSLIB: SISIM and SGSIM. Execution time and speedup results are shown for large-scale domains with many categories and large maximum numbers of kriging neighbours, achieving high speedups in the best scenarios using 16 threads of execution on a single machine.
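The path-level re-arrangement described in the abstract can be sketched in a few lines. The following is an illustrative 1-D sketch under stated assumptions: nodes are integer grid indices, and two nodes "conflict" when they lie within a fixed search radius of each other (the actual method uses kriging neighbourhoods in 2-D/3-D). Scanning the path in order and deferring any node that conflicts with the current batch or with an earlier deferred node preserves the sequential result exactly when batches are executed one after another.

```python
import random

def conflict_free_batches(path, radius):
    # Re-arrange a random simulation path into batches of non-conflicting
    # nodes.  A node is deferred if it conflicts with the current batch OR
    # with an earlier deferred node, so conflicting nodes keep their path
    # order across batches and the realization is reproduced exactly.
    batches = []
    remaining = list(path)
    while remaining:
        batch, deferred = [], []
        for node in remaining:
            if all(abs(node - b) > radius for b in batch) and \
               all(abs(node - d) > radius for d in deferred):
                batch.append(node)        # simulate now, in parallel
            else:
                deferred.append(node)     # simulate in a later batch
        batches.append(batch)
        remaining = deferred
    return batches

random.seed(1)
path = random.sample(range(100), 100)     # random visiting order of 100 nodes
batches = conflict_free_batches(path, radius=3)
```

Within each batch the kriging systems are independent, so the nodes can be simulated concurrently without changing the realization.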
NASA Technical Reports Server (NTRS)
Jones, C. W. (Editor)
1985-01-01
Basic mechanisms of radiation effects in structures and materials are discussed, taking into account the time dependence of interface state production, process dependent build-up of interface states in irradiated N-channel MOSFETs, bias annealing of radiation and bias induced positive charges in n- and p-type MOS capacitors, hole removal in thin-gate MOSFETs by tunneling, and activation energies of oxide charge recovery in SOS or SOI structures after an ionizing pulse. Other topics investigated are related to radiation effects in devices, radiation effects in integrated circuits, spacecraft charging and space radiation effects, single-event phenomena, hardness assurance and radiation sources, SGEMP/IEMP phenomena, EMP phenomena, and dosimetry and energy-dependent effects. Attention is given to a model of the plasma wake generated by a large object, gate charge collection and induced drain current in GaAs FETs, simulation of charge collection in a multilayer device, and time dependent dose enhancement effects on integrated circuit transient response mechanisms.
Simulating spontaneous aseismic and seismic slip events on evolving faults
NASA Astrophysics Data System (ADS)
Herrendörfer, Robert; van Dinther, Ylona; Pranger, Casper; Gerya, Taras
2017-04-01
Plate motion along tectonic boundaries is accommodated by different slip modes: steady creep, seismic slip, and slow slip transients. Owing to mainly indirect observations and the difficulty of scaling laboratory results to nature, it remains enigmatic which fault conditions favour particular slip modes. We are therefore developing a numerical modelling approach capable of simulating different slip modes together with the long-term fault evolution in a large-scale tectonic setting. We extend the 2D, continuum mechanics-based, visco-elasto-plastic thermo-mechanical model that was designed to simulate slip transients in large-scale geodynamic simulations (van Dinther et al., JGR, 2013). We improve the numerical approach to treat the non-linear problem of plasticity accurately (see also EGU 2017 abstract by Pranger et al.). To resolve a wide slip-rate spectrum on evolving faults, we develop an invariant reformulation of the conventional rate-and-state dependent friction (RSF) and adapt the time step (Lapusta et al., JGR, 2000). A crucial part of this development is a conceptual ductile fault zone model that relates slip rates along discrete planes to the effective macroscopic plastic strain rates in the continuum. We first test our implementation in a simple 2D setup with a single fault zone of predefined initial thickness. Results show that, for steady creep and very slow slip transients, deformation localizes to a bell-shaped strain-rate profile across the fault zone, which suggests that a length scale across the fault zone may exist. Such a continuum length scale would overcome the common mesh dependency of plasticity simulations and call into question the conventional treatment of aseismic slip on infinitely thin fault zones. We test the introduction of a diffusion term (similar to the damage description in Lyakhovsky et al., JMPS, 2011) into the state evolution equation and its effect on (de-)localization during faster slip events.
We compare the slip spectrum in our simulations to conventional RSF simulations (Liu and Rice, JGR, 2007). We further demonstrate the capability of simulating the evolution of a fault zone and simultaneous occurrence of slip transients. From small random initial distributions of the state variable in an otherwise homogeneous medium, deformation localizes and forms curved zones of reduced states. These spontaneously formed fault zones host slip transients, which in turn contribute to the growth of the fault zone.
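The conventional rate-and-state friction that the reformulation starts from can be illustrated with a minimal quasi-static spring-slider sketch. All parameter values below are illustrative assumptions, not values from the study; with a > b the friction is velocity strengthening, so the slider creeps stably toward the load-point velocity rather than producing stick-slip events.

```python
import math

def spring_slider_rsf(a=0.015, b=0.010, mu0=0.6, V0=1e-6, Dc=1e-4,
                      k=1e7, sigma=1e8, V_load=1e-5, dt=1.0, n_steps=200000):
    # Quasi-static spring-slider with Dieterich-Ruina rate-and-state friction
    # and the aging law:
    #   tau / sigma  = mu0 + a*ln(V/V0) + b*ln(V0*theta/Dc)
    #   d(theta)/dt  = 1 - V*theta/Dc
    #   d(tau)/dt    = k*(V_load - V)       (elastic loading by the spring)
    theta = Dc / V0                  # state at steady sliding with rate V0
    tau = sigma * mu0                # corresponding steady-state strength
    for _ in range(n_steps):
        # invert the friction law for the slip rate V at current stress/state
        V = V0 * math.exp((tau / sigma - mu0 - b * math.log(V0 * theta / Dc)) / a)
        theta += (1.0 - V * theta / Dc) * dt
        tau += k * (V_load - V) * dt
    return V

V_final = spring_slider_rsf()        # converges toward the load-point rate
```

With velocity-weakening parameters (b > a) and a compliant spring, the same equations become stiff and produce accelerating slip events, which is why adaptive time stepping (Lapusta et al., 2000) is needed in the full simulations.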
Probabilistic modelling of flood events using the entropy copula
NASA Astrophysics Data System (ADS)
Li, Fan; Zheng, Qian
2016-11-01
The estimation of flood frequency is vital for flood control strategies and hydraulic structure design. Generating synthetic flood events according to the statistical properties of observations is one plausible method of analyzing flood frequency. Due to the statistical dependence among the flood event variables (i.e. the flood peak, volume and duration), a multidimensional joint probability estimation is required. Recently, the copula method has been widely used to construct multivariable dependence structures; however, the copula family must be chosen before application, and the choice is sometimes rather subjective. The entropy copula, a new copula family employed in this research, offers a way to avoid this relatively subjective process by combining the theories of copula and entropy. The analysis shows the effectiveness of the entropy copula for probabilistic modelling of the flood events at two hydrological gauges, and a comparison of accuracy with popular copulas was made. The Gibbs sampling technique was applied to trivariate flood event simulation in order to mitigate the calculation difficulties of extending directly to three dimensions. The simulation results indicate that the entropy copula is a simple and effective copula family for trivariate flood simulation.
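The Gibbs sampling idea used for the trivariate simulation can be sketched on a simpler stand-in model. The following is not the entropy copula itself: it samples a bivariate standard normal with assumed correlation rho, for which each full conditional is a univariate normal, so the sampler simply redraws one coordinate at a time from its conditional given the other.

```python
import math
import random

def gibbs_bivariate_normal(rho, n_samples, burn_in=500, seed=0):
    # Gibbs sampler for a standard bivariate normal with correlation rho.
    # Full conditionals: x | y ~ N(rho*y, 1 - rho^2) and symmetrically for y.
    rng = random.Random(seed)
    x, y = 0.0, 0.0
    sd = math.sqrt(1.0 - rho * rho)       # conditional standard deviation
    samples = []
    for i in range(burn_in + n_samples):
        x = rng.gauss(rho * y, sd)        # redraw x given current y
        y = rng.gauss(rho * x, sd)        # redraw y given new x
        if i >= burn_in:
            samples.append((x, y))
    return samples

samples = gibbs_bivariate_normal(rho=0.8, n_samples=20000)
mean_x = sum(s[0] for s in samples) / len(samples)
corr = sum(s[0] * s[1] for s in samples) / len(samples)
```

For the flood application, the same scheme cycles through peak, volume, and duration, drawing each from its conditional under the fitted trivariate entropy copula instead of from a normal.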
Radiation Damage and Single Event Effect Results for Candidate Spacecraft Electronics
NASA Technical Reports Server (NTRS)
OBryan, Martha V.; LaBel, Kenneth A.; Reed, Robert A.; Howard, James W., Jr.; Ladbury, Ray L.; Barth, Janet L.; Kniffin, Scott D.; Seidleck, Christina M.; Marshall, Paul W.; Marshall, Cheryl J.;
2000-01-01
We present data on the vulnerability of a variety of candidate spacecraft electronics to proton and heavy-ion induced single-event effects and proton-induced damage. We also present data on the susceptibility of parts to functional degradation resulting from total ionizing dose at low dose rates (0.003-0.33 rad(Si)/s). Devices tested include optoelectronics, digital, analog, linear bipolar, and hybrid devices, Analog to Digital Converters (ADCs), Digital to Analog Converters (DACs), and DC-DC converters, among others.
Proton Single Event Effects (SEE) Testing of the Myrinet Crossbar Switch and Network Interface Card
NASA Technical Reports Server (NTRS)
Howard, James W., Jr.; LaBel, Kenneth A.; Carts, Martin A.; Stattel, Ronald; Irwin, Timothy L.; Day, John H. (Technical Monitor)
2002-01-01
As part of the Remote Exploration and Experimentation Project (REE), work was performed to do a proton SEE (Single Event Effect) evaluation of the Myricom network protocol system (Myrinet). This testing included the evaluation of the Myrinet crossbar switch and the Network Interface Card (NIC). To this end, two crossbar switch devices and five components in the NIC were exposed to the proton beam at the University of California at Davis Crocker Nuclear Laboratory (CNL).
NASA Technical Reports Server (NTRS)
Steinman, Jeffrey S. (Inventor)
1998-01-01
The present invention is embodied in a method of performing object-oriented simulation and a system having inter-connected processor nodes operating in parallel to simulate mutual interactions of a set of discrete simulation objects distributed among the nodes as a sequence of discrete events changing state variables of respective simulation objects so as to generate new event-defining messages addressed to respective ones of the nodes. The object-oriented simulation is performed at each one of the nodes by assigning passive self-contained simulation objects to each one of the nodes, responding to messages received at one node by generating corresponding active event objects having user-defined inherent capabilities and individual time stamps and corresponding to respective events affecting one of the passive self-contained simulation objects of the one node, restricting the respective passive self-contained simulation objects to only providing and receiving information from the respective active event objects, requesting information and changing variables within a passive self-contained simulation object by the active event object, and producing corresponding messages specifying events resulting therefrom by the active event objects.
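The separation between passive simulation objects and timestamped active event objects can be sketched as a minimal single-node event loop. The class names, the counter state, and the five-tick schedule below are illustrative assumptions; the invention additionally distributes such loops across processor nodes and delivers new events as messages between them.

```python
import heapq
import itertools

class SimObject:
    # Passive, self-contained simulation object: it only holds state and
    # provides/receives information through active event objects.
    def __init__(self, name):
        self.name = name
        self.counter = 0

class Event:
    # Active event object with an individual time stamp.  Its process()
    # method reads and changes variables inside a passive simulation object
    # and returns any newly scheduled events.
    def __init__(self, timestamp, target):
        self.timestamp = timestamp
        self.target = target

    def process(self):
        self.target.counter += 1
        if self.timestamp < 5:           # schedule a follow-up each tick
            return [Event(self.timestamp + 1, self.target)]
        return []

def run(initial_events):
    # Minimal sequential event loop ordered by time stamp.
    seq = itertools.count()              # tie-breaker for equal time stamps
    heap = [(e.timestamp, next(seq), e) for e in initial_events]
    heapq.heapify(heap)
    processed = 0
    while heap:
        _, _, event = heapq.heappop(heap)
        for new_event in event.process():
            heapq.heappush(heap, (new_event.timestamp, next(seq), new_event))
        processed += 1
    return processed

node = SimObject("node-0")
n = run([Event(0, node)])                # events fire at t = 0 through 5
```

The passive object never acts on its own: every state change flows through an event object, mirroring the restriction stated in the claim.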
ERIC Educational Resources Information Center
Kimemia, Judy
2017-01-01
Purpose: The purpose of this project was to compare web-based to high-fidelity simulation training in the management of high risk/low occurrence anesthesia related events, to enhance knowledge acquisition for Certified Registered Nurse Anesthetists (CRNAs). This project was designed to answer the question: Is web-based training as effective as…
NASA Astrophysics Data System (ADS)
Lapusta, N.; Thomas, M.; Noda, H.; Avouac, J.
2012-12-01
Long-term simulations that incorporate both seismic events and aseismic slip are quite important for studies of earthquake physics but challenging computationally. To study long deformation histories, most simulation methods do not incorporate full inertial effects (wave propagation) during simulated earthquakes, using quasi-dynamic approximations instead. Here we compare the results of quasi-dynamic simulations to fully dynamic ones for a range of problems to determine the applicability of the quasi-dynamic approach. Intuitively, the quasi-dynamic approach should do relatively well in problems where wave-mediated effects are relatively simple, but should give a substantially different (and hence wrong) response when wave-mediated stress transfers dominate the character of the seismic events. This is exactly what we observe in our simulations. We consider a 2D model of a rate-and-state fault with a seismogenic (steady-state velocity-weakening) zone surrounded by creeping (steady-state velocity-strengthening) areas. If the seismogenic zone is described by the standard Dieterich-Ruina rate-and-state friction, the resulting earthquake sequences consist of relatively simple crack-like ruptures, and the inclusion of true wave-propagation effects mostly serves to concentrate stress more efficiently at the rupture front. Hence, in such models, rupture speeds and slip rates are significantly (several times) lower in the quasi-dynamic simulations compared to the fully dynamic ones, but the total slip, the crack-like nature of seismic events, and the overall pattern of earthquake sequences are comparable, consistent with prior studies. Such behavior can be classified as qualitatively similar but quantitatively different, and it motivates the popularity of the quasi-dynamic methods in simulations. However, the comparison changes dramatically once we consider a model with enhanced dynamic weakening in the seismogenic zone in the form of flash heating.
In this case, the fully dynamic simulations produce seismic ruptures in the form of short-duration slip pulses, where the pulses form due to a combination of enhanced weakening and wave effects. The quasi-dynamic simulations of the same model produce completely different results, with large crack-like ruptures, different total slips, different rupture patterns, and a different prestress state before large, model-spanning events. Such qualitative differences between quasi-dynamic and fully dynamic simulations should arise in any model where inertial effects qualitatively change the response, such as cases with supershear transition or faults with different materials on the two sides. We will also present current work on how quasi-dynamic and fully dynamic simulations compare in cases with heterogeneous fault properties.
Network simulation using the simulation language for alternate modeling (SLAM 2)
NASA Technical Reports Server (NTRS)
Shen, S.; Morris, D. W.
1983-01-01
The simulation language for alternate modeling (SLAM 2) is a general purpose language that combines network, discrete event, and continuous modeling capabilities in a single language system. The efficacy of the system's network modeling is examined and discussed. Examples are given of the symbolism that is used, and an example problem and model are derived. The results are discussed in terms of the ease of programming, special features, and system limitations. The system offers many features which allow rapid model development and provides an informative standardized output. The system also has limitations which may cause undetected errors and misleading reports unless the user is aware of these programming characteristics.
A Percolation Model for Fracking
NASA Astrophysics Data System (ADS)
Norris, J. Q.; Turcotte, D. L.; Rundle, J. B.
2014-12-01
Developments in fracking technology have enabled the recovery of vast reserves of oil and gas; yet there is very little publicly available scientific research on fracking. Traditional reservoir-simulator models of fracking are computationally expensive and require many hours on a supercomputer to simulate a single fracking treatment. We have developed a computationally inexpensive percolation model for fracking that can be used to understand the processes and risks associated with it. In our model, a fluid is injected at a single site and a network of fractures grows from that site. The fracture network grows in bursts: the failure of a relatively strong bond followed by the failure of a series of relatively weak bonds. These bursts display similarities to microseismic events observed during a fracking treatment. The bursts follow a power-law (Gutenberg-Richter) frequency-size distribution and have growth rates similar to observed earthquake moment rates. These are quantifiable features that can be compared with observed microseismicity to help understand the relationship between observed microseismicity and the underlying fracture network.
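The burst dynamics described above can be sketched with a small invasion-percolation model. This is an illustrative sketch under stated assumptions, not the authors' code: bond strengths are i.i.d. uniform, growth always fails the weakest bond on the cluster perimeter, and a burst is taken to start when a bond fails that is stronger than every previously failed bond, collecting the run of subsequent, weaker failures.

```python
import heapq
import random

def invasion_percolation_bursts(size=40, seed=2):
    # Grow a fracture cluster from a single injection site by repeatedly
    # failing the weakest perimeter bond; record burst sizes, where a new
    # burst begins whenever the failed bond sets a new strength record.
    rng = random.Random(seed)
    start = (size // 2, size // 2)
    invaded = {start}
    frontier = []                         # heap of (bond strength, site)

    def add_neighbors(site):
        x, y = site
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < size and 0 <= ny < size and (nx, ny) not in invaded:
                heapq.heappush(frontier, (rng.random(), (nx, ny)))

    add_neighbors(start)
    bursts, current, record = [], 0, -1.0
    while frontier:
        strength, site = heapq.heappop(frontier)
        if site in invaded:               # reached earlier via a weaker bond
            continue
        if strength > record:             # a stronger bond starts a new burst
            if current:
                bursts.append(current)
            record, current = strength, 0
        current += 1
        invaded.add(site)
        add_neighbors(site)
    if current:
        bursts.append(current)
    return bursts

bursts = invasion_percolation_bursts()    # list of burst sizes
```

On larger grids the burst-size distribution from this kind of model is heavy-tailed, which is the feature compared against Gutenberg-Richter statistics of observed microseismicity.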
Austin, Peter C; Anderson, Geoffrey M; Cigsar, Candemir; Gruneir, Andrea
2012-01-01
Purpose: Observational studies using electronic administrative healthcare databases are often used to estimate the effects of treatments and exposures. Traditionally, a cohort design has been used to estimate these effects, but increasingly, studies are using a nested case-control (NCC) design. The relative statistical efficiency of these two designs has not been examined in detail. Methods: We used Monte Carlo simulations to compare these two designs in terms of the bias and precision of effect estimates. We examined three different settings: (A) treatment occurred at baseline, and there was a single outcome of interest; (B) treatment was time varying, and there was a single outcome; and (C) treatment occurred at baseline, and there was a secondary event that competed with the primary event of interest. Comparisons were made of percentage bias, length of the 95% confidence interval, and mean squared error (MSE) as a combined measure of bias and precision. Results: In Setting A, bias was similar between designs, but the cohort design was more precise and had a lower MSE in all scenarios. In Settings B and C, the cohort design was more precise and had a lower MSE in all scenarios, and the NCC design tended to produce estimates with greater bias than the cohort design. Conclusions: We conclude that, in a range of settings and scenarios, the cohort design is superior in terms of precision and MSE. Copyright © 2012 John Wiley & Sons, Ltd. PMID:22653805
NASA Technical Reports Server (NTRS)
Scheick, Leif
2014-01-01
Single-event-effect test results for hi-rel total-dose-hardened power MOSFETs are presented in this report. The 2N7616 and the 2N7425 from Semicoa and the 2N7480 from International Rectifier were tested to NASA test condition standards and requirements. The 2N7480 performed well, and the data agree with the manufacturer's data. The 2N7616 and 2N7425 were entry parts from Semicoa using a new device architecture. Unfortunately, these devices performed poorly, and Semicoa is withdrawing power MOSFETs from its line as a result of these data. Vertical metal-oxide-semiconductor field-effect transistors (MOSFETs) are the most commonly used power transistors, typically employed in power supplies and high-current switching applications. Due to the inherent high electric fields in the device, power MOSFETs are sensitive to heavy-ion irradiation and can fail catastrophically as a result of single-event gate rupture (SEGR) or single-event burnout (SEB). Manufacturers have designed radiation-hardened power MOSFETs for space applications; see [1] through [5] for more information. The objective of this effort was to investigate the SEGR and SEB responses of two recently produced power MOSFETs. These tests serve as a limited verification of these parts; it is acknowledged that further testing may be needed for some mission profiles.
NASA Astrophysics Data System (ADS)
Koshiishi, H.; Kimoto, Y.; Matsumoto, H.; Goka, T.
The Tsubasa satellite, developed by the Japan Aerospace Exploration Agency, was launched in February 2002 into Geostationary Transfer Orbit (GTO; perigee 500 km, apogee 36,000 km) and operated well until September 2003. The objective of this satellite was to verify the function of commercial parts and new bus-system component technologies in space; the on-board experiments were thus conducted in the more severe radiation environment of GTO rather than in Geostationary Earth Orbit (GEO) or Low Earth Orbit (LEO). The Space Environment Data Acquisition equipment (SEDA) on board the Tsubasa satellite included the Single-event Upset Monitor (SUM) and the DOSimeter (DOS) to evaluate radiation-induced effects on electronic devices. The radiation environment itself was measured by the SEDA particle detectors: the Standard DOse Monitor (SDOM) for measurements of light particles and the Heavy Ion Telescope (HIT) for measurements of heavy ions. The SUM monitored single-event upsets and single-event latch-ups occurring in a test sample of two 64-Mbit DRAMs. The DOS measured accumulated radiation dose at fifty-six locations in the body of the Tsubasa satellite. Using the data obtained by these instruments, single-event and total-dose effects in GTO during the solar-activity maximum period, especially their rapid changes due to solar flares and CMEs, in the region from L = 1.1 through L = 11, are discussed in this paper.
NASA Astrophysics Data System (ADS)
Giannaros, Christos; Nenes, Athanasios; Giannaros, Theodore M.; Kourtidis, Konstantinos; Melas, Dimitrios
2018-03-01
This study presents a comprehensive modeling approach for simulating the spatiotemporal distribution of urban air temperatures with a modeling system that includes the Weather Research and Forecasting (WRF) model and the Single-Layer Urban Canopy Model (SLUCM) with a modified treatment of the impervious surface temperature. The model was applied to simulate a 3-day summer heat wave event over the city of Athens, Greece. The simulation, using default SLUCM parameters, captures the observed diurnal variation of urban temperatures and the Urban Heat Island (UHI) in the greater Athens Area (GAA), albeit with systematic biases that are prominent during nighttime hours. These biases are particularly evident over low-intensity residential areas, and they are associated with the surface and urban canopy properties representing the urban environment. A series of sensitivity simulations unravels the importance of the sub-grid urban fraction parameter, surface albedo, and street canyon geometry in the overall causation and development of the UHI effect. The sensitivities are then used to determine optimal values of the street canyon geometry that reproduce the observed temperatures throughout the simulation domain. The optimal parameters, apart from considerably improving model performance (reductions in mean temperature bias ranging from 0.30 °C to 1.58 °C), are also consistent with actual city building characteristics, which gives confidence that the model set-up is robust and can be used to study the UHI in the GAA under the anticipated warmer conditions of the future.
Effects of data assimilation on the global aerosol key optical properties simulations
NASA Astrophysics Data System (ADS)
Yin, Xiaomei; Dai, Tie; Schutgens, Nick A. J.; Goto, Daisuke; Nakajima, Teruyuki; Shi, Guangyu
2016-09-01
We present one-month results of global aerosol optical properties for April 2006, using the Spectral Radiation Transport Model for Aerosol Species (SPRINTARS) coupled with the Non-hydrostatic ICosahedral Atmospheric Model (NICAM), assimilating Moderate Resolution Imaging Spectroradiometer (MODIS) aerosol optical depth (AOD) with a Local Ensemble Transform Kalman Filter (LETKF). The simulated AOD, Ångström Exponent (AE), and single scattering albedo (SSA) are validated against independent Aerosol Robotic Network (AERONET) observations at global sites. The data assimilation has the strongest positive effect on the AOD simulation and slight positive influences on the AE and SSA simulations. For the time-averaged global spatial distribution, the data assimilation increases the model skill score (S) of AOD, AE, and SSA from 0.55, 0.92, and 0.75 to 0.79, 0.94, and 0.80, respectively. Over the North Africa (NAF) and Middle East region, where the aerosol composition is simple (mainly dust), the simulated AODs are improved the most by the data assimilation, indicating that the assimilation correctly adjusts the erroneous dust burdens caused by uncertainties in the dust emission parameterization. Assimilation also improves the simulated temporal variations of the aerosol optical properties over the AERONET sites, with improved S at 60 of 97 (62%), 45 of 82 (55%), and 11 of 22 (50%) sites for AOD, AE, and SSA, respectively. By analyzing AOD and AE at the five sites with the largest S improvement, this study further indicates that the assimilation reproduces short-duration events and the ratios between fine and coarse aerosols more accurately.
Davis, Bradley; Welch, Katherine; Walsh-Hart, Sharon; Hanseman, Dennis; Petro, Michael; Gerlach, Travis; Dorlac, Warren; Collins, Jocelyn; Pritts, Timothy
2014-08-01
Critical Care Air Transport Teams (CCATTs) are a critical component of the United States Air Force evacuation paradigm. This study was conducted to assess the incidence of task saturation in simulated CCATT missions and to determine if there are predictable performance domains. Sixteen CCATTs were studied over a 6-month period. Performance was scored using a tool assessing eight domains of performance. Teams were also assessed during critical events to determine the presence or absence of task saturation and its impact on patient care. Sixteen simulated missions were reviewed and 45 crisis events identified. Task saturation was present in 22/45 (49%) of crisis events. Scoring demonstrated that task saturation was associated with poor performance in teamwork (odds ratio [OR] = 1.96), communication (OR = 2.08), and mutual performance monitoring (OR = 1.9), but not maintenance of guidelines, task management, procedural skill, and equipment management. We analyzed the effect of task saturation on adverse patient outcomes during crisis events. Adverse outcomes occurred more often when teams were task saturated as compared to non-task-saturated teams (91% vs. 23%; RR 4.1, p < 0.0001). Task saturation is observed in simulated CCATT missions. Nontechnical skills correlate with task saturation. Task saturation is associated with worsening physiologic derangements in simulated patients. Reprint & Copyright © 2014 Association of Military Surgeons of the U.S.
NASA Technical Reports Server (NTRS)
Wang, Jih-Jong; Cronquist, Brian E.; McGowan, John E.; Katz, Richard B.
1997-01-01
The goals for a radiation hardened (RAD-HARD) and high reliability (HI-REL) field programmable gate array (FPGA) are described. The first qualified manufacturer list (QML) radiation hardened RH1280 and RH1020 were developed. The total radiation dose and single event effects observed on the antifuse FPGA RH1280 are reported on. Tradeoffs and the limitations in the single event upset hardening are discussed.
Modelling approaches: the case of schizophrenia.
Heeg, Bart M S; Damen, Joep; Buskens, Erik; Caleo, Sue; de Charro, Frank; van Hout, Ben A
2008-01-01
Schizophrenia is a chronic disease characterized by periods of relative stability interrupted by acute episodes (or relapses). The course of the disease may vary considerably between patients. Patient histories show considerable inter- and even intra-individual variability. We provide a critical assessment of the advantages and disadvantages of three modelling techniques that have been used in schizophrenia: decision trees, (cohort and micro-simulation) Markov models and discrete event simulation models. These modelling techniques are compared in terms of building time, data requirements, medico-scientific experience, simulation time, clinical representation, and their ability to deal with patient heterogeneity, the timing of events, prior events, patient interaction, interaction between co-variates and variability (first-order uncertainty). We note that, depending on the research question, the optimal modelling approach should be selected based on the expected differences between the comparators, the number of co-variates, the number of patient subgroups, the interactions between co-variates, and simulation time. Finally, it is argued that when micro-simulation is required for the cost-effectiveness analysis of schizophrenia treatments, a discrete event simulation model is best suited to accurately capture all of the relevant interdependencies in this chronic, highly heterogeneous disease with limited long-term follow-up data.
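A discrete event simulation of the kind the authors favor can be sketched in a few lines: events are drawn from waiting-time distributions and processed in time order from a queue. The rates and horizon below are illustrative, not calibrated to schizophrenia trial data:

```python
import heapq
import random

def simulate_patient(horizon_years=5.0, relapse_rate=0.8, rng=None):
    """Minimal discrete event simulation of one patient's relapse history.
    Exponential inter-relapse times; rates are illustrative only."""
    rng = rng or random.Random(0)
    events = []
    queue = [(rng.expovariate(relapse_rate), "relapse")]
    while queue:
        t, kind = heapq.heappop(queue)
        if t > horizon_years:
            break
        events.append((t, kind))
        # schedule the next relapse after this one
        heapq.heappush(queue, (t + rng.expovariate(relapse_rate), "relapse"))
    return events

history = simulate_patient(rng=random.Random(42))
print(len(history), "relapses in 5 simulated years")
```

Unlike a cohort Markov model, each simulated patient here carries an explicit event history, so prior events and covariates can condition future rates.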
Improving the Representation of Snow Crystal Properties with a Single-Moment Microphysics Scheme
NASA Technical Reports Server (NTRS)
Molthan, Andrew L.; Petersen, Walter A.; Case, Jonathan L.; Demek, Scott R.
2010-01-01
Single-moment microphysics schemes are utilized in an increasing number of applications and are widely available within numerical modeling packages, often executed in near real-time to aid in the issuance of weather forecasts and advisories. In order to simulate cloud microphysical and precipitation processes, a number of assumptions are made within these schemes. Snow crystals are often assumed to be spherical and of uniform density, and their size distribution intercept may be fixed to simplify calculation of the remaining parameters. Recently, the Canadian CloudSat/CALIPSO Validation Project (C3VP) provided aircraft observations of snow crystal size distributions and environmental state variables, sampling widespread snowfall associated with a passing extratropical cyclone on 22 January 2007. Aircraft instrumentation was supplemented by comparable surface estimations and sampling by two radars: the C-band, dual-polarimetric radar in King City, Ontario and the NASA CloudSat 94 GHz Cloud Profiling Radar. As radar systems respond to both hydrometeor mass and size distribution, they provide value when assessing the accuracy of cloud characteristics as simulated by a forecast model. However, simulation of the 94 GHz radar signal requires special attention, as radar backscatter is sensitive to the assumed crystal shape. Observations obtained during the 22 January 2007 event are used to validate assumptions of density and size distribution within the NASA Goddard six-class single-moment microphysics scheme. Two high resolution forecasts are performed on a 9-3-1 km grid, with C3VP-based alternative parameterizations incorporated and examined for improvement. In order to apply the CloudSat 94 GHz radar to model validation, the single scattering characteristics of various crystal types are used and demonstrate that the assumption of Mie spheres is insufficient for representing CloudSat reflectivity derived from winter precipitation. 
Furthermore, snow density and size distribution characteristics are allowed to vary with height, based upon direct aircraft estimates obtained from C3VP data. These combinations improve the representation of modeled clouds versus their radar-observed counterparts, based on profiles and vertical distributions of reflectivity. These meteorological events are commonplace within the mid-latitude cold season and present a challenge to operational forecasters. This study focuses on one event, likely representative of others during the winter season, and aims to improve the representation of snow for use in future operational forecasts.
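Single-moment schemes of the kind discussed above typically assume an exponential size distribution N(D) = N0·exp(-λD) with a fixed intercept N0 and diagnose the slope λ from the predicted mass content. A sketch under that common assumption (the intercept and density values are illustrative, not the Goddard scheme's):

```python
import math

def slope_parameter(mass_content, n0=3.0e6, rho_snow=100.0):
    """Diagnose the exponential-distribution slope lambda (1/m) from snow mass
    content (kg m^-3), assuming constant-density spheres. n0 (m^-4) and
    rho_snow (kg m^-3) are illustrative values, not the scheme's."""
    # M = integral of (pi/6) rho D^3 N0 exp(-lambda D) dD = pi rho N0 / lambda^4
    # => lambda = (pi rho N0 / M) ** (1/4)
    return (math.pi * rho_snow * n0 / mass_content) ** 0.25

print(slope_parameter(1.0e-4))  # slope in 1/m for 0.1 g m^-3 of snow
```

Allowing N0 and density to vary with height, as the study proposes, changes this closure without adding a second prognostic moment.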
Henricksen, Jared W; Altenburg, Catherine; Reeder, Ron W
2017-10-01
Despite efforts to prepare a psychologically safe environment, simulation participants are occasionally psychologically distressed. Instructing simulation educators about participant psychological risks and having a participant psychological distress action plan available to simulation educators may assist them as they seek to keep all participants psychologically safe. A Simulation Participant Psychological Safety Algorithm was designed to aid simulation educators as they debrief simulation participants perceived to have psychological distress and categorize these events as mild (level 1), moderate (level 2), or severe (level 3). A prebrief dedicated to creating a psychologically safe learning environment was held constant. The algorithm was used for 18 months in an active pediatric simulation program. Data collected included level of participant psychological distress as perceived and categorized by the simulation team using the algorithm, type of simulation that participants went through, who debriefed, and timing of when psychological distress was perceived to occur during the simulation session. The Kruskal-Wallis test was used to evaluate the relationship between events and simulation type, events and simulation educator team who debriefed, and timing of event during the simulation session. A total of 3900 participants went through 399 simulation sessions between August 1, 2014, and January 26, 2016. Thirty-four (<1%) simulation participants from 27 sessions (7%) were perceived to have an event. One participant was perceived to have a severe (level 3) psychological distress event. Events occurred more commonly in high-intensity simulations, with novice learners and with specific educator teams. Simulation type and simulation educator team were associated with occurrence of events (P < 0.001). There was no association between event timing and event level. 
Severe psychological distress as categorized by simulation personnel using the Simulation Participant Psychological Safety Algorithm is rare, with mild and moderate events being more common. The algorithm was used to teach simulation educators how to assist a participant who may be psychologically distressed and document perceived event severity.
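The Kruskal-Wallis test used in the analysis compares observations across groups via pooled ranks. A self-contained sketch of the H statistic (without the tie correction; the data are illustrative, not the study's):

```python
def kruskal_wallis_h(*groups):
    """Kruskal-Wallis H statistic (no tie correction), computed from pooled ranks."""
    pooled = sorted((x, gi) for gi, g in enumerate(groups) for x in g)
    n = len(pooled)
    ranks = {}  # group index -> list of (tie-averaged) ranks
    i = 0
    while i < n:
        j = i
        while j < n and pooled[j][0] == pooled[i][0]:
            j += 1
        avg = (i + 1 + j) / 2.0  # mean of 1-based ranks i+1 .. j for the tied block
        for k in range(i, j):
            ranks.setdefault(pooled[k][1], []).append(avg)
        i = j
    return 12.0 / (n * (n + 1)) * sum(
        len(r) * (sum(r) / len(r)) ** 2 for r in ranks.values()
    ) - 3 * (n + 1)

# Illustrative event counts per simulation type (not the study's data):
print(kruskal_wallis_h([0, 1, 0, 2], [3, 4, 5, 3], [1, 2, 1, 0]))
```

Large H indicates that at least one group's rank distribution differs; the p-value then comes from a chi-squared approximation.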
Predator-prey models with component Allee effect for predator reproduction.
Terry, Alan J
2015-12-01
We present four predator-prey models with component Allee effect for predator reproduction. Using numerical simulation results for our models, we describe how the customary definitions of component and demographic Allee effects, which work well for single species models, can be extended to predators in predator-prey models by assuming that the prey population is held fixed. We also find that when the prey population is not held fixed, then these customary definitions may lead to conceptual problems. After this discussion of definitions, we explore our four models, analytically and numerically. Each of our models has a fixed point that represents predator extinction, which is always locally stable. We prove that the predator will always die out either if the initial predator population is sufficiently small or if the initial prey population is sufficiently small. Through numerical simulations, we explore co-existence fixed points. In addition, we demonstrate, by simulation, the existence of a stable limit cycle in one of our models. Finally, we derive analytical conditions for a co-existence trapping region in three of our models, and show that the fourth model cannot possess a particular kind of co-existence trapping region. We punctuate our results with comments on their real-world implications; in particular, we mention the possibility of prey resurgence from mortality events, and the possibility of failure in a biological pest control program.
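A component Allee effect in predator reproduction, as studied above, is commonly modeled by multiplying the reproduction term by a saturating factor such as P/(P+A), so that per-capita reproduction drops at low predator density. A minimal Euler-integration sketch of one generic model of this type (the functional forms and parameters are illustrative, not the paper's four models):

```python
def simulate(n0, p0, dt=0.001, steps=200000,
             r=1.0, k=10.0, a=0.1, c=0.5, allee=2.0, m=0.2):
    """Euler integration of a generic predator-prey model whose predator
    reproduction carries a component Allee effect via the factor p/(p+allee).
    Functional forms and parameters are illustrative only."""
    n, p = n0, p0
    for _ in range(steps):
        dn = r * n * (1 - n / k) - a * n * p                 # logistic prey, predation loss
        dp = c * a * n * p * (p / (p + allee)) - m * p       # Allee-limited reproduction
        n, p = max(n + dt * dn, 0.0), max(p + dt * dp, 0.0)
    return n, p

n_end, p_end = simulate(5.0, 0.01)  # tiny initial predator population
print(n_end, p_end)
```

Starting the predator below the Allee-limited break-even density drives it extinct while the prey relaxes to carrying capacity, illustrating the locally stable predator-extinction state the paper proves.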
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, Zuwei; Zhao, Haibo, E-mail: klinsmannzhb@163.com; Zheng, Chuguang
2015-01-15
This paper proposes a comprehensive framework for accelerating population balance-Monte Carlo (PBMC) simulation of particle coagulation dynamics. By combining a Markov jump model, a weighted majorant kernel and GPU (graphics processing unit) parallel computing, a significant gain in computational efficiency is achieved. The Markov jump model constructs a coagulation-rule matrix of differentially-weighted simulation particles, so as to capture the time evolution of the particle size distribution with low statistical noise over the full size range and to reduce the number of time loops as far as possible. Three coagulation rules are highlighted, and it is found that constructing an appropriate coagulation rule provides a route to a compromise between the accuracy and cost of PBMC methods. Further, in order to avoid double looping over all simulation particles when considering two-particle events (typically, particle coagulation), the weighted majorant kernel is introduced to estimate the maximum coagulation rates used for acceptance-rejection processes by single-looping over all particles; meanwhile, the mean time step of a coagulation event is estimated by summing the coagulation kernels of rejected and accepted particle pairs. The computational load of these fast differentially-weighted PBMC simulations (based on the Markov jump model) is reduced greatly, becoming proportional to the number of simulation particles in a zero-dimensional system (single cell). Finally, for a spatially inhomogeneous multi-dimensional (multi-cell) simulation, the proposed fast PBMC is performed in each cell, and multiple cells are processed in parallel by multiple cores on a GPU that can execute massively threaded data-parallel tasks to obtain a remarkable speedup ratio (compared with CPU computation, the speedup ratio of GPU parallel computing is as high as 200 in a case of 100 cells with 10 000 simulation particles per cell).
These accelerating approaches of PBMC are demonstrated in a physically realistic Brownian coagulation case. The computational accuracy is validated against a benchmark solution from the discrete-sectional method. The simulation results show that the comprehensive approach attains a very favorable improvement in cost without sacrificing computational accuracy.
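The majorant-kernel acceptance-rejection idea can be illustrated with a toy direct-simulation Monte Carlo: a cheap upper bound on the pairwise kernel fixes the event rate, candidate pairs are drawn uniformly, and each is accepted with probability K/K̂. A sketch with the additive kernel and equally weighted particles (the paper's differential weighting and GPU parallelism are omitted):

```python
import random

def coagulate(volumes, t_end, rng):
    """Acceptance-rejection Monte Carlo coagulation using the additive kernel
    K(u, v) = u + v and the majorant 2*max(v), found with a single loop over
    particles. A toy analogue of the paper's weighted-majorant idea."""
    t = 0.0
    while t < t_end and len(volumes) > 1:
        n = len(volumes)
        k_maj = 2.0 * max(volumes)             # single-loop majorant bound
        total_rate = 0.5 * n * (n - 1) * k_maj
        t += rng.expovariate(total_rate)       # waiting time from the majorant rate
        i, j = rng.sample(range(n), 2)
        # accept the candidate pair with probability K / K_majorant
        if rng.random() < (volumes[i] + volumes[j]) / k_maj:
            volumes[i] += volumes[j]
            volumes.pop(j)
    return volumes

out = coagulate([1.0] * 200, t_end=1e-3, rng=random.Random(1))
print(len(out), sum(out))  # particle count drops; total volume is conserved
```

The majorant avoids the O(N^2) double loop per event: only rejected attempts pay for the overestimate, and total particle volume is conserved exactly.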
NASA Technical Reports Server (NTRS)
Iguchi, Takamichi; Nakajima, Teruyuki; Khain, Alexander P.; Saito, Kazuo; Takemura, Toshihiko; Okamoto, Hajime; Nishizawa, Tomoaki; Tao, Wei-Kuo
2012-01-01
Numerical weather prediction (NWP) simulations using the Japan Meteorological Agency Nonhydrostatic Model (JMA-NHM) are conducted for three precipitation events observed by shipborne or spaceborne W-band cloud radars. Spectral bin and single-moment bulk cloud microphysics schemes are employed separately for an intercomparative study. A radar product simulator that is compatible with both microphysics schemes is developed to enable a direct comparison between simulation and observation with respect to the equivalent radar reflectivity factor Ze, Doppler velocity (DV), and path-integrated attenuation (PIA). In general, the bin model simulation shows better agreement with the observed data than the bulk model simulation. The correction of the terminal fall velocities of snowflakes using those of hail further improves the result of the bin model simulation. The results indicate that there are substantial uncertainties in the mass-size and size-terminal fall velocity relations of snowflakes or in the calculation of terminal fall velocity of snow aloft. For the bulk microphysics, the overestimation of Ze is observed as a result of a significant predominance of snow over cloud ice due to substantial deposition growth directly to snow. The DV comparison shows that a correction for the fall velocity of hydrometeors considering a change of particle size should be introduced even in single-moment bulk cloud microphysics.
Demeke, Tigst; Eng, Monika
2018-05-01
Droplet digital PCR (ddPCR) has been used for absolute quantification of genetically engineered (GE) events. Absolute quantification of GE events by duplex ddPCR requires appropriate primers and probes for the target and reference gene sequences in order to accurately determine the amount of GE material. Single copy reference genes are generally preferred for absolute quantification of GE events by ddPCR. No study has compared reference genes for absolute quantification of GE canola events by ddPCR. The suitability of four endogenous reference sequences (HMG-I/Y, FatA(A), CruA and Ccf) for absolute quantification of GE canola events by ddPCR was investigated, along with the effect of DNA extraction methods and DNA quality on the assessment of reference gene copy numbers. ddPCR results were affected by the use of single- vs. two-copy reference genes. The single copy reference gene FatA(A) was found to be stable and suitable for absolute quantification of GE canola events by ddPCR. For the copy numbers measured, the HMG-I/Y reference gene was less consistent than FatA(A). The expected ddPCR values were underestimated when CruA and Ccf (two-copy endogenous Cruciferin sequences) were used, because of their higher copy number. It is important to apply an adjustment if two-copy reference genes are used for ddPCR in order to obtain accurate results. In contrast, real-time quantitative PCR results were not affected by the use of single- vs. two-copy reference genes.
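The adjustment for two-copy reference genes mentioned above is simple arithmetic: a reference present twice per genome yields twice as many positive copies per genome, so the target-to-reference ratio must be rescaled. A sketch with illustrative copy concentrations (not measured values):

```python
def ge_fraction(target_copies, ref_copies, ref_copies_per_genome=1):
    """Estimate the GE fraction from ddPCR copy concentrations. With a
    two-copy reference gene the measured reference concentration is twice
    the genome count, so the ratio must be scaled accordingly."""
    genomes = ref_copies / ref_copies_per_genome
    return target_copies / genomes

# Same hypothetical sample quantified against a single- vs. a two-copy reference:
print(ge_fraction(50.0, 1000.0, ref_copies_per_genome=1))  # 0.05
print(ge_fraction(50.0, 2000.0, ref_copies_per_genome=2))  # 0.05 after adjustment
```

Without the `ref_copies_per_genome=2` correction, the second measurement would report half the true GE fraction, matching the underestimation the study observed for CruA and Ccf.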
Kongnakorn, Thitima; Sterchele, James A; Salvador, Christopher G; Getsios, Denis; Mwamburi, Mkaya
2014-01-01
The objective of this analysis was to evaluate the cost-effectiveness of using bendamustine versus alemtuzumab or bendamustine versus chlorambucil as a first-line therapy in patients with Binet stage B or C chronic lymphocytic leukemia (CLL) in the US. A discrete event simulation of the disease course of CLL was developed to evaluate the economic implications of single-agent treatment with bendamustine, alemtuzumab, or chlorambucil, which are indicated for a treatment-naïve patient population with Binet stage B or C CLL. Data from clinical trials were used to create a simulated patient population, risk equations for progression-free survival and survival post disease progression, response rates, and rates of adverse events. Costs from a US health care payer perspective in 2012 US dollars, survival (life years), and quality-adjusted life years (QALYs) were estimated over a patient's lifetime; all were discounted at 3% per year. Compared with alemtuzumab, bendamustine was considered to be a dominant treatment providing greater benefit (6.10 versus 5.37 life years and 4.02 versus 3.45 QALYs) at lower cost ($78,776 versus $121,441). Compared with chlorambucil, bendamustine was associated with higher costs ($78,776 versus $42,337) but with improved health outcomes (6.10 versus 5.21 life years and 4.02 versus 3.30 QALYs), resulting in incremental cost-effectiveness ratios of $40,971 per life year gained and $50,619 per QALY gained. Bendamustine is expected to provide cost savings and greater health benefit than alemtuzumab in treatment-naïve patients with CLL. Furthermore, it can be considered as a cost-effective treatment providing health benefits at an acceptable cost versus chlorambucil in the US.
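The incremental cost-effectiveness ratios above follow from the standard definition ICER = ΔCost/ΔEffect. Recomputing from the rounded figures quoted in the abstract (the small differences from the published $50,619 and $40,971 presumably reflect rounding of the inputs):

```python
def icer(cost_new, cost_old, effect_new, effect_old):
    """Incremental cost-effectiveness ratio: extra cost per unit of extra benefit."""
    return (cost_new - cost_old) / (effect_new - effect_old)

# Bendamustine vs. chlorambucil, using the rounded figures quoted above:
per_qaly = icer(78776, 42337, 4.02, 3.30)
per_ly = icer(78776, 42337, 6.10, 5.21)
print(round(per_qaly), round(per_ly))
```

The bendamustine-vs-alemtuzumab comparison needs no ICER: it is dominant (lower cost, greater benefit), so the ratio is not reported.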
Crystal nucleation of colloidal hard dumbbells
NASA Astrophysics Data System (ADS)
Ni, Ran; Dijkstra, Marjolein
2011-01-01
Using computer simulations, we investigate the homogeneous crystal nucleation in suspensions of colloidal hard dumbbells. The free energy barriers are determined by Monte Carlo simulations using the umbrella sampling technique. We calculate the nucleation rates for the plastic crystal and the aperiodic crystal phase using the kinetic prefactor as determined from event driven molecular dynamics simulations. We find good agreement with the nucleation rates determined from spontaneous nucleation events observed in event driven molecular dynamics simulations within error bars of one order of magnitude. We study the effect of aspect ratio of the dumbbells on the nucleation of plastic and aperiodic crystal phases, and we also determine the structure of the critical nuclei. Moreover, we find that the nucleation of the aligned close-packed crystal structure is strongly suppressed by a high free energy barrier at low supersaturations and slow dynamics at high supersaturations.
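Nucleation rates of the kind computed above combine an umbrella-sampling free energy barrier with a kinetic prefactor, J = κ·exp(-ΔG*/kBT) in classical nucleation theory. A sketch with illustrative numbers, showing how a higher barrier suppresses nucleation of the aligned crystal at low supersaturation:

```python
import math

def nucleation_rate(kinetic_prefactor, barrier_kt):
    """Classical nucleation theory: rate = kinetic prefactor * exp(-dG*/kBT).
    The barrier is in units of kBT; both numbers here are illustrative."""
    return kinetic_prefactor * math.exp(-barrier_kt)

# Two competing phases with the same prefactor but different barriers:
j_plastic = nucleation_rate(1.0e-2, 20.0)
j_aligned = nucleation_rate(1.0e-2, 40.0)
print(j_plastic / j_aligned)  # the 20 kBT higher barrier suppresses the rate by e^20
```

Because the barrier enters exponentially, the one-order-of-magnitude error bars quoted for the rates correspond to only a few kBT of barrier uncertainty.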
Soil erodibility variability in laboratory and field rainfall simulations
NASA Astrophysics Data System (ADS)
Szabó, Boglárka; Szabó, Judit; Jakab, Gergely; Centeri, Csaba; Szalai, Zoltán
2017-04-01
Rainfall simulation experiments are the most common way to observe and model soil erosion processes under both in situ and ex situ conditions. In soil erosion modelling, two of the most important quantities are the annual soil loss and the soil erodibility, which represents the effect of soil properties on soil loss and the soil's resistance against water erosion. The amount of runoff and soil loss can differ for the same soil type, while its characteristics determine the soil erodibility factor; this leads to uncertainties regarding soil erodibility. Soil loss and soil erodibility were examined by investigating the same soil under laboratory and field conditions with rainfall simulators. The comparative measurements were carried out in the laboratory on a 0.5 m2 plot and in the field (Shower Power-02) on a 6 m2 plot, with slope angles of 5% and 12% and rainfall intensities of 30 and 90 mm/h. The main idea was to examine and compare the soil erodibility and its variability arising from the same soil but different rainfall simulator types. The applied models were the USLE, the nomograph, and other equations that concern single rainfall events. The results show differences between the field and laboratory experiments and between the different calculations. Runoff and soil loss for whole rainfall events were significantly higher in the laboratory experiments, which affected the soil erodibility values too. These differences may originate from the plot size. The main research questions are: How should we handle the soil erodibility factor and its significant variability? What is the best solution for determining soil erodibility?
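The USLE referenced above predicts annual soil loss as the product A = R·K·LS·C·P, so the erodibility factor K scales the prediction linearly, which is why its variability between simulator setups matters. A sketch with purely illustrative factor values:

```python
def usle_soil_loss(r, k, ls, c, p):
    """USLE: A = R * K * LS * C * P (annual soil loss per unit area).
    R: rainfall erosivity, K: soil erodibility, LS: slope length/steepness,
    C: cover-management, P: support practice. Values below are illustrative."""
    return r * k * ls * c * p

# Doubling the erodibility K doubles predicted loss, all else equal:
a1 = usle_soil_loss(r=500.0, k=0.02, ls=1.2, c=0.3, p=1.0)
a2 = usle_soil_loss(r=500.0, k=0.04, ls=1.2, c=0.3, p=1.0)
print(a1, a2)
```

Because K enters multiplicatively, any plot-size bias in measured soil loss propagates directly into the back-calculated erodibility.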
SIERRA - A 3-D device simulator for reliability modeling
NASA Astrophysics Data System (ADS)
Chern, Jue-Hsien; Arledge, Lawrence A., Jr.; Yang, Ping; Maeda, John T.
1989-05-01
SIERRA is a three-dimensional general-purpose semiconductor-device simulation program which serves as a foundation for investigating integrated-circuit (IC) device and reliability issues. This program solves the Poisson and continuity equations in silicon under dc, transient, and small-signal conditions. Executing on a vector/parallel minisupercomputer, SIERRA utilizes a matrix solver which uses an incomplete LU (ILU) preconditioned conjugate gradient square (CGS, BCG) method. The ILU-CGS method provides a good compromise between memory size and convergence rate. The authors have observed a 5x to 7x speedup over standard direct methods in simulations of transient problems containing highly coupled Poisson and continuity equations such as those found in reliability-oriented simulations. The application of SIERRA to parasitic CMOS latchup and dynamic random-access memory single-event-upset studies is described.
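The ILU-preconditioned CGS approach described above can be sketched with standard sparse linear algebra tools; here SciPy stands in for SIERRA's custom solver, and a small tridiagonal system stands in for the much larger coupled Poisson/continuity matrices:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Small sparse system standing in for the discretized device equations:
n = 100
A = sp.diags([-1.0, 4.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

ilu = spla.spilu(A)                                  # incomplete LU factorization
M = spla.LinearOperator((n, n), matvec=ilu.solve)    # preconditioner M ~ A^-1
x, info = spla.cgs(A, b, M=M)                        # ILU-preconditioned CGS

print(info, np.linalg.norm(A @ x - b))  # info == 0 means the solve converged
```

The trade-off the abstract describes is visible here: the ILU factors cost extra memory over unpreconditioned CGS, but the better-conditioned system converges in far fewer iterations than a plain iterative solve.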
Remote sensing of aerosol plumes: a semianalytical model
NASA Astrophysics Data System (ADS)
Alakian, Alexandre; Marion, Rodolphe; Briottet, Xavier
2008-04-01
A semianalytical model, named APOM (aerosol plume optical model) and predicting the radiative effects of aerosol plumes in the spectral range [0.4,2.5 μm], is presented in the case of nadir viewing. It is devoted to the analysis of plumes arising from single strong emission events (high optical depths) such as fires or industrial discharges. The scene is represented by a standard atmosphere (molecules and natural aerosols) on which a plume layer is added at the bottom. The estimated at-sensor reflectance depends on the atmosphere without plume, the solar zenith angle, the plume optical properties (optical depth, single-scattering albedo, and asymmetry parameter), the ground reflectance, and the wavelength. Its mathematical expression as well as its numerical coefficients are derived from MODTRAN4 radiative transfer simulations. The DISORT option is used with 16 fluxes to provide a sufficiently accurate calculation of multiple scattering effects that are important for dense smokes. Model accuracy is assessed by using a set of simulations performed in the case of biomass burning and industrial plumes. APOM proves to be accurate and robust for solar zenith angles between 0° and 60°, whatever the sensor altitude and the standard atmosphere, for plume phase functions defined from urban and rural models, and for plume locations that extend from the ground to a height below 3 km. The modeling errors in the at-sensor reflectance are on average below 0.002. They can reach values of 0.01, but these correspond to low relative errors (below 3% on average). This model can be used for forward modeling (quick simulations of multi/hyperspectral images and help in sensor design) as well as for the retrieval of the plume optical properties from remotely sensed images.
NASA Astrophysics Data System (ADS)
Crespo, Paulo; Reis, João; Couceiro, Miguel; Blanco, Alberto; Ferreira, Nuno C.; Marques, Rui Ferreira; Martins, Paulo; Fonte, Paulo
2012-06-01
A single-bed, whole-body positron emission tomograph based on resistive plate chambers has been proposed (RPC-PET). An RPC-PET system with an axial field-of-view (AFOV) of 2.4 m has been shown in simulation to have higher system sensitivity using the NEMA NU2-1994 protocol than commercial PET scanners. However, that protocol does not correlate directly with lesion detectability. The latter is better correlated with the planar (slice) sensitivity, obtained with a NEMA NU2-2001 line-source phantom. After validation with published data for the GE Advance, Siemens TruePoint and TrueV, we study by simulation their axial sensitivity profiles, comparing results with RPC-PET. Planar sensitivities indicate that RPC-PET is expected to outperform 16-cm (22-cm) AFOV scanners by a factor 5.8 (3.0) for 70-cm-long scans. For 1.5-m scans (head to mid-legs), the sensitivity gain increases to 11.7 (6.7). Yet, PET systems with large AFOV provide larger coverage but also larger attenuation in the object. We studied these competing effects with both spherical- and line-sources immersed in a 27-cm-diameter water cylinder. For 1.5-m-long scans, the planar sensitivity drops one order of magnitude in all scanners, with RPC-PET outperforming 16-cm (22-cm) AFOV scanners by a factor 9.2 (5.3) without considering the TOF benefit. A gain in the effective sensitivity is expected with TOF iterative reconstruction. Finally, object scatter in an anthropomorphic phantom is similar for RPC-PET and modern, scintillator-based scanners, although RPC-PET benefits further if its TOF information is utilized to exclude scatter events occurring outside the anthropomorphic phantom.
2017-03-01
activities, as well as other causes of sedimentation (e.g., agricultural practices, storm events, tidal flows). BACKGROUND AND PROBLEM: Many naturally... effects originating from many sources (e.g., agriculture, storm event, tidal flows) on multiple aquatic species and life stages. Multiple experimental
Divergent Thinking and Constructing Episodic Simulations
Addis, Donna Rose; Pan, Ling; Musicaro, Regina; Schacter, Daniel L.
2014-01-01
Divergent thinking likely plays an important role in simulating autobiographical events. We investigated whether divergent thinking is differentially associated with the ability to construct detailed imagined future and imagined past events as opposed to recalling past events. We also examined whether age differences in divergent thinking might underlie the reduced episodic detail generated by older adults. The richness of episodic detail comprising autobiographical events in young and older adults was assessed using the Autobiographical Interview. Divergent thinking abilities were measured using the Alternate Uses Task. Divergent thinking was significantly associated with the amount of episodic detail for imagined future events. Moreover, while age was significantly associated with imagined episodic detail, this effect was strongly related to age-related changes in episodic retrieval rather than divergent thinking. PMID:25483132
Simulation of Aircraft Engine Blade-Out Structural Dynamics
NASA Technical Reports Server (NTRS)
Lawrence, Charles; Carney, Kelly; Gallardo, Vicente
2001-01-01
A primary concern of aircraft structure designers is the accurate simulation of the blade-out event and the subsequent windmilling of the engine. Reliable simulations of the blade-out event are required to ensure structural integrity during flight as well as to guarantee successful blade-out certification testing. The system simulation includes the lost blade loadings and the interactions between the rotating turbomachinery and the remaining aircraft structural components. General-purpose finite element structural analysis codes such as MSC NASTRAN are typically used, and special provisions are made to include transient effects from the blade loss and rotational effects resulting from the engine's turbomachinery. The present study provides the equations of motion for rotordynamic response including the effect of spooldown speed and rotor unbalance and examines the effects of these terms on a cantilevered rotor. The effect of spooldown speed is found to be greater with increasing spooldown rate. The parametric term resulting from the mass unbalance has a more significant effect on the rotordynamic response than does the spooldown term. The parametric term affects both the peak amplitudes as well as the resonant frequencies of the rotor.
Adaptive Stress Testing of Airborne Collision Avoidance Systems
NASA Technical Reports Server (NTRS)
Lee, Ritchie; Kochenderfer, Mykel J.; Mengshoel, Ole J.; Brat, Guillaume P.; Owen, Michael P.
2015-01-01
This paper presents a scalable method to efficiently search for the most likely state trajectory leading to an event given only a simulator of a system. Our approach uses a reinforcement learning formulation and solves it using Monte Carlo Tree Search (MCTS). The approach places very few requirements on the underlying system, requiring only that the simulator provide some basic controls, the ability to evaluate certain conditions, and a mechanism to control the stochasticity in the system. Access to the system state is not required, allowing the method to support systems with hidden state. The method is applied to stress test a prototype aircraft collision avoidance system to identify trajectories that are likely to lead to near mid-air collisions. We present results for both single and multi-threat encounters and discuss their relevance. Compared with direct Monte Carlo search, this MCTS method performs significantly better both in finding events and in maximizing their likelihood.
Kang, Hyojung; Orlowsky, Rachel L; Gerling, Gregory J
2017-12-01
In mammals, touch is encoded by sensory receptors embedded in the skin. For one class of receptors in the mouse, the architecture of its Merkel cells, unmyelinated neurites, and heminodes follows particular renewal and remodeling trends over hair cycle stages from ages 4 to 10 weeks. As it is currently impossible to observe such trends across a single animal's hair cycle, this work employs discrete event simulation to identify and evaluate policies of Merkel cell and heminode dynamics. Well matching the observed data, the results show that the baseline model replicates dynamic remodeling behaviors between stages of the hair cycle - based on particular addition and removal policies and estimated probabilities tied to constituent parts of Merkel cells, terminal branch neurites and heminodes. The analysis shows further that certain policies hold greater influence than others. This use of computation is a novel approach to understanding neuronal development.
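The policy idea can be sketched in a few lines of stochastic simulation (the stage names, site counts, and probabilities below are invented placeholders, not the paper's estimated values):

```python
import random

# Illustrative policy simulation: per hair-cycle stage, each existing
# constituent (e.g., a Merkel cell or heminode) is independently removed
# with probability p_remove, and new ones are added with probability
# p_add at each of a fixed number of candidate sites.
POLICY = {
    "anagen":  {"p_add": 0.30, "p_remove": 0.05},   # growth-dominated stage
    "catagen": {"p_add": 0.05, "p_remove": 0.25},   # removal-dominated stage
    "telogen": {"p_add": 0.10, "p_remove": 0.10},   # roughly balanced stage
}

def step(count, p_add, p_remove, sites=20, rng=random):
    removed = sum(rng.random() < p_remove for _ in range(count))
    added = sum(rng.random() < p_add for _ in range(sites))
    return count - removed + added

def run_cycle(initial=15, stages=("anagen", "catagen", "telogen"), rng=random):
    """Return constituent counts at the start and after each stage."""
    counts = [initial]
    for s in stages:
        p = POLICY[s]
        counts.append(step(counts[-1], p["p_add"], p["p_remove"], rng=rng))
    return counts
```

Averaging many such runs and comparing the stage-to-stage trends against observed counts is the basic pattern for scoring candidate addition/removal policies.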
Prediction of Intensity Change Subsequent to Concentric Eyewall Events
NASA Astrophysics Data System (ADS)
Mauk, Rachel Grant
Concentric eyewall events have been documented numerous times in intense tropical cyclones (TCs) over the last two decades. During a concentric eyewall event, an outer (secondary) eyewall forms around the inner (primary) eyewall. Improved instrumentation on aircraft and satellites greatly increases the likelihood of detecting an event. Despite the increased ability to detect such events, forecasts of intensity changes during and after these events remain poor. When concentric eyewall events occur near land, accurate intensity change predictions are especially critical to ensure proper emergency preparations and staging of recovery assets. A nineteen-year (1997-2015) database of concentric eyewall events is developed by analyzing microwave satellite imagery, aircraft- and land-based radar, and other published documents. Events are identified in both the North Atlantic and eastern North Pacific basins. TCs are categorized as single (1 event), serial (≥ 2 events) and super-serial (≥ 3 events). Key findings here include distinct spatial patterns for single and serial Atlantic TCs, a broad seasonal distribution for eastern North Pacific TCs, and apparent ENSO-related variability in both basins. The intensity change subsequent to the concentric eyewall event is calculated from the HURDAT2 database at time points relative to the start and to the end of the event. Intensity change is then categorized as Weaken (≤ -10 kt), Maintain (±5 kt), and Strengthen (≥ 10 kt). Environmental conditions in which each event occurred are analyzed based on the SHIPS diagnostic files. Oceanic, dynamic, thermodynamic, and TC status predictors are selected for testing in a multiple discriminant analysis procedure to determine which variables successfully discriminate the intensity change category and the occurrence of additional concentric eyewall events. Intensity models are created for 12 h, 24 h, 36 h, and 48 h after the concentric eyewall events end. 
Leave-one-out cross validation is performed on each set of discriminators to generate classifications, which are then compared to observations. For each model, the top combinations achieve 80-95% overall accuracy in classifying TCs based on the environmental characteristics, although Maintain systems are frequently misclassified. The third part of this dissertation employs the Weather Research and Forecasting (WRF) model to further investigate concentric eyewall events. Two serial Atlantic concentric eyewall cases (Katrina 2005 and Wilma 2005) are selected from the original study set, and WRF simulations are performed using several model designs. Despite strong evidence from multiple sources that serial concentric eyewalls formed in both hurricanes, the WRF simulations did not produce identifiable concentric eyewall structures for Katrina, and produced only transient structures for Wilma. Possible reasons for the lack of concentric eyewall formation are discussed, including model resolution, microphysics, and data sources.
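Leave-one-out cross validation of a discriminant classifier follows a simple pattern, sketched here with a nearest-centroid stand-in for the multiple discriminant analysis (the toy predictors and labels are invented for illustration):

```python
# Leave-one-out cross validation: hold out one case, fit on the rest,
# classify the held-out case, repeat for every case, report accuracy.
# (Assumes every class has at least two samples.)

def centroid(rows):
    n = len(rows)
    return [sum(r[i] for r in rows) / n for i in range(len(rows[0]))]

def classify(x, centroids):
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(centroids, key=lambda label: dist2(x, centroids[label]))

def loocv_accuracy(X, y):
    correct = 0
    for i in range(len(X)):
        # Hold out sample i, fit class centroids on the remaining samples.
        Xtr = [x for j, x in enumerate(X) if j != i]
        ytr = [c for j, c in enumerate(y) if j != i]
        cents = {label: centroid([x for x, c in zip(Xtr, ytr) if c == label])
                 for label in set(ytr)}
        correct += classify(X[i], cents) == y[i]
    return correct / len(X)
```

With, say, two hypothetical predictors (shear, SST anomaly) and the intensity-change categories as labels, `loocv_accuracy` gives an out-of-sample accuracy comparable in spirit to the 80-95% figures quoted above.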
Unsteady Aerodynamic Simulations of a Finned Projectile at a Supersonic Speed With Jet Interaction
2014-06-01
[Fragment of the report's contents and list of figures: Section 4.4, Transient Effects During the Jet Event and Time-Accuracy of…; Figure 27, transient effects of the jet maneuver event for the no initial angular rate case; Figure 28, effect of time step on the coupled solution for the initial low roll rate case: (a) roll rate, (b) roll angle.]
Effect of micro-topography and undrained shear strength on soil erosion
NASA Astrophysics Data System (ADS)
Todisco, Francesca; Vergni, Lorenzo; Vinci, Alessandra; Torri, Dino
2017-04-01
An experiment to evaluate the effect of pre-event soil surface conditions on the dynamics of the interrill erosion process was performed at the Masse experimental station (Italy) in a replicated 1 m × 1 m plot, located on a 16% slope in a silt-clay-loam soil and equipped with a nozzle-type rainfall simulator. Two experiments were performed; each started from a just-ploughed bare surface and included 3 simulations (I, II and III in the first experiment; IV, V and VI in the second), carried out within a few days. A 30 min pre-wetting phase ensured almost constant initial soil moisture (mean = 31%, CV = 5%) and bulk density (mean = 1.3 g/cm3, CV = 3%). Rainfall intensity was kept constant (mean = 67 mm/h, CV = 2.7%). The independent variables were the initial soil surface conditions which, progressively modified by the rainfall-runoff process, differed among the three successive simulations. The initial and final soil surface micro-topography and undrained shear strength, T, were monitored through photogrammetric surveys (with an iPhone 6 Plus) and Torvane tests (pocket torvane, imposed shear surface at 0.5 cm below the soil surface, plate diameter 5 cm, 0.2186 full scale, complete revolution 360°, test done on a saturated soil surface with water standing at the surface). Runoff, Q, runoff coefficient, Qr, soil loss, SL, and sediment concentration, C, were measured every 5 min. The particle size distribution was also determined. During the simulations, Q increased monotonically with a typically concave trend; similar considerations apply to the other variables. A higher frequency of roughness values, RR (the vertical distance between the surface and a reference horizontal plane, obtained by removing the slope effect), below a fixed amount was measured at the final than at the initial step of each simulation, and within each experiment between successive simulations. Therefore, roughness decreases as Q, SL and C increase. 
In general, in the simulations equidistant from ploughing (I-IV, II-V, III-VI), the dynamics of Q, SL and C in the second experiment lie slightly above those of the first. Indeed, although the frequency distributions of the initial RR of the first simulation of each experiment (I and IV) almost overlap, a higher frequency of RR values below a fixed amount was measured in the second experiment (RR-V > RR-II and RR-VI > RR-III). Higher T values were often measured at the final than at the initial step of each simulation, due to sealing and crusting processes associated with surface smoothing. These and other results open interesting scenarios in the study of the dynamics of the erosion process, with particular reference to the relationship between the characteristics of the soil surface and the climatic and hydrological forcing at both event and intra-event time scales. In addition, some results offer discussion points on the dynamics of soil erodibility, showing that the sediment concentration behavior cannot be fully explained by the runoff dynamics.
Schmidt, Robert L; Howard, Kirsten; Hall, Brian J; Layfield, Lester J
2012-12-01
Sample adequacy is an important aspect of overall fine-needle aspiration cytology (FNAC) performance. FNAC effectiveness is augmented by an increasing number of needle passes, but increased needle passes are associated with higher costs and greater risk of adverse events. The objective of this study was to compare the impact of several different sampling policies on FNAC effectiveness and adverse event rates using discrete event simulation. We compared 8 different sampling policies in 12 different sampling environments. All sampling policies were effective when the per-pass accuracy is high (>80%). Rapid on-site evaluation (ROSE) improves FNAC effectiveness when the per-pass adequacy rate is low. ROSE is unlikely to be cost-effective in sampling environments in which the per-pass adequacy is high. Alternative ROSE assessors (eg, cytotechnologists) may be a cost-effective alternative to pathologists when the per-pass adequacy rate is moderate (60%-80%) or when the number of needle passes is limited.
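The policy comparison can be sketched as a small Monte Carlo experiment (the per-pass adequacy values and pass limits below are illustrative, not the study's calibrated inputs):

```python
import random

# Compare two sampling policies under a per-pass adequacy probability p:
# a fixed number of passes without feedback, versus ROSE (stop as soon as
# a pass is judged adequate on-site, up to max_passes).

def fixed_policy(p, n_passes, rng):
    passes = n_passes                 # all passes are always performed
    adequate = any(rng.random() < p for _ in range(n_passes))
    return adequate, passes

def rose_policy(p, max_passes, rng):
    for k in range(1, max_passes + 1):
        if rng.random() < p:          # on-site evaluation confirms adequacy
            return True, k            # stop early: fewer passes, fewer adverse events
    return False, max_passes

def evaluate(policy, p, n, trials=20000, seed=1):
    rng = random.Random(seed)
    ok = total = 0
    for _ in range(trials):
        adequate, passes = policy(p, n, rng)
        ok += adequate
        total += passes
    return ok / trials, total / trials   # (adequacy rate, mean passes)
```

At a low per-pass adequacy (e.g., p = 0.4) both policies reach the same overall adequacy, but ROSE does so with markedly fewer passes on average, which is the mechanism behind its cost-effectiveness in low-adequacy settings; as p approaches 1, the pass savings shrink and the on-site evaluation cost dominates.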
Simulating an Enactment Effect: Pronouns Guide Action Simulation during Narrative Comprehension
ERIC Educational Resources Information Center
Ditman, Tali; Brunye, Tad T.; Mahoney, Caroline R.; Taylor, Holly A.
2010-01-01
Recent research has suggested that reading involves the mental simulation of events and actions described in a text. It is possible however that previous findings did not tap into processes engaged during natural reading but rather those triggered by task demands. The present study examined whether readers spontaneously mentally simulate the…
Waves, Plumes and Bubbles from Jupiter Comet Impacts
NASA Astrophysics Data System (ADS)
Palotai, Csaba J.; Sankar, Ramanakumar; McCabe, Tyler; Korycansky, Donald
2017-10-01
We present results from our numerical simulations of jovian comet impacts that investigate various phases of the Shoemaker-Levy 9 (SL9) and the 2009 impacts into Jupiter's atmosphere. Our work includes a linked series of observationally constrained, three-dimensional radiative-hydrodynamic simulations to model the impact, plume blowout, plume flight/splash, and wave-propagation phases of those impact events. Studying these stages using a single model is challenging because the spatial and temporal scales and the temperature range of those phases may differ by orders of magnitude (Harrington et al. 2004). In our simulations we model subsequent phases starting with the interpolation of the results of previous simulations onto a new, larger grid that is optimized for capturing all key physics of the relevant phenomena while maintaining computational efficiency. This enables us to carry out end-to-end simulations that require no ad hoc initial conditions. In this work, we focus on the waves generated by various phenomena during the impact event and study the temporal evolution of their position and speed. In particular, we investigate the shocks generated by the impactor during atmospheric entry, the expansion of the ejected plume and the ascent of the hot bubble of material from terminal depth. These results are compared to the observed characteristics of the expanding SL9 rings (Hammel et al. 1995). Additionally, we present results from our sensitivity tests that focus on studying the differences in the ejecta plume generation using various impactor parameters (e.g., impact angle, impactor size, material, etc.). These simulations are used to explain various phenomena related to the SL9 event and to constrain the characteristics of the unknown 2009 impactor body. This research was supported by National Science Foundation Grant AST-1627409.
Pakmor, Rüdiger; Kromer, Markus; Röpke, Friedrich K; Sim, Stuart A; Ruiter, Ashley J; Hillebrandt, Wolfgang
2010-01-07
Type Ia supernovae are thought to result from thermonuclear explosions of carbon-oxygen white dwarf stars. Existing models generally explain the observed properties, with the exception of the sub-luminous 1991bg-like supernovae. It has long been suspected that the merger of two white dwarfs could give rise to a type Ia event, but hitherto simulations have failed to produce an explosion. Here we report a simulation of the merger of two equal-mass white dwarfs that leads to a sub-luminous explosion, although at the expense of requiring a single common-envelope phase, and component masses of approximately 0.9 M⊙. The light curve is too broad, but the synthesized spectra, red colour and low expansion velocities are all close to what is observed for sub-luminous 1991bg-like events. Although the mass ratios can be slightly less than one and still produce a sub-luminous event, the masses have to be in the range 0.83 M⊙ to 0.9 M⊙.
The Effects of a Duathlon Simulation on Ventilatory Threshold and Running Economy
Berry, Nathaniel T.; Wideman, Laurie; Shields, Edgar W.; Battaglini, Claudio L.
2016-01-01
Multisport events continue to grow in popularity among recreational, amateur, and professional athletes around the world. This study aimed to determine the compounding effects of the initial run and cycling legs of an International Triathlon Union (ITU) Duathlon simulation on maximal oxygen uptake (VO2max), ventilatory threshold (VT) and running economy (RE) within a thermoneutral, laboratory-controlled setting. Seven highly trained multisport athletes completed three trials: Trial-1 consisted of a speed-only VO2max treadmill protocol (SOVO2max) to determine VO2max, VT, and RE during a single-bout run; Trial-2 consisted of a 10 km run at 98% of VT followed by an incremental VO2max test on the cycle ergometer; Trial-3 consisted of a 10 km run and 30 km cycling bout at 98% of VT followed by a speed-only treadmill test to determine the compounding effects of the initial legs of a duathlon on VO2max, VT, and RE. A repeated measures ANOVA was performed to determine differences between variables across trials. No difference in VO2max, VT (%VO2max), maximal HR, or maximal RPE was observed across trials. Oxygen consumption at VT was significantly lower during Trial-3 compared to Trial-1 (p = 0.01). This decrease was coupled with a significant reduction in running speed at VT (p = 0.015). A significant interaction between trial and running speed indicates that RE was significantly altered during Trial-3 compared to Trial-1 (p < 0.001). The first two legs of a laboratory-based duathlon simulation negatively impact VT and RE. Our findings may provide a useful method to evaluate multisport athletes, since a single-bout incremental treadmill test fails to reveal important alterations in physiological thresholds. Key points: (1) relative oxygen uptake at VT (ml·kg-1·min-1) decreased during the final leg of the duathlon simulation compared with a single-bout maximal run; (2) running speed at VT decreased during the final leg, adding more than 2 minutes to the time needed to complete a 5 km run; (3) the highly trained athletes studied were unable to complete the final 5 km run at the same intensity as the initial 10 km run (in a laboratory setting); (4) a better understanding and determination of training loads during multisport training may help to better periodize training programs; additional research is required. PMID:27274661
Pena, Guilherme; Altree, Meryl; Field, John; Sainsbury, David; Babidge, Wendy; Hewett, Peter; Maddern, Guy
2015-07-01
The best surgeons demonstrate skills beyond those required for the performance of technically competent surgery. These skills are described under the term nontechnical skills. Failure in these domains has been associated with adverse events inside the operating room. These nontechnical skills are not commonly learned in a structured manner during surgical training. The main purpose of this study was to explore the effects of participation in simulation-based training, either as a sole strategy or as part of a combined approach, on the nontechnical skills performance of surgeons and surgical trainees in a simulation environment. The study was a single-blinded, prospective comparative trial. Forty participants were enrolled, each taking part in 2 simulation sessions that challenged nontechnical skills and comprised 3 surgical scenarios. Seventeen participants attended a 1-day nontechnical skills workshop between simulation sessions. Scenarios were video-recorded for assessment and debriefing purposes. Assessment was made by 2 observers using the Non-Technical Skills for Surgeons (NOTSS) scoring system. There was a significant improvement in the nontechnical skills performance of both groups from the first to the second simulation session for 2 of the 3 scenarios. No difference in performance between the simulation and the simulation-plus-workshop groups was noted. This study provides evidence that formal training in nontechnical skills is feasible and can positively affect participants' nontechnical performance in a simulated environment. The addition of a 1-day didactic workshop does not seem to provide additional benefit over simulation-based training as a sole strategy for nontechnical skills training. Copyright © 2015 Elsevier Inc. All rights reserved.
Single Event Rates for Devices Sensitive to Particle Energy
NASA Technical Reports Server (NTRS)
Edmonds, L. D.; Scheick, L. Z.; Banker, M. W.
2012-01-01
Single event rates (SER) can include contributions from low-energy particles such that the linear energy transfer (LET) is not constant. Previous work found that the environmental description that is most relevant to the low-energy contribution to the rate is a "stopping rate per unit volume" even when the physical mechanisms for a single-event effect do not require an ion to stop in some device region. Stopping rate tables are presented for four heavy-ion environments that are commonly used to assess device suitability for space applications. A conservative rate estimate utilizing limited test data is derived, and the example of SEGR rate in a power MOSFET is presented.
SEE Sensitivity Analysis of 180 nm NAND CMOS Logic Cell for Space Applications
NASA Astrophysics Data System (ADS)
Sajid, Muhammad
2016-07-01
This paper focuses on Single Event Effects caused by energetic particle strikes on sensitive locations in a CMOS NAND logic cell designed in a 180 nm technology node to be operated in the space radiation environment. The generation of SE transients as well as upsets as a function of the LET of the incident particle has been determined for logic devices onboard LEO and GEO satellites. The minimum pulse magnitude and pulse width at the threshold LET were determined to estimate the vulnerability/susceptibility of the device to heavy ion strikes. The impact of temperature, strike location, and logic state of the NAND circuit on the total SEU/SET rate was estimated with physical mechanism simulations using the Visual TCAD, Genius, runSEU, and Crad computer codes.
Self-adjusting threshold mechanism for pixel detectors
NASA Astrophysics Data System (ADS)
Heim, Timon; Garcia-Sciveres, Maurice
2017-09-01
Readout chips of hybrid pixel detectors use a low power amplifier and threshold discrimination to process charge deposited in semiconductor sensors. Due to transistor mismatch, each pixel circuit needs to be calibrated individually to achieve response uniformity. Traditionally this is addressed by programmable threshold trimming in each pixel, but the trim must remain robust against radiation effects, temperature, and time. In this paper a self-adjusting threshold mechanism is presented, which corrects the threshold for both spatial inequality and time variation and maintains a constant response. It exploits the electrical noise as a relative measure of the threshold and automatically adjusts the threshold of each pixel to always achieve a uniform frequency of noise hits. A digital implementation of the method in the form of an up/down counter and a combinatorial logic filter is presented. The behavior of this circuit has been simulated to evaluate its performance and compare it to traditional calibration results. The simulation results show that this mechanism can perform equally well, but eliminates instability over time and is immune to single event upsets.
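A behavioral sketch of the feedback idea (not the chip's actual circuit; the Gaussian noise model, step sizes, and all constants are illustrative assumptions):

```python
import random

# Self-adjusting threshold sketch: an up/down counter per pixel nudges the
# local threshold so that the noise-hit frequency settles at a fixed target
# occupancy. At equilibrium, hits (weight 1 - target) and misses (weight
# target) balance exactly when the hit probability equals the target.

def noise_hit(threshold, noise_sigma, rng):
    return rng.gauss(0.0, noise_sigma) > threshold

def tune_pixel(noise_sigma, target_occupancy, steps=20000, seed=0):
    rng = random.Random(seed)
    counter = 0.0        # up/down counter state
    threshold = 0.0      # arbitrary start, in units of noise charge
    up, down = 1.0 - target_occupancy, target_occupancy
    for _ in range(steps):
        if noise_hit(threshold, noise_sigma, rng):
            counter += up      # too many noise hits -> push threshold up
        else:
            counter -= down    # too quiet -> push threshold down
        # "Combinatorial filter": apply one small DAC-like step whenever
        # the counter saturates, then reset it.
        if abs(counter) >= 1.0:
            threshold += 0.01 if counter > 0 else -0.01
            counter = 0.0
    return threshold
```

For Gaussian noise of width sigma and a 10% target occupancy, each pixel's threshold converges near 1.28 sigma regardless of its sigma, which is how the scheme absorbs pixel-to-pixel mismatch without explicit trim calibration.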
Development of a single-meal fish consumption advisory for methyl mercury
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ginsberg, G.L.; Toal, B.F.
2000-02-01
Methyl mercury (meHg) contamination of fish is the leading cause of fish consumption advisories in the US. These advisories have focused upon repeated or chronic exposure, whereas risks during pregnancy may also exist from a single-meal exposure if the fish tissue concentration is high enough. In this study, acute exposure to meHg from a single fish meal was analyzed by using the one-compartment meHg biokinetic model to predict maternal hair concentrations. These concentrations were evaluated against the mercury hair concentration corresponding to the US Environmental Protection Agency's reference dose (RfD), which is intended to protect against neurodevelopmental effects. The one-compartment model was validated against blood concentrations from three datasets in which human subjects ingested meHg in fish, either as a single meal or multiple meals. Model simulations of the single-meal scenario at different fish meHg concentrations found that concentrations of 2.0 ppm or higher can be associated with maternal hair concentrations elevated above the RfD level for days to weeks during gestation. A single-meal fish concentration cutoff of ≥ 2.0 ppm is an important consideration, especially because this single high exposure event might be in addition to a baseline meHg body burden from other types of fish consumption. This type of single-meal advisory requires that fish sampling programs provide data for individual rather than composited fish, and take into account seasonal differences that may exist in fish concentrations.
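The single-meal calculation can be sketched with a one-compartment model; the parameter values below are typical literature values assumed for illustration (they are not necessarily those used in this study), and the sketch ignores the blood-to-hair incorporation lag:

```python
import math

# One-compartment sketch of the single-meal methyl mercury scenario.
# Assumed illustrative parameters: gut absorption, fraction of the body
# burden in blood, blood volume, first-order elimination rate, and the
# hair:blood partition ratio.
ABSORPTION = 0.95        # fraction of ingested meHg absorbed
FRACTION_BLOOD = 0.05    # fraction of absorbed dose residing in blood
BLOOD_VOLUME_L = 5.0
K_ELIM = 0.014           # 1/day (~50-day elimination half-life)
HAIR_TO_BLOOD = 250.0    # (ug/g hair) per (mg/L blood)

def hair_hg_after_meal(fish_ppm, meal_kg, days_after):
    """Predicted maternal hair Hg (ug/g) from a single fish meal."""
    dose_ug = fish_ppm * meal_kg * 1000.0           # ppm = ug/g = mg/kg
    blood_mg_per_l = (dose_ug * ABSORPTION * FRACTION_BLOOD
                      / BLOOD_VOLUME_L) / 1000.0
    blood_mg_per_l *= math.exp(-K_ELIM * days_after)
    return HAIR_TO_BLOOD * blood_mg_per_l
```

With these assumed parameters, a 0.2 kg meal of 2.0 ppm fish yields a hair contribution near 1 ug/g immediately after the meal, which is consistent in spirit with the abstract's finding that fish at ≥ 2.0 ppm can push maternal hair above an RfD-level benchmark, before even counting any baseline body burden.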
NASA Astrophysics Data System (ADS)
Healey, S. P.; Zhao, F. R.; McCarter, J. B.; Frescino, T.; Goeking, S.
2017-12-01
International reporting of American forest carbon trends depends upon the Forest Service's nationally consistent network of inventory plots. Plots are measured on a rolling basis over a 5- to 10-year cycle, so estimates related to any variable, including carbon storage, reflect conditions over a 5- to 10-year window. This makes it difficult to identify the carbon impact of discrete events (e.g., a bad fire year; extraction rates related to home-building trends), particularly if the events are recent. We report an approach to make inventory estimates more sensitive to discrete and recent events. We use a growth model (the Forest Vegetation Simulator - FVS) that is maintained by the Forest Service to annually update the tree list for every plot, allowing all plots to contribute to a series of single-year estimates. Satellite imagery from the Landsat platform guides the FVS simulations by providing information about which plots have been disturbed, which are recovering from disturbance, and which are undergoing undisturbed growth. The FVS model is only used to "update" plot tree lists until the next field measurement is made (maximum of 9 years). As a result, predicted changes are usually small and error rates are low. We present a pilot study of this system in Idaho, which has experienced several major fire events in the last decade. Empirical estimates of uncertainty, accounting for both plot sampling error and FVS model error, suggest that this approach greatly increases temporal specificity and sensitivity to discrete events without sacrificing much estimate precision at the level of a US state. This approach has the potential to take better advantage of the Forest Service's rolling plot measurement schedule to report carbon storage in the US, and it offers the basis of a system that might allow near-term, forward-looking analysis of the effects of hypothetical forest disturbance patterns.
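The updating idea can be sketched as follows (the growth multipliers and disturbance flag names are invented placeholders, not FVS output):

```python
# Sketch of the plot-updating scheme: between field measurements, each
# plot's carbon is carried forward one year at a time, with the growth
# path chosen by a remotely sensed (e.g., Landsat-derived) disturbance
# flag; all plots then contribute to a single-year estimate.

RATES = {"undisturbed": 1.02, "recovering": 1.05, "disturbed": 0.55}

def update_plot(carbon, yearly_flags, max_years=9):
    """Carry a plot's carbon forward annually until its next measurement."""
    history = [carbon]
    for flag in yearly_flags[:max_years]:   # cap mirrors the 9-year limit
        carbon *= RATES[flag]
        history.append(carbon)
    return history

def annual_estimate(plot_histories, year_index):
    """Single-year estimate: average all plots at the same calendar year."""
    vals = [h[year_index] for h in plot_histories if year_index < len(h)]
    return sum(vals) / len(vals)
```

Because each plot is only projected a few years, the modeled increments stay small relative to the measured baseline, which is why model error inflates the estimate's uncertainty only modestly.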
Modeling the Impact of Stream Discharge Events on Riparian Solute Dynamics.
Mahmood, Muhammad Nasir; Schmidt, Christian; Fleckenstein, Jan H; Trauth, Nico
2018-03-22
The biogeochemical composition of stream water and the surrounding riparian water is mainly defined by the exchange of water and solutes between the stream and the riparian zone. Short-term fluctuations in near-stream hydraulic head gradients (e.g., during stream flow events) can significantly influence the extent and rate of exchange processes. In this study, we simulate exchanges between streams and their riparian zone driven by stream stage fluctuations during single stream discharge events of varying peak height and duration. Simulated results show that strong stream flow events can trigger solute mobilization in riparian soils and subsequent export to the stream. The timing and amount of solute export are linked to the shape of the discharge event. Higher peaks and increased durations significantly enhance solute export; however, peak height is found to be the dominant control on overall mass export. Mobilized solutes are transported to the stream in two stages: (1) by return flow of stream water that was stored in the riparian zone during the event and (2) by vertical movement to the groundwater under gravity drainage from the unsaturated parts of the riparian zone, which lasts for a significantly longer time (> 400 days), resulting in long tailing of bank outflows and solute mass outfluxes. We conclude that strong stream discharge events can mobilize and transport solutes from near-stream riparian soils into the stream. The impact of short-term stream discharge variations on solute exchange may last long after the flow event. © 2018, National Ground Water Association.
Assessing forest windthrow damage using single-date, post-event airborne laser scanning data
Gherardo Chirici; Francesca Bottalico; Francesca Giannetti; Barbara Del Perugia; Davide Travaglini; Susanna Nocentini; Erico Kutchartt; Enrico Marchi; Cristiano Foderi; Marco Fioravanti; Lorenzo Fattorini; Lorenzo Bottai; Ronald McRoberts; Erik Næsset; Piermaria Corona; Bernardo Gozzini
2017-01-01
One of many possible climate change effects in temperate areas is the increase of frequency and severity of windstorms; thus, fast and cost efficient new methods are needed to evaluate wind-induced damages in forests. We present a method for assessing windstorm damages in forest landscapes based on a two-stage sampling strategy using single-date, post-event airborne...
Wilson, Leigh Ann; Morgan, Geoffrey Gerard; Hanigan, Ivan Charles; Johnston, Fay H; Abu-Rayya, Hisham; Broome, Richard; Gaskin, Clive; Jalaludin, Bin
2013-11-15
This study examined the association between unusually high temperature and daily mortality (1997-2007) and hospital admissions (1997-2010) in the Sydney Greater Metropolitan Region (GMR) to assist in the development of targeted health programs designed to minimise the public health impact of extreme heat. Sydney GMR was categorized into five climate zones. Heat-events were defined as severe or extreme. Using a time-stratified case-crossover design with a conditional logistic regression model we adjusted for influenza epidemics, public holidays, and climate zone. Odds ratios (OR) and 95% confidence intervals were estimated for associations between daily mortality and hospital admissions with heat-event days compared to non-heat event days for single and three day heat-events. All-cause mortality overall had similar magnitude associations with single day and three day extreme and severe events as did all cardiovascular mortality. Respiratory mortality was associated with single day and three day severe events (95th percentile, lag0: OR = 1.14; 95%CI: 1.04 to 1.24). Diabetes mortality had similar magnitude associations with single day and three day severe events (95th percentile, lag0: OR = 1.22; 95%CI: 1.03 to 1.46) but was not associated with extreme events. Hospital admissions for heat related injuries, dehydration, and other fluid disorders were associated with single day and three day extreme and severe events. Contrary to our findings for mortality, we found inconsistent and sometimes inverse associations for extreme and severe events with cardiovascular disease and respiratory disease hospital admissions. Controlling for air pollutants did not influence the mortality associations but reduced the magnitude of the associations with hospital admissions particularly for ozone and respiratory disease. Single and three day events of unusually high temperatures in Sydney are associated with similar magnitude increases in mortality and hospital admissions. 
The trend towards an inverse association between cardiovascular admissions and heat-events and the strong positive association between cardiovascular mortality and heat-events suggests these events may lead to a rapid deterioration in persons with existing cardiovascular disease, resulting in death. To reduce the adverse effects of high temperatures over multiple days, and of less extreme but more frequent temperatures over single days, targeted public health messages are critical.
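For the 1:1-matched special case, the conditional-logistic odds ratio reduces to a ratio of discordant pairs, which can be computed directly (the study used time-stratified referents, typically more than one control day per case day, so this is a simplification; the data below are invented):

```python
# Matched case-crossover sketch: each case day (e.g., a death) is paired
# with one control day from the same stratum (month x weekday). With 1:1
# matching, the conditional-logistic odds ratio estimate for heat-event
# exposure reduces to the ratio of the two kinds of discordant pairs.

def matched_pair_or(pairs):
    """pairs: iterable of (case_day_exposed, control_day_exposed) booleans."""
    case_only = sum(1 for case, ctrl in pairs if case and not ctrl)
    ctrl_only = sum(1 for case, ctrl in pairs if ctrl and not case)
    return case_only / ctrl_only   # concordant pairs drop out of the estimate
```

An odds ratio above 1 (e.g., OR = 1.14 for respiratory mortality above) means heat-event exposure is more common on case days than on their matched control days.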
A Simulation of Alternatives for Wholesale Inventory Replenishment
2016-03-01
…algorithmic details. The last method is a mixed-integer, linear optimization model. Comparative Inventory Simulation, a discrete event simulation model, is designed to find the fill rates achieved for each National Item… Keywords: simulation; event graphs; reorder point; fill rate; backorder; discrete event simulation; wholesale inventory optimization model.
NASA Astrophysics Data System (ADS)
Li, J.-L. F.; Suhas, E.; Richardson, Mark; Lee, Wei-Liang; Wang, Yi-Hui; Yu, Jia-Yuh; Lee, Tong; Fetzer, Eric; Stephens, Graeme; Shen, Min-Hua
2018-02-01
Most of the global climate models (GCMs) in the Coupled Model Intercomparison Project, phase 5 do not include precipitating ice (aka falling snow) in their radiation calculations. We examine the importance of the radiative effects of precipitating ice on simulated surface wind stress and sea surface temperatures (SSTs) in terms of seasonal variation and in the evolution of central Pacific El Niño (CP-El Niño) events. Using controlled simulations with the CESM1 model, we show that the exclusion of precipitating ice radiative effects generates a persistent excessive upper-level radiative cooling and an increasingly unstable atmosphere over convective regions such as the western Pacific and tropical convergence zones. The invigorated convection leads to persistent anomalous low-level outflows which weaken the easterly trade winds, reducing upper-ocean mixing and leading to a positive SST bias in the model mean state. In CP-El Niño events, this means that outflow from the modeled convection in the central Pacific reduces winds to the east, allowing unrealistic eastward propagation of warm SST anomalies following the peak in CP-El Niño activity. Including the radiative effects of precipitating ice reduces these model biases and improves the simulated life cycle of the CP-El Niño. Improved simulations of present-day tropical seasonal variations and CP-El Niño events would increase the confidence in simulating their future behavior.
Single top quark photoproduction at the LHC
NASA Astrophysics Data System (ADS)
de Favereau de Jeneret, J.; Ovyn, S.
2008-08-01
High-energy photon-proton interactions at the LHC offer interesting possibilities for the study of the electroweak sector up to the TeV scale and for searches for processes beyond the Standard Model. An analysis of W-associated single top photoproduction has been performed using the adapted MadGraph/MadEvent [F. Maltoni and T. Stelzer, JHEP 0302 (2003) 027; T. Stelzer and W. F. Long, Comput. Phys. Commun. 81 (1994) 357-371] and CalcHEP [A. Pukhov, Nucl. Instrum. Meth. A 502 (2003) 596-598] programs interfaced to the Pythia [T. Sjöstrand et al., Comput. Phys. Commun. 135 (2001) 238] generator and a fast detector simulation program. Event selection and suppression of the main backgrounds have been studied. A sensitivity to |V_tb| comparable to that obtained using standard single top production in pp collisions is achieved already for 10 fb⁻¹ of integrated luminosity. Photoproduction at the LHC also provides an attractive framework for observation of the anomalous production of single top due to Flavour-Changing Neutral Currents. The sensitivity to the anomalous coupling parameters k_tuγ and k_tcγ is presented and indicates that stronger limits can be placed on anomalous couplings after 1 fb⁻¹.
Synchronization Of Parallel Discrete Event Simulations
NASA Technical Reports Server (NTRS)
Steinman, Jeffrey S.
1992-01-01
Adaptive, parallel, discrete-event-simulation-synchronization algorithm, Breathing Time Buckets, developed in Synchronous Parallel Environment for Emulation and Discrete Event Simulation (SPEEDES) operating system. Algorithm allows parallel simulations to process events optimistically in fluctuating time cycles that naturally adapt while simulation in progress. Combines best of optimistic and conservative synchronization strategies while avoiding major disadvantages. Well suited for modeling communication networks, for large-scale war games, for simulated flights of aircraft, for simulations of computer equipment, for mathematical modeling, for interactive engineering simulations, and for depictions of flows of information.
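The event-queue core that all such simulators build on can be sketched sequentially; this is a generic discrete-event loop for illustration, not the SPEEDES Breathing Time Buckets algorithm itself (which additionally processes batches of events optimistically across processors within adaptive time windows):

```python
import heapq

def simulate(initial_events, handlers, t_end):
    """Generic sequential discrete-event loop: pop the earliest event,
    run its handler, and schedule whatever events the handler returns."""
    queue = list(initial_events)      # entries: (time, seq, kind, data)
    heapq.heapify(queue)
    seq = len(queue)                  # tie-breaker for simultaneous events
    log = []
    while queue and queue[0][0] <= t_end:
        t, _, kind, data = heapq.heappop(queue)
        log.append((t, kind))
        # a handler returns a list of (delay, kind, data) follow-up events
        for dt, new_kind, new_data in handlers[kind](t, data):
            heapq.heappush(queue, (t + dt, seq, new_kind, new_data))
            seq += 1
    return log

# A self-rescheduling "tick" event, fired every 2 time units
handlers = {"tick": lambda t, d: [(2.0, "tick", None)]}
log = simulate([(0.0, 0, "tick", None)], handlers, t_end=5.0)
# ticks at t = 0, 2, 4
```

In a parallel optimistic scheme, each processor would run a loop like this over its local events and then synchronize, rolling back any events invalidated by messages arriving from the past.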
Webb, Nicholas J A; Wells, Thomas; Tsai, Max; Zhao, Zhen; Juhasz, Attila; Dudkowski, Caroline
2016-04-01
This open-label, multicenter, single-dose study characterized the pharmacokinetics and short-term safety of azilsartan medoxomil (AZL-M) in hypertensive pediatric subjects (12-16 years [cohort 1a; n = 9]; 6-11 years [cohort 2; n = 8]; 4-5 years [cohort 3; n = 3]). Model-based simulations were performed to guide dosing, especially in 1-5-year-olds, who were difficult to enroll. AZL-M was dosed according to body weight (20-60-mg tablet, cohorts 1a and 2; 0.66 mg/kg granule suspension, cohort 3). Gender-matched healthy adults (cohort 1b; n = 9) received AZL-M 80 mg. Exposure to AZL (the active moiety of AZL-M), measured by dose- and body-weight-normalized Cmax and AUC0-∞, was ∼15-30% lower in pediatric subjects than in adults. In simulations, exposure with 0.66 mg/kg AZL-M in pediatric subjects weighing 8-25 kg approximated that with AZL-M 40 mg (the typical starting dose) in adults. The simulations suggest that 25-50-kg subjects require half the adult dose (10-40 mg), whereas 50-100-kg subjects can use the same dosing as adults. Adverse events were mild in intensity, apart from one moderate event (migraine). This dosing strategy should be safe in pediatric patients, as AZL exposure would not exceed that seen in adults with the highest approved AZL-M dose (80 mg).
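The weight-band strategy suggested by the simulations can be encoded as a small lookup. This is an illustrative sketch only (the function name and exact band boundaries are assumptions drawn from the abstract, not dosing guidance):

```python
def azl_m_starting_dose(weight_kg):
    """Hypothetical encoding of the weight-band strategy described above:
    8-25 kg   -> 0.66 mg/kg granule suspension,
    25-50 kg  -> half the adult starting dose,
    50-100 kg -> typical adult starting dose (40 mg).
    Illustrative only; not prescribing guidance."""
    if 8 <= weight_kg < 25:
        return round(0.66 * weight_kg, 1)   # mg, granule suspension
    if 25 <= weight_kg < 50:
        return 20.0                          # mg, half the 40 mg adult start
    if 50 <= weight_kg <= 100:
        return 40.0                          # mg, typical adult starting dose
    raise ValueError("outside the studied weight range")
```

Under this scheme a 10 kg child would receive 6.6 mg as suspension, while subjects above 50 kg fall back to adult dosing, consistent with the exposure-matching logic described in the abstract.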
Trial latencies estimation of event-related potentials in EEG by means of genetic algorithms
NASA Astrophysics Data System (ADS)
Da Pelo, P.; De Tommaso, M.; Monaco, A.; Stramaglia, S.; Bellotti, R.; Tangaro, S.
2018-04-01
Objective. Event-related potentials (ERPs) are usually obtained by averaging, thus neglecting the trial-to-trial latency variability in cognitive electroencephalography (EEG) responses. As a consequence, the shape and the peak amplitude of the averaged ERP are smeared and reduced, respectively, when the single-trial latencies show a relevant variability. To date, the majority of methodologies for single-trial latency inference are iterative schemes providing suboptimal solutions, the most commonly used being Woody's algorithm. Approach. In this study, a global approach is developed by introducing a fitness function whose global maximum corresponds to the set of latencies that renders the trial signals as aligned as possible. A suitable genetic algorithm has been implemented to solve the optimization problem, characterized by new genetic operators tailored to the present problem. Main results. The results, on simulated trials, showed that the proposed algorithm performs better than Woody's algorithm in all conditions, at the cost of an increased computational complexity (justified by the improved quality of the solution). Application of the proposed approach to real data trials resulted in an increased correlation between latencies and reaction times with respect to the output of the RIDE method. Significance. The above results on simulated and real data indicate that the proposed method, providing a better estimate of single-trial latencies, will open the way to more accurate studies of neural responses, as well as to relating the variability of latencies to the proper cognitive and behavioural correlates.
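The idea of a fitness function maximized when trials are aligned, searched with a genetic algorithm, can be sketched as follows. This is a toy reconstruction under stated assumptions (integer latencies, alignment energy as fitness, truncation selection with elitism), not the paper's operators:

```python
import random

def fitness(trials, lats):
    """Energy of the average of latency-corrected trials: largest when
    the integer shifts bring the single-trial waveforms into alignment."""
    n, L = len(trials), len(trials[0])
    base, span = min(lats), max(lats) - min(lats)
    avg = [sum(trials[i][t + lats[i] - base] for i in range(n)) / n
           for t in range(L - span)]
    return sum(v * v for v in avg) / max(1, len(avg))

def ga_latencies(trials, max_lag=5, pop=40, gens=60, seed=1):
    """Toy genetic algorithm over integer latency vectors: elitist
    truncation selection, uniform crossover, single-gene mutation."""
    rng = random.Random(seed)
    n = len(trials)
    # seed the all-zero chromosome so the baseline is never lost
    P = [[0] * n] + [[rng.randint(0, max_lag) for _ in range(n)]
                     for _ in range(pop - 1)]
    for _ in range(gens):
        P.sort(key=lambda c: -fitness(trials, c))
        survivors = [c[:] for c in P[: pop // 4]]     # elitism
        while len(survivors) < pop:
            a, b = rng.sample(survivors[: pop // 4], 2)
            child = [a[i] if rng.random() < 0.5 else b[i] for i in range(n)]
            child[rng.randrange(n)] = rng.randint(0, max_lag)  # mutation
            survivors.append(child)
        P = survivors
    return max(P, key=lambda c: fitness(trials, c))
```

With two noise-free trials containing the same pulse offset by two samples, the correctly aligned latency vector scores strictly higher than the zero vector, which is the property the global search exploits.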
DOE Office of Scientific and Technical Information (OSTI.GOV)
Auld, Joshua; Hope, Michael; Ley, Hubert
This paper discusses the development of an agent-based modelling software development kit, and the implementation and validation of a model using it that integrates dynamic simulation of travel demand, network supply, and network operations. A description is given of the core utilities in the kit: a parallel discrete event engine, an interprocess exchange engine, and a memory allocator, as well as a number of ancillary utilities: a visualization library, a database IO library, and a scenario manager. The overall framework emphasizes the design goals of generality, code agility, and high performance. This framework allows the modeling of several aspects of the transportation system that are typically done with separate stand-alone software applications, in a high-performance and extensible manner. The issue of integrating such models as dynamic traffic assignment and disaggregate demand models has been a long-standing one for transportation modelers. The integrated approach shows a possible way to resolve this difficulty. The simulation model built from the POLARIS framework is a single, shared-memory process handling all aspects of the integrated urban simulation. The resulting gains in computational efficiency and performance allow planning models to be extended to include previously separate aspects of the urban system, enhancing the utility of such models from the planning perspective. Initial tests with case studies involving traffic management center impacts on various network events, such as accidents, congestion, and weather events, show the potential of the system.
Performance of residents and anesthesiologists in a simulation-based skill assessment.
Murray, David J; Boulet, John R; Avidan, Michael; Kras, Joseph F; Henrichs, Bernadette; Woodhouse, Julie; Evers, Alex S
2007-11-01
Anesthesiologists and anesthesia residents are expected to acquire and maintain skills to manage a wide range of acute intraoperative anesthetic events. The purpose of this study was to determine whether an inventory of simulated intraoperative scenarios provided a reliable and valid measure of anesthesia residents' and anesthesiologists' skill. Twelve simulated acute intraoperative scenarios were designed to assess the performance of 64 residents and 35 anesthesiologists. The participants were divided into four groups based on their training and experience. There were 31 new CA-1, 12 advanced CA-1, and 22 CA-2/CA-3 residents as well as a group of 35 experienced anesthesiologists who participated in the assessment. Each participant managed a set of simulated events. The advanced CA-1 residents, CA-2/CA-3 residents, and 35 anesthesiologists managed 8 of 12 intraoperative simulation exercises. The 31 CA-1 residents each managed 3 intraoperative scenarios. The new CA-1 residents received lower scores on the simulated intraoperative events than the other groups of participants. The advanced CA-1 residents, CA-2/CA-3 residents, and anesthesiologists performed similarly on the overall assessment. There was a wide range of scores obtained by individuals in each group. A number of the exercises were difficult for the majority of participants to recognize and treat, but most events effectively discriminated among participants who achieved higher and lower overall scores. This simulation-based assessment provided a valid method to distinguish the skills of more experienced anesthesia residents and anesthesiologists from residents in early training. The overall score provided a reliable measure of a participant's ability to recognize and manage simulated acute intraoperative events. Additional studies are needed to determine whether these simulation-based assessments are valid measures of clinical performance.
Extreme weather: Subtropical floods and tropical cyclones
NASA Astrophysics Data System (ADS)
Shaevitz, Daniel A.
Extreme weather events have a large effect on society. As such, it is important to understand these events and to project how they may change in a future, warmer climate. The aim of this thesis is to develop a deeper understanding of two types of extreme weather events: subtropical floods and tropical cyclones (TCs). In the subtropics, the latitude is high enough that quasi-geostrophic dynamics are at least qualitatively relevant, while low enough that moisture may be abundant and convection strong. Extratropical extreme precipitation events are usually associated with large-scale flow disturbances, strong ascent, and large latent heat release. In the first part of this thesis, I examine the possible triggering of convection by the large-scale dynamics and investigate the coupling between the two. Specifically, two examples of extreme precipitation events in the subtropics are analyzed: the 2010 and 2014 floods of India and Pakistan and the 2015 flood of Texas and Oklahoma. I invert the quasi-geostrophic omega equation to decompose the large-scale vertical motion profile into components due to synoptic forcing and diabatic heating. Additionally, I present model results from within the Column Quasi-Geostrophic framework. A single-column model and a cloud-resolving model are forced with the large-scale forcings (other than large-scale vertical motion) computed from the quasi-geostrophic omega equation with input data from a reanalysis data set, and the large-scale vertical motion is diagnosed interactively with the simulated convection. It is found that convection was triggered primarily by mechanically forced orographic ascent over the Himalayas during the India/Pakistan flood and by upper-level potential vorticity disturbances during the Texas/Oklahoma flood. 
Furthermore, a climate attribution analysis was conducted for the Texas/Oklahoma flood, and it was found that anthropogenic climate change was responsible for a small amount of rainfall during the event, but that the intensity of such an event may be greatly increased if it occurs in a future climate. In the second part of this thesis, I examine the ability of high-resolution global atmospheric models to simulate TCs. Specifically, I present an intercomparison of several models' ability to simulate the global characteristics of TCs in the current climate. This is a necessary first step before using these models to project future changes in TCs. Overall, the models were able to reproduce the geographic distribution of TCs reasonably well, with some of the models performing remarkably well. The intensity of TCs varied widely between the models, with some of this difference being due to model resolution.
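The quasi-geostrophic omega equation inverted in the first part of the thesis can be written, in its traditional form with a diabatic heating term (following standard dynamic meteorology texts; here $\omega$ is the vertical pressure velocity, $\mathbf{V}_g$ the geostrophic wind, $\zeta_g$ the geostrophic relative vorticity, $\sigma$ the static stability parameter, $\Phi$ the geopotential, and $J$ the diabatic heating rate), as:

```latex
\left(\nabla^2 + \frac{f_0^2}{\sigma}\frac{\partial^2}{\partial p^2}\right)\omega
  = \frac{f_0}{\sigma}\frac{\partial}{\partial p}
      \left[\mathbf{V}_g\cdot\nabla\left(\zeta_g + f\right)\right]
  + \frac{1}{\sigma}\nabla^2
      \left[\mathbf{V}_g\cdot\nabla\left(-\frac{\partial \Phi}{\partial p}\right)\right]
  - \frac{\kappa}{\sigma p}\nabla^2 J
```

Inverting the elliptic operator on the left for each right-hand-side term separately yields the decomposition of the vertical motion into synoptic-forcing and diabatic-heating components described above.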
NASA Astrophysics Data System (ADS)
Scolini, C.; Verbeke, C.; Gopalswamy, N.; Wijsen, N.; Poedts, S.; Mierla, M.; Rodriguez, L.; Pomoell, J.; Cramer, W. D.; Raeder, J.
2017-12-01
Coronal Mass Ejections (CMEs) and their interplanetary counterparts are considered to be the major space weather drivers. Accurate modelling of their onset and propagation up to 1 AU represents a key issue for more reliable space weather forecasts, and predictions of their actual geo-effectiveness can only be performed by coupling global heliospheric models to 3D models describing the terrestrial environment, e.g. magnetospheric and ionospheric codes. In this work we perform a comprehensive Sun-to-Earth analysis of the July 12, 2012 CME with the aim of testing the space weather predictive capabilities of the newly developed EUHFORIA heliospheric model integrated with the Gibson-Low (GL) flux rope model. In order to achieve this goal, we make use of a model-chain approach by using EUHFORIA outputs at Earth as input parameters for the OpenGGCM magnetospheric model. We first reconstruct the CME kinematic parameters by means of single- and multi-spacecraft reconstruction methods based on coronagraphic and heliospheric CME observations. The magnetic field-related parameters of the flux rope are estimated based on imaging observations of the photospheric and low coronal source regions of the eruption. We then simulate the event with EUHFORIA, testing the effect of the different CME kinematic input parameters on simulation results at L1. We compare simulation outputs with in-situ measurements of the interplanetary CME and use them as input for the OpenGGCM model, so as to investigate the magnetospheric response to solar perturbations. From the simulation outputs we extract global geomagnetic activity indices and compare them with actual data records and with results obtained using empirical relations. Finally, we discuss the forecasting capabilities of this kind of approach and its future improvements.
Regente, J; de Zeeuw, J; Bes, F; Nowozin, C; Appelhoff, S; Wahnschaffe, A; Münch, M; Kunz, D
2017-05-01
In single night shifts, extending habitual wake episodes leads to sleep-deprivation-induced decrements in performance during the shift and to re-adaptation effects the next day. We investigated whether short-wavelength-depleted (=filtered) bright light (FBL) during a simulated night shift would counteract such effects. Twenty-four participants underwent a simulated night shift in dim light (DL) and in FBL. Reaction times, subjective sleepiness, and salivary melatonin concentrations were assessed during both nights. Daytime sleep was recorded after both simulated night shifts. During FBL, we found no melatonin suppression compared to DL, but slightly faster reaction times in the second half of the night. Daytime sleep was not statistically different between the two lighting conditions (n = 24), and there was no significant phase shift after FBL (n = 11). To conclude, our results showed positive effects of FBL during simulated single night shifts, which need to be further tested with larger groups, in more applied studies, and in comparison with standard lighting. Copyright © 2016 Elsevier Ltd. All rights reserved.
Effective precipitation duration for runoff peaks based on catchment modelling
NASA Astrophysics Data System (ADS)
Sikorska, A. E.; Viviroli, D.; Seibert, J.
2018-01-01
Although precipitation intensities may vary greatly during a single flood event, detailed information about these intensities may not be required to accurately simulate floods with a hydrological model, which reacts rather to cumulative precipitation sums. This raises two questions: to what extent is it important to preserve sub-daily precipitation intensities, and how long does it effectively rain from the hydrological point of view? Both questions might seem straightforward to answer with a direct analysis of past precipitation events, but this requires some arbitrary choices regarding the length of a precipitation event. To avoid these arbitrary decisions, here we present an alternative approach to characterizing the effective length of a precipitation event, based on runoff simulations of large floods. More precisely, we quantify the fraction of a day over which the daily precipitation has to be distributed to faithfully reproduce the large annual and seasonal floods which were generated by the hourly precipitation rate time series. New precipitation time series were generated by first aggregating the hourly observed data into daily totals and then evenly distributing them over sub-daily periods (n hours). These simulated time series were used as input to a hydrological bucket-type model, and the resulting runoff flood peaks were compared to those obtained when using the original precipitation time series. We then define the effective daily precipitation duration as the number of hours n for which the largest peaks are simulated best. For nine mesoscale Swiss catchments this effective daily precipitation duration was about half a day, which indicates that detailed information on precipitation intensities is not necessarily required to accurately estimate peaks of the largest annual and seasonal floods. 
These findings support the use of simple disaggregation approaches to make use of past daily precipitation observations or daily precipitation simulations (e.g. from climate models) for hydrological modeling at an hourly time step.
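The redistribution step described above (aggregate hourly data to daily totals, then spread each total evenly over the first n hours of the day) can be sketched as follows; a minimal illustration of the input-generation step only, not the authors' code or their bucket-type runoff model:

```python
def disaggregate_daily(hourly, n_hours):
    """Aggregate an hourly precipitation series (mm) into daily totals,
    then spread each daily total evenly over the first n_hours of that
    day and set the remaining hours to zero."""
    days = [hourly[i:i + 24] for i in range(0, len(hourly), 24)]
    out = []
    for day in days:
        rate = sum(day) / n_hours            # uniform sub-daily intensity
        out.extend([rate] * n_hours + [0.0] * (24 - n_hours))
    return out
```

Running the hydrological model on `disaggregate_daily(hourly, n)` for a range of n and comparing the simulated flood peaks against those from the original hourly series identifies the effective duration (about n = 12 for the Swiss catchments studied).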
Shao, Ning; Jiang, Shi-Meng; Zhang, Miao; Wang, Jing; Guo, Shu-Juan; Li, Yang; Jiang, He-Wei; Liu, Cheng-Xi; Zhang, Da-Bing; Yang, Li-Tao; Tao, Sheng-Ce
2014-01-21
The monitoring of genetically modified organisms (GMOs) is a primary step of GMO regulation. However, there is presently a lack of effective and high-throughput methodologies for specifically and sensitively monitoring most of the commercialized GMOs. Herein, we developed a multiplex amplification on a chip with readout on an oligo microarray (MACRO) system specifically for convenient GMO monitoring. This system is composed of a microchip for multiplex amplification and an oligo microarray for the readout of multiple amplicons, containing a total of 91 targets (18 universal elements, 20 exogenous genes, 45 events, and 8 endogenous reference genes) that covers 97.1% of all GM events commercialized up to 2012. We demonstrate that the specificity of MACRO is ~100%, with a limit of detection (LOD) that is suitable for real-world applications. Moreover, the results obtained with MACRO for simulated complex samples and for blind samples were 100% consistent with expectations and with the results of independently performed real-time PCRs, respectively. Thus, we believe MACRO is the first system that can be applied to effectively monitor the majority of the commercialized GMOs in a single test.
NASA Astrophysics Data System (ADS)
Postance, Benjamin; Hillier, John; Dijkstra, Tom; Dixon, Neil
2016-04-01
The failure of engineered or natural slopes which support or are adjacent to transportation systems often inflicts costly direct physical damage and indirect system disruption. The consequences and severity of indirect impacts vary according to which links, nodes, or network facilities are physically disrupted. Moreover, it is often the case that multiple slope failure disruptions are triggered simultaneously following prolonged or intense precipitation events, due to a degree of local homogeneity of slope characteristics and materials. This study investigates the application of national commuter statistics and network agent simulation to evaluate the indirect impacts of landslide events disrupting the Scottish trunk road transportation network (UK). Previous studies often employ shortest pathway analysis, whereas agent simulation has received relatively little attention. British Geological Survey GeoSure landslide susceptibility data are used to select 35 susceptible trunk road segments by means of neighbouring total area at risk. For each of the 35 candidate segments, the network and zonal variation in travel time is calculated for a single day of disruption, and the economic impact is approximated using established governmental and industry transport planning and appraisal values. The results highlight that a number of trunk road segments incur indirect economic losses on the order of tens of thousands of pounds for each day of closure. Calculated losses at the A83 Rest and Be Thankful are 50% greater than previous estimates, at £75 thousand per day of closure. Also highlighted are events in which the economic impact is relatively minor yet concentrated on particular communities, which can become substantially isolated as a consequence of a single event. 
The findings of this study support wider investigations exploring cost considerations for decision makers and mitigation strategies, in addition to identifying network topological and demand indicators conducive to high indirect economic cost events.
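A shortest-path view of the indirect impact (the study itself uses agent simulation rather than shortest paths alone) can be sketched on a toy travel-time network; the node names and times below are hypothetical:

```python
import heapq

def shortest_time(graph, src, dst):
    """Dijkstra over a dict-of-dicts graph of edge travel times (minutes)."""
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            return d
        if d > dist.get(u, float("inf")):
            continue                     # stale queue entry
        for v, w in graph.get(u, {}).items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return float("inf")

def closure_delay(graph, closed_edge, od_pairs):
    """Extra minutes per origin-destination pair when one segment closes."""
    u, v = closed_edge
    degraded = {a: dict(nbrs) for a, nbrs in graph.items()}
    degraded[u].pop(v, None)
    degraded.get(v, {}).pop(u, None)     # remove both directions
    return {od: shortest_time(degraded, *od) - shortest_time(graph, *od)
            for od in od_pairs}
```

Multiplying each delay by the number of daily commuter trips and a value-of-time rate gives a first-order daily indirect cost, comparable in spirit to the per-day closure losses quoted above.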
NASA Astrophysics Data System (ADS)
Torfs, P.; Brauer, C.; Teuling, R.; Kloosterman, P.; Willems, G.; Verkooijen, B.; Uijlenhoet, R.
2012-12-01
On 26 August 2010 the 6.5 km² Hupsel Brook catchment in The Netherlands, which has been the experimental watershed employed by Wageningen University since the 1960s, was struck by an exceptionally heavy rainfall event (return period > 1000 years). We investigated the unprecedented flash flood triggered by this event, and this study improved our understanding of the dynamics of such lowland flash floods (Brauer et al., 2011). During this extreme event some thresholds became apparent that do not play a role during average conditions and are not incorporated in most rainfall-runoff models. This may lead to errors when these models are used to forecast runoff responses to rainfall events that are extreme today, but likely to become less extreme as the climate changes. The aim of this research project was to find out to what extent different types of rainfall-runoff models are able to simulate this extreme event, and, if not, which processes, thresholds, or parameters are lacking to describe the event accurately. Five of the seven models employed treat the catchment as a lumped system. This group includes the well-known HBV and Sacramento models. The Wageningen Model, which has been developed in our group, has a structure similar to HBV and the Sacramento Model. The SWAP (Soil, Water, Atmosphere, Plant) Model represents a physically based model of a single soil column, but has been used here as a representation of the whole catchment. The LGSI (Lowland Groundwater Surface water Interaction) Model uses probability distributions to account for spatial variability in groundwater depth and the resulting flow routes in the catchment. We did not only analyze how accurately each model simulated the discharge, but also whether groundwater and soil moisture dynamics and the resulting flow processes were captured adequately. The sixth model is a spatially distributed model called SIMGRO. 
It is based on a MODFLOW groundwater model, extended with an unsaturated zone based on the previously mentioned SWAP model and a surface water network. This model has a very detailed groundwater-surface water interface and should therefore be particularly suitable to study the effect of backwater feedbacks we observed during the flood. In addition, the effect of spatially varying soil characteristics on the runoff response has been studied. The final model is SOBEK, which was originally developed as a hydraulic model consisting of a surface water network with nodes and links. To some of the nodes, upstream areas with associated rainfall-runoff models have been assigned. This model is especially useful to study the effect of hydraulic structures, such as culverts, and stream bed vegetation on dampening the flood peak. Brauer, C. C., Teuling, A.J., Overeem, A., van der Velde, Y., Hazenberg, P., Warmerdam, P. M. M. and Uijlenhoet, R.: Anatomy of extraordinary rainfall and flash flood in a Dutch lowland catchment, Hydrol. Earth Syst. Sci., 15, 1991-2005, 2011.
ERIC Educational Resources Information Center
Lin, Li-Fen; Hsu, Ying-Shao; Yeh, Yi-Fen
2012-01-01
Several researchers have investigated the effects of computer simulations on students' learning. However, few have focused on how simulations with authentic contexts influence students' inquiry skills. Therefore, for the purposes of this study, we developed a computer simulation (FossilSim) embedded in an authentic inquiry lesson. FossilSim…
Simulating southwestern U.S. desert dust influences on supercell thunderstorms
NASA Astrophysics Data System (ADS)
Lerach, David G.; Cotton, William R.
2018-05-01
Three-dimensional numerical simulations were performed to evaluate potential southwestern U.S. dust indirect microphysical and direct radiative impacts on a real severe storms outbreak. Increased solar absorption within the dust plume led to modest increases in pre-storm atmospheric stability at low levels, resulting in weaker convective updrafts and less widespread precipitation. Dust microphysical impacts on convection were minor in comparison, due in part to lofted dust particles being relatively few in number compared to the background (non-dust) aerosol population. While dust preferentially serving as cloud condensation nuclei (CCN) versus giant CCN had opposing effects on warm rain production, both scenarios resulted in ample supercooled water and subsequent glaciation aloft, yielding larger graupel and hail. Associated latent heating from condensation and freezing contributed little to overall updraft invigoration. With reduced rain production overall, the simulations that included dust effects experienced slightly reduced grid-cumulative precipitation and notably warmer and spatially smaller cold pools. Dust serving as ice nucleating particles did not appear to play a significant role. The presence of dust ultimately reduced the number of supercells produced but allowed for supercell evolution characterized by consistently higher values of relative vertical vorticity within simulated mesocyclones. Dust radiative and microphysical effects were relatively small in magnitude when compared to those from altering the background convective available potential energy and vertical wind shear. It is difficult to generalize such findings from a single event, however, due to a number of case-specific environmental factors. These include the nature of the low-level moisture advection and the characteristics of the background aerosol distribution.
NASA Astrophysics Data System (ADS)
Sirunyan, A. M.; Tumasyan, A.; Adam, W.; Ambrogi, F.; Asilar, E.; Bergauer, T.; Brandstetter, J.; Brondolin, E.; Dragicevic, M.; Erö, J.; Escalante Del Valle, A.; Flechl, M.; Friedl, M.; Frühwirth, R.; Ghete, V. M.; Grossmann, J.; Hrubec, J.; Jeitler, M.; König, A.; Krammer, N.; Krätschmer, I.; Liko, D.; Madlener, T.; Mikulec, I.; Pree, E.; Rad, N.; Rohringer, H.; Schieck, J.; Schöfbeck, R.; Spanring, M.; Spitzbart, D.; Taurok, A.; Waltenberger, W.; Wittmann, J.; Wulz, C.-E.; Zarucki, M.; Chekhovsky, V.; Mossolov, V.; Suarez Gonzalez, J.; De Wolf, E. A.; Di Croce, D.; Janssen, X.; Lauwers, J.; Pieters, M.; Van De Klundert, M.; Van Haevermaet, H.; Van Mechelen, P.; Van Remortel, N.; Abu Zeid, S.; Blekman, F.; D'Hondt, J.; De Bruyn, I.; De Clercq, J.; Deroover, K.; Flouris, G.; Lontkovskyi, D.; Lowette, S.; Marchesini, I.; Moortgat, S.; Moreels, L.; Python, Q.; Skovpen, K.; Tavernier, S.; Van Doninck, W.; Van Mulders, P.; Van Parijs, I.; Beghin, D.; Bilin, B.; Brun, H.; Clerbaux, B.; De Lentdecker, G.; Delannoy, H.; Dorney, B.; Fasanella, G.; Favart, L.; Goldouzian, R.; Grebenyuk, A.; Kalsi, A. K.; Lenzi, T.; Luetic, J.; Maerschalk, T.; Seva, T.; Starling, E.; Vander Velde, C.; Vanlaer, P.; Vannerom, D.; Yonamine, R.; Zenoni, F.; Cornelis, T.; Dobur, D.; Fagot, A.; Gul, M.; Khvastunov, I.; Poyraz, D.; Roskas, C.; Trocino, D.; Tytgat, M.; Verbeke, W.; Vit, M.; Zaganidis, N.; Bakhshiansohi, H.; Bondu, O.; Brochet, S.; Bruno, G.; Caputo, C.; Caudron, A.; David, P.; De Visscher, S.; Delaere, C.; Delcourt, M.; Francois, B.; Giammanco, A.; Krintiras, G.; Lemaitre, V.; Magitteri, A.; Mertens, A.; Musich, M.; Piotrzkowski, K.; Quertenmont, L.; Saggio, A.; Vidal Marono, M.; Wertz, S.; Zobec, J.; Aldá Júnior, W. L.; Alves, F. L.; Alves, G. A.; Brito, L.; Correia Silva, G.; Hensel, C.; Moraes, A.; Pol, M. E.; Rebello Teles, P.; Belchior Batista Das Chagas, E.; Carvalho, W.; Chinellato, J.; Coelho, E.; Da Costa, E. M.; Da Silveira, G. 
G.; De Jesus Damiao, D.; Fonseca De Souza, S.; Huertas Guativa, L. M.; Malbouisson, H.; Melo De Almeida, M.; Mora Herrera, C.; Mundim, L.; Nogima, H.; Sanchez Rosas, L. J.; Santoro, A.; Sznajder, A.; Thiel, M.; Tonelli Manganote, E. J.; Torres Da Silva De Araujo, F.; Vilela Pereira, A.; Ahuja, S.; Bernardes, C. A.; Fernandez Perez Tomei, T. R.; Gregores, E. M.; Mercadante, P. G.; Novaes, S. F.; Padula, Sandra S.; Romero Abad, D.; Ruiz Vargas, J. C.; Aleksandrov, A.; Hadjiiska, R.; Iaydjiev, P.; Marinov, A.; Misheva, M.; Rodozov, M.; Shopova, M.; Sultanov, G.; Dimitrov, A.; Litov, L.; Pavlov, B.; Petkov, P.; Fang, W.; Gao, X.; Yuan, L.; Ahmad, M.; Bian, J. G.; Chen, G. M.; Chen, H. S.; Chen, M.; Chen, Y.; Jiang, C. H.; Leggat, D.; Liao, H.; Liu, Z.; Romeo, F.; Shaheen, S. M.; Spiezia, A.; Tao, J.; Wang, C.; Wang, Z.; Yazgan, E.; Zhang, H.; Zhao, J.; Ban, Y.; Chen, G.; Li, J.; Li, Q.; Liu, S.; Mao, Y.; Qian, S. J.; Wang, D.; Xu, Z.; Wang, Y.; Avila, C.; Cabrera, A.; Carrillo Montoya, C. A.; Chaparro Sierra, L. F.; Florez, C.; González Hernández, C. F.; Ruiz Alvarez, J. D.; Segura Delgado, M. A.; Courbon, B.; Godinovic, N.; Lelas, D.; Puljak, I.; Ribeiro Cipriano, P. M.; Sculac, T.; Antunovic, Z.; Kovac, M.; Brigljevic, V.; Ferencek, D.; Kadija, K.; Mesic, B.; Starodumov, A.; Susa, T.; Ather, M. W.; Attikis, A.; Mavromanolakis, G.; Mousa, J.; Nicolaou, C.; Ptochos, F.; Razis, P. A.; Rykaczewski, H.; Finger, M.; Finger, M.; Carrera Jarrin, E.; Abdalla, H.; Abdelalim, A. A.; Khalil, S.; Bhowmik, S.; Dewanjee, R. K.; Kadastik, M.; Perrini, L.; Raidal, M.; Veelken, C.; Eerola, P.; Kirschenmann, H.; Pekkanen, J.; Voutilainen, M.; Havukainen, J.; Heikkilä, J. K.; Järvinen, T.; Karimäki, V.; Kinnunen, R.; Lampén, T.; Lassila-Perini, K.; Laurila, S.; Lehti, S.; Lindén, T.; Luukka, P.; Mäenpää, T.; Siikonen, H.; Tuominen, E.; Tuominiemi, J.; Tuuva, T.; Besancon, M.; Couderc, F.; Dejardin, M.; Denegri, D.; Faure, J. 
L.; Ferri, F.; Ganjour, S.; Ghosh, S.; Givernaud, A.; Gras, P.; Hamel de Monchenault, G.; Jarry, P.; Leloup, C.; Locci, E.; Machet, M.; Malcles, J.; Negro, G.; Rander, J.; Rosowsky, A.; Sahin, M. Ö.; Titov, M.; Abdulsalam, A.; Amendola, C.; Antropov, I.; Baffioni, S.; Beaudette, F.; Busson, P.; Cadamuro, L.; Charlot, C.; Granier de Cassagnac, R.; Jo, M.; Kucher, I.; Lisniak, S.; Lobanov, A.; Martin Blanco, J.; Nguyen, M.; Ochando, C.; Ortona, G.; Paganini, P.; Pigard, P.; Salerno, R.; Sauvan, J. B.; Sirois, Y.; Stahl Leiton, A. G.; Yilmaz, Y.; Zabi, A.; Zghiche, A.; Agram, J.-L.; Andrea, J.; Bloch, D.; Brom, J.-M.; Buttignol, M.; Chabert, E. C.; Collard, C.; Conte, E.; Coubez, X.; Drouhin, F.; Fontaine, J.-C.; Gelé, D.; Goerlach, U.; Jansová, M.; Juillot, P.; Le Bihan, A.-C.; Tonon, N.; Van Hove, P.; Gadrat, S.; Beauceron, S.; Bernet, C.; Boudoul, G.; Chanon, N.; Chierici, R.; Contardo, D.; Depasse, P.; El Mamouni, H.; Fay, J.; Finco, L.; Gascon, S.; Gouzevitch, M.; Grenier, G.; Ille, B.; Lagarde, F.; Laktineh, I. B.; Lattaud, H.; Lethuillier, M.; Mirabito, L.; Pequegnot, A. L.; Perries, S.; Popov, A.; Sordini, V.; Vander Donckt, M.; Viret, S.; Zhang, S.; Khvedelidze, A.; Lomidze, D.; Autermann, C.; Feld, L.; Kiesel, M. K.; Klein, K.; Lipinski, M.; Preuten, M.; Schomakers, C.; Schulz, J.; Teroerde, M.; Wittmer, B.; Zhukov, V.; Albert, A.; Duchardt, D.; Endres, M.; Erdmann, M.; Erdweg, S.; Esch, T.; Fischer, R.; Güth, A.; Hebbeker, T.; Heidemann, C.; Hoepfner, K.; Knutzen, S.; Merschmeyer, M.; Meyer, A.; Millet, P.; Mukherjee, S.; Pook, T.; Radziej, M.; Reithler, H.; Rieger, M.; Scheuch, F.; Teyssier, D.; Thüer, S.; Flügge, G.; Kargoll, B.; Kress, T.; Künsken, A.; Müller, T.; Nehrkorn, A.; Nowack, A.; Pistone, C.; Pooth, O.; Stahl, A.; Aldaya Martin, M.; Arndt, T.; Asawatangtrakuldee, C.; Beernaert, K.; Behnke, O.; Behrens, U.; Martínez, A. Bermúdez; Bin Anuar, A. 
A.; Borras, K.; Botta, V.; Campbell, A.; Connor, P.; Contreras-Campana, C.; Costanza, F.; De Wit, A.; Diez Pardos, C.; Eckerlin, G.; Eckstein, D.; Eichhorn, T.; Eren, E.; Gallo, E.; Garay Garcia, J.; Geiser, A.; Grados Luyando, J. M.; Grohsjean, A.; Gunnellini, P.; Guthoff, M.; Harb, A.; Hauk, J.; Hempel, M.; Jung, H.; Kasemann, M.; Keaveney, J.; Kleinwort, C.; Korol, I.; Krücker, D.; Lange, W.; Lelek, A.; Lenz, T.; Lipka, K.; Lohmann, W.; Mankel, R.; Melzer-Pellmann, I.-A.; Meyer, A. B.; Meyer, M.; Missiroli, M.; Mittag, G.; Mnich, J.; Mussgiller, A.; Pitzl, D.; Raspereza, A.; Savitskyi, M.; Saxena, P.; Shevchenko, R.; Stefaniuk, N.; Tholen, H.; Van Onsem, G. P.; Walsh, R.; Wen, Y.; Wichmann, K.; Wissing, C.; Zenaiev, O.; Aggleton, R.; Bein, S.; Blobel, V.; Centis Vignali, M.; Dreyer, T.; Garutti, E.; Gonzalez, D.; Haller, J.; Hinzmann, A.; Hoffmann, M.; Karavdina, A.; Kasieczka, G.; Klanner, R.; Kogler, R.; Kovalchuk, N.; Kurz, S.; Marconi, D.; Multhaup, J.; Niedziela, M.; Nowatschin, D.; Peiffer, T.; Perieanu, A.; Reimers, A.; Scharf, C.; Schleper, P.; Schmidt, A.; Schumann, S.; Schwandt, J.; Sonneveld, J.; Stadie, H.; Steinbrück, G.; Stober, F. M.; Stöver, M.; Troendle, D.; Usai, E.; Vanhoefer, A.; Vormwald, B.; Akbiyik, M.; Barth, C.; Baselga, M.; Baur, S.; Butz, E.; Caspart, R.; Chwalek, T.; Colombo, F.; De Boer, W.; Dierlamm, A.; Faltermann, N.; Freund, B.; Friese, R.; Giffels, M.; Harrendorf, M. A.; Hartmann, F.; Heindl, S. M.; Husemann, U.; Kassel, F.; Kudella, S.; Mildner, H.; Mozer, M. U.; Müller, Th.; Plagge, M.; Quast, G.; Rabbertz, K.; Schröder, M.; Shvetsov, I.; Sieber, G.; Simonis, H. 
J.; Ulrich, R.; Wayand, S.; Weber, M.; Weiler, T.; Williamson, S.; Wöhrmann, C.; Wolf, R.; Anagnostou, G.; Daskalakis, G.; Geralis, T.; Kyriakis, A.; Loukas, D.; Topsis-Giotis, I.; Karathanasis, G.; Kesisoglou, S.; Panagiotou, A.; Saoulidou, N.; Tziaferi, E.; Kousouris, K.; Papakrivopoulos, I.; Evangelou, I.; Foudas, C.; Gianneios, P.; Katsoulis, P.; Kokkas, P.; Mallios, S.; Manthos, N.; Papadopoulos, I.; Paradas, E.; Strologas, J.; Triantis, F. A.; Tsitsonis, D.; Csanad, M.; Filipovic, N.; Pasztor, G.; Surányi, O.; Veres, G. I.; Bencze, G.; Hajdu, C.; Horvath, D.; Hunyadi, Á.; Sikler, F.; Veszpremi, V.; Vesztergombi, G.; Vámi, T. Á.; Beni, N.; Czellar, S.; Karancsi, J.; Makovec, A.; Molnar, J.; Szillasi, Z.; Bartók, M.; Raics, P.; Trocsanyi, Z. L.; Ujvari, B.; Choudhury, S.; Komaragiri, J. R.; Bahinipati, S.; Mal, P.; Mandal, K.; Nayak, A.; Sahoo, D. K.; Sahoo, N.; Swain, S. K.; Bansal, S.; Beri, S. B.; Bhatnagar, V.; Chawla, R.; Dhingra, N.; Gupta, R.; Kaur, A.; Kaur, M.; Kaur, S.; Kumar, R.; Kumari, P.; Mehta, A.; Sharma, S.; Singh, J. B.; Walia, G.; Kumar, Ashok; Shah, Aashaq; Bhardwaj, A.; Chauhan, S.; Choudhary, B. C.; Garg, R. B.; Keshri, S.; Kumar, A.; Malhotra, S.; Naimuddin, M.; Ranjan, K.; Sharma, R.; Bhardwaj, R.; Bhattacharya, R.; Bhattacharya, S.; Bhawandeep, U.; Bhowmik, D.; Dey, S.; Dutt, S.; Dutta, S.; Ghosh, S.; Majumdar, N.; Modak, A.; Mondal, K.; Mukhopadhyay, S.; Nandan, S.; Purohit, A.; Rout, P. K.; Roy, A.; Roy Chowdhury, S.; Sarkar, S.; Sharan, M.; Singh, B.; Thakur, S.; Behera, P. K.; Chudasama, R.; Dutta, D.; Jha, V.; Kumar, V.; Mohanty, A. K.; Netrakanti, P. K.; Pant, L. M.; Shukla, P.; Topkar, A.; Aziz, T.; Dugad, S.; Mahakud, B.; Mitra, S.; Mohanty, G. 
B.; Sur, N.; Sutar, B.; Banerjee, S.; Bhattacharya, S.; Chatterjee, S.; Das, P.; Guchait, M.; Jain, Sa.; Kumar, S.; Maity, M.; Majumder, G.; Mazumdar, K.; Sarkar, T.; Wickramage, N.; Chauhan, S.; Dube, S.; Hegde, V.; Kapoor, A.; Kothekar, K.; Pandey, S.; Rane, A.; Sharma, S.; Chenarani, S.; Eskandari Tadavani, E.; Etesami, S. M.; Khakzad, M.; Mohammadi Najafabadi, M.; Naseri, M.; Paktinat Mehdiabadi, S.; Rezaei Hosseinabadi, F.; Safarzadeh, B.; Zeinali, M.; Felcini, M.; Grunewald, M.; Abbrescia, M.; Calabria, C.; Colaleo, A.; Creanza, D.; Cristella, L.; De Filippis, N.; De Palma, M.; Di Florio, A.; Errico, F.; Fiore, L.; Iaselli, G.; Lezki, S.; Maggi, G.; Maggi, M.; Marangelli, B.; Miniello, G.; My, S.; Nuzzo, S.; Pompili, A.; Pugliese, G.; Radogna, R.; Ranieri, A.; Selvaggi, G.; Sharma, A.; Silvestris, L.; Venditti, R.; Verwilligen, P.; Zito, G.; Abbiendi, G.; Battilana, C.; Bonacorsi, D.; Borgonovi, L.; Braibant-Giacomelli, S.; Brigliadori, L.; Campanini, R.; Capiluppi, P.; Castro, A.; Cavallo, F. R.; Chhibra, S. S.; Codispoti, G.; Cuffiani, M.; Dallavalle, G. M.; Fabbri, F.; Fanfani, A.; Fasanella, D.; Giacomelli, P.; Grandi, C.; Guiducci, L.; Iemmi, F.; Marcellini, S.; Masetti, G.; Montanari, A.; Navarria, F. L.; Perrotta, A.; Rovelli, T.; Siroli, G. P.; Tosi, N.; Albergo, S.; Costa, S.; Di Mattia, A.; Giordano, F.; Potenza, R.; Tricomi, A.; Tuve, C.; Barbagli, G.; Chatterjee, K.; Ciulli, V.; Civinini, C.; D'Alessandro, R.; Focardi, E.; Latino, G.; Lenzi, P.; Meschini, M.; Paoletti, S.; Russo, L.; Sguazzoni, G.; Strom, D.; Viliani, L.; Benussi, L.; Bianco, S.; Fabbri, F.; Piccolo, D.; Primavera, F.; Calvelli, V.; Ferro, F.; Ravera, F.; Robutti, E.; Tosi, S.; Benaglia, A.; Beschi, A.; Brianza, L.; Brivio, F.; Ciriolo, V.; Dinardo, M. E.; Fiorendi, S.; Gennai, S.; Ghezzi, A.; Govoni, P.; Malberti, M.; Malvezzi, S.; Manzoni, R. 
A.; Menasce, D.; Moroni, L.; Paganoni, M.; Pauwels, K.; Pedrini, D.; Pigazzini, S.; Ragazzi, S.; Tabarelli de Fatis, T.; Buontempo, S.; Cavallo, N.; Di Guida, S.; Fabozzi, F.; Fienga, F.; Iorio, A. O. M.; Khan, W. A.; Lista, L.; Meola, S.; Paolucci, P.; Sciacca, C.; Thyssen, F.; Azzi, P.; Bacchetta, N.; Benato, L.; Biasotto, M.; Bisello, D.; Boletti, A.; Carlin, R.; Carvalho Antunes De Oliveira, A.; Checchia, P.; Dall'Osso, M.; De Castro Manzano, P.; Dorigo, T.; Dosselli, U.; Gasparini, F.; Gozzelino, A.; Lacaprara, S.; Lujan, P.; Margoni, M.; Meneguzzo, A. T.; Pozzobon, N.; Ronchese, P.; Rossin, R.; Simonetto, F.; Tiko, A.; Torassa, E.; Zanetti, M.; Zotto, P.; Braghieri, A.; Magnani, A.; Montagna, P.; Ratti, S. P.; Re, V.; Ressegotti, M.; Riccardi, C.; Salvini, P.; Vai, I.; Vitulo, P.; Alunni Solestizi, L.; Biasini, M.; Bilei, G. M.; Cecchi, C.; Ciangottini, D.; Fanò, L.; Lariccia, P.; Leonardi, R.; Manoni, E.; Mantovani, G.; Mariani, V.; Menichelli, M.; Rossi, A.; Santocchia, A.; Spiga, D.; Androsov, K.; Azzurri, P.; Bagliesi, G.; Bianchini, L.; Boccali, T.; Borrello, L.; Castaldi, R.; Ciocci, M. A.; Dell'Orso, R.; Fedi, G.; Giannini, L.; Giassi, A.; Grippo, M. T.; Ligabue, F.; Lomtadze, T.; Manca, E.; Mandorli, G.; Messineo, A.; Palla, F.; Rizzi, A.; Spagnolo, P.; Tenchini, R.; Tonelli, G.; Venturi, A.; Verdini, P. G.; Barone, L.; Cavallari, F.; Cipriani, M.; Daci, N.; Del Re, D.; Di Marco, E.; Diemoz, M.; Gelli, S.; Longo, E.; Margaroli, F.; Marzocchi, B.; Meridiani, P.; Organtini, G.; Paramatti, R.; Preiato, F.; Rahatlou, S.; Rovelli, C.; Santanastasio, F.; Amapane, N.; Arcidiacono, R.; Argiro, S.; Arneodo, M.; Bartosik, N.; Bellan, R.; Biino, C.; Cartiglia, N.; Castello, R.; Cenna, F.; Costa, M.; Covarelli, R.; Degano, A.; Demaria, N.; Kiani, B.; Mariotti, C.; Maselli, S.; Migliore, E.; Monaco, V.; Monteil, E.; Monteno, M.; Obertino, M. M.; Pacher, L.; Pastrone, N.; Pelliccioni, M.; Pinna Angioni, G. 
L.; Romero, A.; Ruspa, M.; Sacchi, R.; Shchelina, K.; Sola, V.; Solano, A.; Staiano, A.; Traczyk, P.; Belforte, S.; Casarsa, M.; Cossutti, F.; Della Ricca, G.; Zanetti, A.; Kim, D. H.; Kim, G. N.; Kim, M. S.; Lee, J.; Lee, S.; Lee, S. W.; Moon, C. S.; Oh, Y. D.; Sekmen, S.; Son, D. C.; Yang, Y. C.; Kim, H.; Moon, D. H.; Oh, G.; Brochero Cifuentes, J. A.; Goh, J.; Kim, T. J.; Cho, S.; Choi, S.; Go, Y.; Gyun, D.; Ha, S.; Hong, B.; Jo, Y.; Kim, Y.; Lee, K.; Lee, K. S.; Lee, S.; Lim, J.; Park, S. K.; Roh, Y.; Almond, J.; Kim, J.; Kim, J. S.; Lee, H.; Lee, K.; Nam, K.; Oh, S. B.; Radburn-Smith, B. C.; Seo, S. h.; Yang, U. K.; Yoo, H. D.; Yu, G. B.; Kim, H.; Kim, J. H.; Lee, J. S. H.; Park, I. C.; Choi, Y.; Hwang, C.; Lee, J.; Yu, I.; Dudenas, V.; Juodagalvis, A.; Vaitkus, J.; Ahmed, I.; Ibrahim, Z. A.; Md Ali, M. A. B.; Mohamad Idris, F.; Wan Abdullah, W. A. T.; Yusli, M. N.; Zolkapli, Z.; Reyes-Almanza, R.; Ramirez-Sanchez, G.; Duran-Osuna, M. C.; C., M.; Castilla-Valdez, H.; De La Cruz-Burelo, E.; Heredia-De La Cruz, I.; Rabadan-Trejo, R. I.; Lopez-Fernandez, R.; Mejia Guisao, J.; Sanchez-Hernandez, A.; Carrillo Moreno, S.; Oropeza Barrera, C.; Vazquez Valencia, F.; Eysermans, J.; Pedraza, I.; Salazar Ibarguen, H. A.; Uribe Estrada, C.; Morelos Pineda, A.; Krofcheck, D.; Bheesette, S.; Butler, P. H.; Ahmad, A.; Ahmad, M.; Hassan, Q.; Hoorani, H. R.; Saddique, A.; Shah, M. A.; Shoaib, M.; Waqas, M.; Bialkowska, H.; Bluj, M.; Boimska, B.; Frueboes, T.; Górski, M.; Kazana, M.; Nawrocki, K.; Szleper, M.; Zalewski, P.; Bunkowski, K.; Byszuk, A.; Doroba, K.; Kalinowski, A.; Konecki, M.; Krolikowski, J.; Misiura, M.; Olszewski, M.; Pyskir, A.; Walczak, M.; Bargassa, P.; Beirão Da Cruz E Silva, C.; Di Francesco, A.; Faccioli, P.; Galinhas, B.; Gallinaro, M.; Hollar, J.; Leonardo, N.; Lloret Iglesias, L.; Nemallapudi, M. 
V.; Seixas, J.; Strong, G.; Toldaiev, O.; Vadruccio, D.; Varela, J.; Afanasiev, S.; Bunin, P.; Gavrilenko, M.; Golutvin, I.; Gorbunov, I.; Kamenev, A.; Karjavin, V.; Lanev, A.; Malakhov, A.; Matveev, V.; Moisenz, P.; Palichik, V.; Perelygin, V.; Shmatov, S.; Shulha, S.; Skatchkov, N.; Smirnov, V.; Voytishin, N.; Zarubin, A.; Ivanov, Y.; Kim, V.; Kuznetsova, E.; Levchenko, P.; Murzin, V.; Oreshkin, V.; Smirnov, I.; Sosnov, D.; Sulimov, V.; Uvarov, L.; Vavilov, S.; Vorobyev, A.; Andreev, Yu.; Dermenev, A.; Gninenko, S.; Golubev, N.; Karneyeu, A.; Kirsanov, M.; Krasnikov, N.; Pashenkov, A.; Tlisov, D.; Toropin, A.; Epshteyn, V.; Gavrilov, V.; Lychkovskaya, N.; Popov, V.; Pozdnyakov, I.; Safronov, G.; Spiridonov, A.; Stepennov, A.; Stolin, V.; Toms, M.; Vlasov, E.; Zhokin, A.; Aushev, T.; Bylinkin, A.; Chistov, R.; Danilov, M.; Parygin, P.; Philippov, D.; Polikarpov, S.; Tarkovskii, E.; Andreev, V.; Azarkin, M.; Dremin, I.; Kirakosyan, M.; Rusakov, S. V.; Terkulov, A.; Baskakov, A.; Belyaev, A.; Boos, E.; Bunichev, V.; Dubinin, M.; Dudko, L.; Gribushin, A.; Klyukhin, V.; Korneeva, N.; Lokhtin, I.; Miagkov, I.; Obraztsov, S.; Perfilov, M.; Savrin, V.; Volkov, P.; Blinov, V.; Shtol, D.; Skovpen, Y.; Azhgirey, I.; Bayshev, I.; Bitioukov, S.; Elumakhov, D.; Godizov, A.; Kachanov, V.; Kalinin, A.; Konstantinov, D.; Mandrik, P.; Petrov, V.; Ryutin, R.; Sobol, A.; Troshin, S.; Tyurin, N.; Uzunian, A.; Volkov, A.; Babaev, A.; Adzic, P.; Cirkovic, P.; Devetak, D.; Dordevic, M.; Milosevic, J.; Alcaraz Maestre, J.; Bachiller, I.; Barrio Luna, M.; Cerrada, M.; Colino, N.; De La Cruz, B.; Delgado Peris, A.; Fernandez Bedoya, C.; Fernández Ramos, J. P.; Flix, J.; Fouz, M. C.; Gonzalez Lopez, O.; Goy Lopez, S.; Hernandez, J. M.; Josa, M. I.; Moran, D.; Pérez-Calero Yzquierdo, A.; Puerta Pelayo, J.; Redondo, I.; Romero, L.; Soares, M. S.; Triossi, A.; Álvarez Fernández, A.; Albajar, C.; de Trocóniz, J. 
F.; Cuevas, J.; Erice, C.; Fernandez Menendez, J.; Folgueras, S.; Gonzalez Caballero, I.; González Fernández, J. R.; Palencia Cortezon, E.; Sanchez Cruz, S.; Vischia, P.; Vizan Garcia, J. M.; Cabrillo, I. J.; Calderon, A.; Chazin Quero, B.; Duarte Campderros, J.; Fernandez, M.; Fernández Manteca, P. J.; Garcia-Ferrero, J.; García Alonso, A.; Gomez, G.; Lopez Virto, A.; Marco, J.; Martinez Rivero, C.; Martinez Ruiz del Arbol, P.; Matorras, F.; Piedra Gomez, J.; Prieels, C.; Rodrigo, T.; Ruiz-Jimeno, A.; Scodellaro, L.; Trevisani, N.; Vila, I.; Vilar Cortabitarte, R.; Abbaneo, D.; Akgun, B.; Auffray, E.; Baillon, P.; Ball, A. H.; Barney, D.; Bendavid, J.; Bianco, M.; Bocci, A.; Botta, C.; Camporesi, T.; Cepeda, M.; Cerminara, G.; Chapon, E.; Chen, Y.; d'Enterria, D.; Dabrowski, A.; Daponte, V.; David, A.; De Gruttola, M.; De Roeck, A.; Deelen, N.; Dobson, M.; du Pree, T.; Dünser, M.; Dupont, N.; Elliott-Peisert, A.; Everaerts, P.; Fallavollita, F.; Franzoni, G.; Fulcher, J.; Funk, W.; Gigi, D.; Gilbert, A.; Gill, K.; Glege, F.; Gulhan, D.; Hegeman, J.; Innocente, V.; Jafari, A.; Janot, P.; Karacheban, O.; Kieseler, J.; Knünz, V.; Kornmayer, A.; Kortelainen, M. J.; Krammer, M.; Lange, C.; Lecoq, P.; Lourenço, C.; Lucchini, M. T.; Malgeri, L.; Mannelli, M.; Martelli, A.; Meijers, F.; Merlin, J. A.; Mersi, S.; Meschi, E.; Milenovic, P.; Moortgat, F.; Mulders, M.; Neugebauer, H.; Ngadiuba, J.; Orfanelli, S.; Orsini, L.; Pantaleo, F.; Pape, L.; Perez, E.; Peruzzi, M.; Petrilli, A.; Petrucciani, G.; Pfeiffer, A.; Pierini, M.; Pitters, F. M.; Rabady, D.; Racz, A.; Reis, T.; Rolandi, G.; Rovere, M.; Sakulin, H.; Schäfer, C.; Schwick, C.; Seidel, M.; Selvaggi, M.; Sharma, A.; Silva, P.; Sphicas, P.; Stakia, A.; Steggemann, J.; Stoye, M.; Tosi, M.; Treille, D.; Tsirou, A.; Veckalns, V.; Verweij, M.; Zeuner, W. D.; Bertl, W.; Caminada, L.; Deiters, K.; Erdmann, W.; Horisberger, R.; Ingram, Q.; Kaestli, H. C.; Kotlinski, D.; Langenegger, U.; Rohe, T.; Wiederkehr, S. 
A.; Backhaus, M.; Bäni, L.; Berger, P.; Casal, B.; Dissertori, G.; Dittmar, M.; Donegà, M.; Dorfer, C.; Grab, C.; Heidegger, C.; Hits, D.; Hoss, J.; Klijnsma, T.; Lustermann, W.; Mangano, B.; Marionneau, M.; Meinhard, M. T.; Meister, D.; Micheli, F.; Musella, P.; Nessi-Tedaldi, F.; Pandolfi, F.; Pata, J.; Pauss, F.; Perrin, G.; Perrozzi, L.; Quittnat, M.; Reichmann, M.; Sanz Becerra, D. A.; Schönenberger, M.; Shchutska, L.; Tavolaro, V. R.; Theofilatos, K.; Vesterbacka Olsson, M. L.; Wallny, R.; Zhu, D. H.; Aarrestad, T. K.; Amsler, C.; Brzhechko, D.; Canelli, M. F.; De Cosa, A.; Del Burgo, R.; Donato, S.; Galloni, C.; Hreus, T.; Kilminster, B.; Neutelings, I.; Pinna, D.; Rauco, G.; Robmann, P.; Salerno, D.; Schweiger, K.; Seitz, C.; Takahashi, Y.; Zucchetta, A.; Candelise, V.; Chang, Y. H.; Cheng, K. y.; Doan, T. H.; Jain, Sh.; Khurana, R.; Kuo, C. M.; Lin, W.; Pozdnyakov, A.; Yu, S. S.; Kumar, Arun; Chang, P.; Chao, Y.; Chen, K. F.; Chen, P. H.; Fiori, F.; Hou, W.-S.; Hsiung, Y.; Liu, Y. F.; Lu, R.-S.; Paganis, E.; Psallidas, A.; Steen, A.; Tsai, J. f.; Asavapibhop, B.; Kovitanggoon, K.; Singh, G.; Srimanobhas, N.; Bat, A.; Boran, F.; Cerci, S.; Damarseckin, S.; Demiroglu, Z. S.; Dozen, C.; Dumanoglu, I.; Girgis, S.; Gokbulut, G.; Guler, Y.; Hos, I.; Kangal, E. E.; Kara, O.; Kayis Topaksu, A.; Kiminsu, U.; Oglakci, M.; Onengut, G.; Ozdemir, K.; Sunar Cerci, D.; Tok, U. G.; Topakli, H.; Turkcapar, S.; Zorbakir, I. S.; Zorbilmez, C.; Karapinar, G.; Ocalan, K.; Yalvac, M.; Zeyrek, M.; Gülmez, E.; Kaya, M.; Kaya, O.; Tekten, S.; Yetkin, E. A.; Agaras, M. N.; Atay, S.; Cakir, A.; Cankocak, K.; Komurcu, Y.; Grynyov, B.; Levchuk, L.; Ball, F.; Beck, L.; Brooke, J. J.; Burns, D.; Clement, E.; Cussans, D.; Davignon, O.; Flacher, H.; Goldstein, J.; Heath, G. P.; Heath, H. F.; Kreczko, L.; Newbold, D. M.; Paramesvaran, S.; Sakuma, T.; Seif El Nasr-storey, S.; Smith, D.; Smith, V. J.; Bell, K. W.; Belyaev, A.; Brew, C.; Brown, R. M.; Calligaris, L.; Cieri, D.; Cockerill, D. 
J. A.; Coughlan, J. A.; Harder, K.; Harper, S.; Linacre, J.; Olaiya, E.; Petyt, D.; Shepherd-Themistocleous, C. H.; Thea, A.; Tomalin, I. R.; Williams, T.; Womersley, W. J.; Auzinger, G.; Bainbridge, R.; Bloch, P.; Borg, J.; Breeze, S.; Buchmuller, O.; Bundock, A.; Casasso, S.; Colling, D.; Corpe, L.; Dauncey, P.; Davies, G.; Della Negra, M.; Di Maria, R.; Haddad, Y.; Hall, G.; Iles, G.; James, T.; Komm, M.; Lane, R.; Laner, C.; Lyons, L.; Magnan, A.-M.; Malik, S.; Mastrolorenzo, L.; Matsushita, T.; Nash, J.; Nikitenko, A.; Palladino, V.; Pesaresi, M.; Richards, A.; Rose, A.; Scott, E.; Seez, C.; Shtipliyski, A.; Strebler, T.; Summers, S.; Tapper, A.; Uchida, K.; Vazquez Acosta, M.; Virdee, T.; Wardle, N.; Winterbottom, D.; Wright, J.; Zenz, S. C.; Cole, J. E.; Hobson, P. R.; Khan, A.; Kyberd, P.; Morton, A.; Reid, I. D.; Teodorescu, L.; Zahid, S.; Borzou, A.; Call, K.; Dittmann, J.; Hatakeyama, K.; Liu, H.; Pastika, N.; Smith, C.; Bartek, R.; Dominguez, A.; Buccilli, A.; Cooper, S. I.; Henderson, C.; Rumerio, P.; West, C.; Arcaro, D.; Avetisyan, A.; Bose, T.; Gastler, D.; Rankin, D.; Richardson, C.; Rohlf, J.; Sulak, L.; Zou, D.; Benelli, G.; Cutts, D.; Hadley, M.; Hakala, J.; Heintz, U.; Hogan, J. M.; Kwok, K. H. M.; Laird, E.; Landsberg, G.; Lee, J.; Mao, Z.; Narain, M.; Pazzini, J.; Piperov, S.; Sagir, S.; Syarif, R.; Yu, D.; Band, R.; Brainerd, C.; Breedon, R.; Burns, D.; Calderon De La Barca Sanchez, M.; Chertok, M.; Conway, J.; Conway, R.; Cox, P. T.; Erbacher, R.; Flores, C.; Funk, G.; Ko, W.; Lander, R.; Mclean, C.; Mulhearn, M.; Pellett, D.; Pilot, J.; Shalhout, S.; Shi, M.; Smith, J.; Stolp, D.; Taylor, D.; Tos, K.; Tripathi, M.; Wang, Z.; Zhang, F.; Bachtis, M.; Bravo, C.; Cousins, R.; Dasgupta, A.; Florent, A.; Hauser, J.; Ignatenko, M.; Mccoll, N.; Regnard, S.; Saltzberg, D.; Schnaible, C.; Valuev, V.; Bouvier, E.; Burt, K.; Clare, R.; Ellison, J.; Gary, J. W.; Ghiasi Shirazi, S. M. A.; Hanson, G.; Karapostoli, G.; Kennedy, E.; Lacroix, F.; Long, O. 
R.; Olmedo Negrete, M.; Paneva, M. I.; Si, W.; Wang, L.; Wei, H.; Wimpenny, S.; Yates, B. R.; Branson, J. G.; Cittolin, S.; Derdzinski, M.; Gerosa, R.; Gilbert, D.; Hashemi, B.; Holzner, A.; Klein, D.; Kole, G.; Krutelyov, V.; Letts, J.; Masciovecchio, M.; Olivito, D.; Padhi, S.; Pieri, M.; Sani, M.; Sharma, V.; Simon, S.; Tadel, M.; Vartak, A.; Wasserbaech, S.; Wood, J.; Würthwein, F.; Yagil, A.; Zevi Della Porta, G.; Amin, N.; Bhandari, R.; Bradmiller-Feld, J.; Campagnari, C.; Citron, M.; Dishaw, A.; Dutta, V.; Franco Sevilla, M.; Gouskos, L.; Heller, R.; Incandela, J.; Ovcharova, A.; Qu, H.; Richman, J.; Stuart, D.; Suarez, I.; Yoo, J.; Anderson, D.; Bornheim, A.; Bunn, J.; Lawhorn, J. M.; Newman, H. B.; Nguyen, T. Q.; Pena, C.; Spiropulu, M.; Vlimant, J. R.; Wilkinson, R.; Xie, S.; Zhang, Z.; Zhu, R. Y.; Andrews, M. B.; Ferguson, T.; Mudholkar, T.; Paulini, M.; Russ, J.; Sun, M.; Vogel, H.; Vorobiev, I.; Weinberg, M.; Cumalat, J. P.; Ford, W. T.; Jensen, F.; Johnson, A.; Krohn, M.; Leontsinis, S.; Macdonald, E.; Mulholland, T.; Stenson, K.; Ulmer, K. A.; Wagner, S. R.; Alexander, J.; Chaves, J.; Cheng, Y.; Chu, J.; Datta, A.; Dittmer, S.; Mcdermott, K.; Mirman, N.; Patterson, J. R.; Quach, D.; Rinkevicius, A.; Ryd, A.; Skinnari, L.; Soffi, L.; Tan, S. M.; Tao, Z.; Thom, J.; Tucker, J.; Wittich, P.; Zientek, M.; Abdullin, S.; Albrow, M.; Alyari, M.; Apollinari, G.; Apresyan, A.; Apyan, A.; Banerjee, S.; Bauerdick, L. A. T.; Beretvas, A.; Berryhill, J.; Bhat, P. C.; Bolla, G.; Burkett, K.; Butler, J. N.; Canepa, A.; Cerati, G. B.; Cheung, H. W. K.; Chlebana, F.; Cremonesi, M.; Duarte, J.; Elvira, V. D.; Freeman, J.; Gecse, Z.; Gottschalk, E.; Gray, L.; Green, D.; Grünendahl, S.; Gutsche, O.; Hanlon, J.; Harris, R. M.; Hasegawa, S.; Hirschauer, J.; Hu, Z.; Jayatilaka, B.; Jindariani, S.; Johnson, M.; Joshi, U.; Klima, B.; Kreis, B.; Lammel, S.; Lincoln, D.; Lipton, R.; Liu, M.; Liu, T.; Lopes De Sá, R.; Lykken, J.; Maeshima, K.; Magini, N.; Marraffino, J. 
M.; Mason, D.; McBride, P.; Merkel, P.; Mrenna, S.; Nahn, S.; O'Dell, V.; Pedro, K.; Prokofyev, O.; Rakness, G.; Ristori, L.; Savoy-Navarro, A.; Schneider, B.; Sexton-Kennedy, E.; Soha, A.; Spalding, W. J.; Spiegel, L.; Stoynev, S.; Strait, J.; Strobbe, N.; Taylor, L.; Tkaczyk, S.; Tran, N. V.; Uplegger, L.; Vaandering, E. W.; Vernieri, C.; Verzocchi, M.; Vidal, R.; Wang, M.; Weber, H. A.; Whitbeck, A.; Wu, W.; Acosta, D.; Avery, P.; Bortignon, P.; Bourilkov, D.; Brinkerhoff, A.; Carnes, A.; Carver, M.; Curry, D.; Field, R. D.; Furic, I. K.; Gleyzer, S. V.; Joshi, B. M.; Konigsberg, J.; Korytov, A.; Kotov, K.; Ma, P.; Matchev, K.; Mei, H.; Mitselmakher, G.; Shi, K.; Sperka, D.; Terentyev, N.; Thomas, L.; Wang, J.; Wang, S.; Yelton, J.; Joshi, Y. R.; Linn, S.; Markowitz, P.; Rodriguez, J. L.; Ackert, A.; Adams, T.; Askew, A.; Hagopian, S.; Hagopian, V.; Johnson, K. F.; Kolberg, T.; Martinez, G.; Perry, T.; Prosper, H.; Saha, A.; Santra, A.; Sharma, V.; Yohay, R.; Baarmand, M. M.; Bhopatkar, V.; Colafranceschi, S.; Hohlmann, M.; Noonan, D.; Roy, T.; Yumiceva, F.; Adams, M. R.; Apanasevich, L.; Berry, D.; Betts, R. R.; Cavanaugh, R.; Chen, X.; Evdokimov, O.; Gerber, C. E.; Hangal, D. A.; Hofman, D. J.; Jung, K.; Kamin, J.; Sandoval Gonzalez, I. D.; Tonjes, M. B.; Varelas, N.; Wang, H.; Wu, Z.; Zhang, J.; Bilki, B.; Clarida, W.; Dilsiz, K.; Durgut, S.; Gandrajula, R. P.; Haytmyradov, M.; Khristenko, V.; Merlo, J.-P.; Mermerkaya, H.; Mestvirishvili, A.; Moeller, A.; Nachtman, J.; Ogul, H.; Onel, Y.; Ozok, F.; Penzo, A.; Snyder, C.; Tiras, E.; Wetzel, J.; Yi, K.; Blumenfeld, B.; Cocoros, A.; Eminizer, N.; Fehling, D.; Feng, L.; Gritsan, A. V.; Maksimovic, P.; Roskes, J.; Sarica, U.; Swartz, M.; Xiao, M.; You, C.; Al-bataineh, A.; Baringer, P.; Bean, A.; Boren, S.; Bowen, J.; Castle, J.; Khalil, S.; Kropivnitskaya, A.; Majumder, D.; Mcbrayer, W.; Murray, M.; Rogan, C.; Royon, C.; Sanders, S.; Schmitz, E.; Tapia Takaki, J. 
D.; Wang, Q.; Ivanov, A.; Kaadze, K.; Maravin, Y.; Mohammadi, A.; Saini, L. K.; Skhirtladze, N.; Rebassoo, F.; Wright, D.; Baden, A.; Baron, O.; Belloni, A.; Eno, S. C.; Feng, Y.; Ferraioli, C.; Hadley, N. J.; Jabeen, S.; Jeng, G. Y.; Kellogg, R. G.; Kunkle, J.; Mignerey, A. C.; Ricci-Tam, F.; Shin, Y. H.; Skuja, A.; Tonwar, S. C.; Abercrombie, D.; Allen, B.; Azzolini, V.; Barbieri, R.; Baty, A.; Bauer, G.; Bi, R.; Brandt, S.; Busza, W.; Cali, I. A.; D'Alfonso, M.; Demiragli, Z.; Gomez Ceballos, G.; Goncharov, M.; Harris, P.; Hsu, D.; Hu, M.; Iiyama, Y.; Innocenti, G. M.; Klute, M.; Kovalskyi, D.; Lee, Y.-J.; Levin, A.; Luckey, P. D.; Maier, B.; Marini, A. C.; Mcginn, C.; Mironov, C.; Narayanan, S.; Niu, X.; Paus, C.; Roland, C.; Roland, G.; Salfeld-Nebgen, J.; Stephans, G. S. F.; Sumorok, K.; Tatar, K.; Velicanu, D.; Wang, J.; Wang, T. W.; Wyslouch, B.; Zhaozhong, S.; Benvenuti, A. C.; Chatterjee, R. M.; Evans, A.; Hansen, P.; Kalafut, S.; Kubota, Y.; Lesko, Z.; Mans, J.; Nourbakhsh, S.; Ruckstuhl, N.; Rusack, R.; Turkewitz, J.; Wadud, M. A.; Acosta, J. G.; Oliveros, S.; Avdeeva, E.; Bloom, K.; Claes, D. R.; Fangmeier, C.; Golf, F.; Gonzalez Suarez, R.; Kamalieddin, R.; Kravchenko, I.; Monroy, J.; Siado, J. E.; Snow, G. R.; Stieger, B.; Dolen, J.; Godshalk, A.; Harrington, C.; Iashvili, I.; Nguyen, D.; Parker, A.; Rappoccio, S.; Roozbahani, B.; Alverson, G.; Barberis, E.; Freer, C.; Hortiangtham, A.; Massironi, A.; Morse, D. M.; Orimoto, T.; Teixeira De Lima, R.; Wamorkar, T.; Wang, B.; Wisecarver, A.; Wood, D.; Bhattacharya, S.; Charaf, O.; Hahn, K. A.; Mucia, N.; Odell, N.; Schmitt, M. H.; Sung, K.; Trovato, M.; Velasco, M.; Bucci, R.; Dev, N.; Hildreth, M.; Hurtado Anampa, K.; Jessop, C.; Karmgard, D. 
J.; Kellams, N.; Lannon, K.; Li, W.; Loukas, N.; Marinelli, N.; Meng, F.; Mueller, C.; Musienko, Y.; Planer, M.; Reinsvold, A.; Ruchti, R.; Siddireddy, P.; Smith, G.; Taroni, S.; Wayne, M.; Wightman, A.; Wolf, M.; Woodard, A.; Alimena, J.; Antonelli, L.; Bylsma, B.; Durkin, L. S.; Flowers, S.; Francis, B.; Hart, A.; Hill, C.; Ji, W.; Ling, T. Y.; Luo, W.; Winer, B. L.; Wulsin, H. W.; Cooperstein, S.; Driga, O.; Elmer, P.; Hardenbrook, J.; Hebda, P.; Higginbotham, S.; Kalogeropoulos, A.; Lange, D.; Luo, J.; Marlow, D.; Mei, K.; Ojalvo, I.; Olsen, J.; Palmer, C.; Piroué, P.; Stickland, D.; Tully, C.; Malik, S.; Norberg, S.; Barker, A.; Barnes, V. E.; Das, S.; Gutay, L.; Jones, M.; Jung, A. W.; Khatiwada, A.; Miller, D. H.; Neumeister, N.; Peng, C. C.; Qiu, H.; Schulte, J. F.; Sun, J.; Wang, F.; Xiao, R.; Xie, W.; Cheng, T.; Parashar, N.; Chen, Z.; Ecklund, K. M.; Freed, S.; Geurts, F. J. M.; Guilbaud, M.; Kilpatrick, M.; Li, W.; Michlin, B.; Padley, B. P.; Roberts, J.; Rorie, J.; Shi, W.; Tu, Z.; Zabel, J.; Zhang, A.; Bodek, A.; de Barbaro, P.; Demina, R.; Duh, Y. t.; Ferbel, T.; Galanti, M.; Garcia-Bellido, A.; Han, J.; Hindrichs, O.; Khukhunaishvili, A.; Lo, K. H.; Tan, P.; Verzetti, M.; Ciesielski, R.; Goulianos, K.; Mesropian, C.; Agapitos, A.; Chou, J. P.; Gershtein, Y.; Gómez Espinosa, T. A.; Halkiadakis, E.; Heindl, M.; Hughes, E.; Kaplan, S.; Kunnawalkam Elayavalli, R.; Kyriacou, S.; Lath, A.; Montalvo, R.; Nash, K.; Osherson, M.; Saka, H.; Salur, S.; Schnetzer, S.; Sheffield, D.; Somalwar, S.; Stone, R.; Thomas, S.; Thomassen, P.; Walker, M.; Delannoy, A. G.; Heideman, J.; Riley, G.; Rose, K.; Spanier, S.; Thapa, K.; Bouhali, O.; Castaneda Hernandez, A.; Celik, A.; Dalchenko, M.; De Mattia, M.; Delgado, A.; Dildick, S.; Eusebi, R.; Gilmore, J.; Huang, T.; Kamon, T.; Mueller, R.; Pakhotin, Y.; Patel, R.; Perloff, A.; Perniè, L.; Rathjens, D.; Safonov, A.; Tatarinov, A.; Akchurin, N.; Damgov, J.; De Guio, F.; Dudero, P. 
R.; Faulkner, J.; Gurpinar, E.; Kunori, S.; Lamichhane, K.; Lee, S. W.; Mengke, T.; Muthumuni, S.; Peltola, T.; Undleeb, S.; Volobouev, I.; Wang, Z.; Greene, S.; Gurrola, A.; Janjam, R.; Johns, W.; Maguire, C.; Melo, A.; Ni, H.; Padeken, K.; Sheldon, P.; Tuo, S.; Velkovska, J.; Xu, Q.; Arenton, M. W.; Barria, P.; Cox, B.; Hirosky, R.; Joyce, M.; Ledovskoy, A.; Li, H.; Neu, C.; Sinthuprasith, T.; Wang, Y.; Wolfe, E.; Xia, F.; Harr, R.; Karchin, P. E.; Poudyal, N.; Sturdy, J.; Thapa, P.; Zaleski, S.; Brodski, M.; Buchanan, J.; Caillol, C.; Carlsmith, D.; Dasu, S.; Dodd, L.; Duric, S.; Gomber, B.; Grothe, M.; Herndon, M.; Hervé, A.; Hussain, U.; Klabbers, P.; Lanaro, A.; Levine, A.; Long, K.; Loveless, R.; Rekovic, V.; Ruggles, T.; Savin, A.; Smith, N.; Smith, W. H.; Woods, N.
2018-06-01
Measurements of differential $t\overline{t}$ production cross sections are presented in the single-lepton decay channel, as a function of a number of kinematic event variables. The measurements are performed with proton-proton collision data at $\sqrt{s} = 13$ TeV, collected by the CMS experiment at the LHC during 2016, corresponding to an integrated luminosity of 35.9 fb^-1. The data are compared to a variety of state-of-the-art leading-order and next-to-leading-order $t\overline{t}$ simulations.
The Evolution of On-Board Emergency Training for the International Space Station Crew
NASA Technical Reports Server (NTRS)
LaBuff, Skyler
2015-01-01
The crew of the International Space Station (ISS) receives extensive ground-training in order to safely and effectively respond to any potential emergency event while on-orbit, but few people realize that their training is not concluded when they launch into space. The evolution of the emergency On-Board Training events (OBTs) has recently moved from paper "scripts" to an intranet-based software simulation that allows the crew, as well as the flight control teams in Mission Control Centers across the world, to share in an improved and more realistic training event. This emergency OBT simulator ensures that the participants experience the training event as it unfolds, completely unaware of the type, location, or severity of the simulated emergency until the scenario begins. The crew interfaces with the simulation software via iPads that they keep with them as they translate through the ISS modules, receiving prompts and information as they proceed through the response. Personnel in the control centers bring up the simulation via an intranet browser at their console workstations, and can view additional telemetry signatures in simulated ground displays in order to assist the crew and communicate vital information to them as applicable. The Chief Training Officers and emergency instructors set the simulation in motion, choosing the type of emergency (rapid depressurization, fire, or toxic atmosphere) and specific initial conditions to emphasize the desired training objectives. Project development, testing, and implementation were a collaborative effort between ISS emergency instructors, Chief Training Officers, Flight Directors, and the Crew Office, using commercial off-the-shelf (COTS) hardware along with simulation software created in-house.
Due to the success of the Emergency OBT simulator, the already-developed software has been leveraged and repurposed to develop a new emulator used during fire response ground-training to deliver data that the crew receives from the handheld Compound Specific Analyzer for Combustion Products (CSA-CP). This CSA-CP emulator makes use of a portion of the Emergency OBT simulator codebase dealing with atmospheric contamination during fire scenarios, and feeds various data signatures to the crew via an iPod Touch with a flight-like CSA-CP display. These innovative simulations, which make use of COTS hardware with custom in-house software, have yielded drastic improvements to emergency training effectiveness and risk reduction for ISS crew and flight control teams during on-orbit and ground training events.
NASA Astrophysics Data System (ADS)
Chang, W.; Wang, J.; Marohnic, J.; Kotamarthi, V. R.; Moyer, E. J.
2017-12-01
We use a novel rainstorm identification and tracking algorithm (Chang et al., 2016) to evaluate how much resolved convection improves the fidelity with which high-resolution regional simulations capture precipitation characteristics. The algorithm allocates all precipitation to individual rainstorms, including low-intensity events with complicated features, and allows us to decompose changes or biases in total mean precipitation into their causes: event size, intensity, number, and duration. Because it uses a low tracking threshold, it captures nearly all rainfall and tracks events that are clearly meteorologically related across lifespans of up to several days. We evaluate a series of dynamically downscaled simulations of the summertime United States at 12 and 4 km under different model configurations, and find that resolved convection offers the largest gains in reducing biases in precipitation characteristics, especially in event size. Simulations with parametrized convection produce event sizes 80-220% too large in extent; with resolved convection the bias is reduced to 30%. The algorithm also allows us to demonstrate that the diurnal cycle in rainfall stems not from temporal variation in the production of new events but from diurnal fluctuations in rainfall from existing events. We show further that model errors in the diurnal cycle are best represented as additive offsets that differ by time of day, and again that convection-permitting simulations are most effective in reducing these additive biases.
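The decomposition described above can be illustrated with a toy model. Assuming, as a rough simplification, that total mean precipitation scales as the product of event number, mean size, mean intensity, and mean duration (the algorithm's actual accounting is event-by-event, so this is only a sketch; the numbers below are illustrative, not the paper's):

```python
# Toy decomposition of a total-precipitation bias into event-level factors.
# Simplifying assumption: total precip ~ number * size * intensity * duration,
# so fractional biases in the factors combine multiplicatively.
def total_precip(number, size_km2, intensity_mm_h, duration_h):
    """Total rainfall 'volume' proxy: event count x area x rate x lifetime."""
    return number * size_km2 * intensity_mm_h * duration_h

obs = dict(number=100, size_km2=500.0, intensity_mm_h=2.0, duration_h=3.0)
# A parameterized-convection run whose event sizes are ~2x too large,
# consistent with the 80-220% size biases quoted in the abstract:
sim = dict(number=100, size_km2=1000.0, intensity_mm_h=2.0, duration_h=3.0)

bias = total_precip(**sim) / total_precip(**obs) - 1.0
size_bias = sim["size_km2"] / obs["size_km2"] - 1.0
print(f"total bias {bias:+.0%}, attributable entirely to size bias {size_bias:+.0%}")
```

With all other factors unbiased, the total-precipitation bias equals the size bias, which is why reducing the size error with resolved convection pays off so directly.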
Minor, A V; Kaissling, K-E
2003-03-01
Olfactory receptor cells of the silkmoth Bombyx mori respond to single pheromone molecules with "elementary" electrical events that appear as discrete "bumps" a few milliseconds in duration, or bursts of bumps. As revealed by simulation, one bump may result from a series of random openings of one or several ion channels, producing an average inward membrane current of 1.5 pA. The distributions of durations of bumps and of gaps between bumps in a burst can be fitted by single exponentials with time constants of 10.2 ms and 40.5 ms, respectively. The distribution of burst durations is a sum of two exponentials; the number of bumps per burst obeys a geometric distribution (mean 3.2 bumps per burst). Accordingly, the elementary events could reflect transitions among three states of the pheromone receptor molecule: the vacant receptor (state 1), the pheromone-receptor complex (state 2), and the activated complex (state 3). The calculated rate constants of the transitions between states are k(21)=7.7 s(-1), k(23)=16.8 s(-1), and k(32)=98 s(-1).
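The quoted rate constants are internally consistent with the fitted distributions: from state 2 the complex either activates again (probability k23/(k21+k23)) or dissociates, ending the burst, so the number of bumps in an observable burst is geometric with mean (k21+k23)/k21 ≈ 3.2. A small Monte Carlo sketch, assuming the three-state scheme exactly as described, can check this:

```python
import random

# Rate constants (s^-1) from the fitted scheme: state 1 = vacant receptor,
# state 2 = pheromone-receptor complex, state 3 = activated complex (bump).
K21, K23, K32 = 7.7, 16.8, 98.0

def simulate_burst(rng):
    """Return the number of bumps in one burst, starting from state 2.

    A burst is the sequence of visits to state 3 before the complex
    dissociates back to state 1. (Gap and bump durations would be
    exponential with means 1/(K21+K23) ~ 40.8 ms and 1/K32 ~ 10.2 ms,
    matching the fitted time constants.)
    """
    bumps = 0
    while True:
        # From state 2: activate (-> state 3) or dissociate (-> state 1)?
        if rng.random() < K23 / (K21 + K23):
            bumps += 1        # one visit to state 3 = one bump
        else:
            return bumps      # back to state 1: burst over

def mean_bumps_per_burst(n=100_000, seed=1):
    rng = random.Random(seed)
    counts = [simulate_burst(rng) for _ in range(n)]
    observed = [c for c in counts if c > 0]  # observable bursts have >= 1 bump
    return sum(observed) / len(observed)
```

Running `mean_bumps_per_burst()` gives a value close to the reported 3.2 bumps per burst.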
Initialization and Simulation of Three-Dimensional Aircraft Wake Vortices
NASA Technical Reports Server (NTRS)
Ash, Robert L.; Zheng, Z. C.
1997-01-01
This paper studies the effects of axial velocity profiles on vortex decay in order to properly initialize and simulate three-dimensional wake vortex flow. Analytical relationships are obtained based on a single-vortex model, and computational simulations are performed for a realistic vortex wake; these show that the single-vortex analytical relations remain applicable at certain streamwise sections of three-dimensional wake vortices.
Ndindjock, Roger; Gedeon, Jude; Mendis, Shanthi; Paccaud, Fred
2011-01-01
Objective To assess the prevalence of cardiovascular (CV) risk factors in Seychelles, a middle-income African country, and compare the cost-effectiveness of single-risk-factor management (treating individuals with arterial blood pressure ≥ 140/90 mmHg and/or total serum cholesterol ≥ 6.2 mmol/l) with that of management based on total CV risk (treating individuals with a total CV risk ≥ 10% or ≥ 20%). Methods CV risk factor prevalence and a CV risk prediction chart for Africa were used to estimate the 10-year risk of suffering a fatal or non-fatal CV event among individuals aged 40–64 years. These figures were used to compare single-risk-factor management with total risk management in terms of the number of people requiring treatment to avert one CV event and the number of events potentially averted over 10 years. Treatment for patients with high total CV risk (≥ 20%) was assumed to consist of a fixed-dose combination of several drugs (polypill). Cost analyses were limited to medication. Findings A total CV risk of ≥ 10% and ≥ 20% was found among 10.8% and 5.1% of individuals, respectively. With single-risk-factor management, 60% of adults would need to be treated and 157 cardiovascular events per 100 000 population would be averted per year, as opposed to 5% of adults and 92 events with total CV risk management. Management based on high total CV risk optimizes the balance between the number requiring treatment and the number of CV events averted. Conclusion Total CV risk management is much more cost-effective than single-risk-factor management. These findings are relevant for all countries, but especially for those economically and demographically similar to Seychelles. PMID:21479093
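The trade-off reported in the Findings can be made concrete with the abstract's own figures, expressed as people treated per event averted per year. The helper function below is illustrative only; the input numbers come directly from the abstract.

```python
def treated_per_event_averted(treated_fraction, events_averted_per_100k):
    """People who must be treated (per 100 000 adults) to avert one
    cardiovascular event per year, using the figures from the abstract."""
    treated_per_100k = treated_fraction * 100_000
    return treated_per_100k / events_averted_per_100k

# Single-risk-factor management: 60% treated, 157 events averted / 100 000 / yr
single = treated_per_event_averted(0.60, 157)   # ~382 treated per event averted
# Total-CV-risk management: 5% treated, 92 events averted / 100 000 / yr
total = treated_per_event_averted(0.05, 92)     # ~54 treated per event averted
```

The roughly sevenfold difference in treated-per-event-averted is what drives the cost-effectiveness conclusion.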
NASA Astrophysics Data System (ADS)
Hoose, C.; Lohmann, U.; Stier, P.; Verheggen, B.; Weingartner, E.; Herich, H.
2007-12-01
The global aerosol-climate model ECHAM5-HAM (Stier et al., 2005) has been extended by an explicit treatment of cloud-borne particles. Two additional modes for in-droplet and in-crystal particles are introduced, which are coupled to the cloud droplet and ice crystal number concentrations simulated by the ECHAM5 double-moment cloud microphysics scheme (Lohmann et al., 2007). Transfer, production and removal of cloud-borne aerosol number and mass by cloud droplet activation, collision scavenging, aqueous-phase sulfate production, freezing, melting, evaporation, sublimation and precipitation formation are taken into account. The model performance is demonstrated and validated with observations of the evolution of total and interstitial aerosol concentrations and size distributions during three different mixed-phase cloud events at the alpine high-altitude research station Jungfraujoch (Switzerland) (Verheggen et al., 2007). Although the single-column simulations cannot be compared one-to-one with the observations, the governing processes in the evolution of the cloud and aerosol parameters are captured qualitatively well. High scavenged fractions are found during the presence of liquid water, while the release of particles during the Bergeron-Findeisen process results in low scavenged fractions after cloud glaciation. The observed coexistence of liquid and ice, which might be related to cloud heterogeneity at subgrid scales, can only be simulated in the model when forcing non-equilibrium conditions. References: U. Lohmann et al., Cloud microphysics and aerosol indirect effects in the global climate model ECHAM5-HAM, Atmos. Chem. Phys. 7, 3425-3446 (2007); P. Stier et al., The aerosol-climate model ECHAM5-HAM, Atmos. Chem. Phys. 5, 1125-1156 (2005); B. Verheggen et al., Aerosol partitioning between the interstitial and the condensed phase in mixed-phase clouds, accepted for publication in J. Geophys. Res. (2007)
Temperature dependence of single-event burnout in n-channel power MOSFET's
NASA Astrophysics Data System (ADS)
Johnson, G. H.; Schrimpf, R. D.; Galloway, K. F.; Koga, R.
1994-03-01
The temperature dependence of single-event burnout (SEB) in n-channel power metal-oxide-semiconductor field effect transistors (MOSFET's) is investigated experimentally and analytically. Experimental data are presented which indicate that the SEB susceptibility of the power MOSFET decreases with increasing temperature. A previously reported analytical model that describes the SEB mechanism is updated to include temperature variations. This model is shown to agree with the experimental trends.
Wei, Ching-Yun; Quek, Ruben G W; Villa, Guillermo; Gandra, Shravanthi R; Forbes, Carol A; Ryder, Steve; Armstrong, Nigel; Deshpande, Sohan; Duffy, Steven; Kleijnen, Jos; Lindgren, Peter
2017-03-01
Previous reviews have evaluated economic analyses of lipid-lowering therapies using lipid levels as surrogate markers for cardiovascular disease. However, drug approval and health technology assessment agencies have stressed that surrogates should only be used in the absence of clinical endpoints. The aim of this systematic review was to identify and summarise the methodologies, weaknesses and strengths of economic models based on atherosclerotic cardiovascular disease event rates. Cost-effectiveness evaluations of lipid-lowering therapies using cardiovascular event rates in adults with hyperlipidaemia were sought in Medline, Embase, Medline In-Process, PubMed and NHS EED and conference proceedings. Search results were independently screened, extracted and quality checked by two reviewers. Searches until February 2016 retrieved 3443 records, from which 26 studies (29 publications) were selected. Twenty-two studies evaluated secondary prevention (four also assessed primary prevention), two considered only primary prevention and two included mixed primary and secondary prevention populations. Most studies (18) based treatment-effect estimates on single trials, although more recent evaluations deployed meta-analyses (5/10 over the last 10 years). Markov models (14 studies) were most commonly used and only one study employed discrete event simulation. Models varied particularly in terms of health states and treatment-effect duration. No studies used a systematic review to obtain utilities. Most studies took a healthcare perspective (21/26) and sourced resource use from key trials instead of local data. Overall, reporting quality was suboptimal. This review reveals methodological changes over time, but reporting weaknesses remain, particularly with respect to transparency of model reporting.
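Most of the evaluations reviewed used Markov cohort models. A minimal three-state sketch (Well, Post-event, Dead) of that general kind is given below; all transition probabilities are invented for illustration and do not come from any study in the review. Real evaluations attach costs and utilities to the state occupancies this produces.

```python
def run_markov_cohort(p_event, p_death_post_event, p_death_background,
                      cycles=10, cohort=1.0):
    """Trace a cohort through Well / Post-event / Dead health states.

    All probabilities are per annual cycle and hypothetical. Returns the
    state occupancy (well, post, dead) after each cycle; occupancies sum
    to the cohort size because every exit from a state is accounted for.
    """
    well, post, dead = cohort, 0.0, 0.0
    trace = []
    for _ in range(cycles):
        new_dead = dead + well * p_death_background + post * p_death_post_event
        new_post = post * (1 - p_death_post_event) + well * p_event
        new_well = well * (1 - p_event - p_death_background)
        well, post, dead = new_well, new_post, new_dead
        trace.append((well, post, dead))
    return trace

# Hypothetical annual probabilities: 2% CV event, 5% post-event mortality,
# 1% background mortality, traced for 10 one-year cycles.
trace = run_markov_cohort(p_event=0.02, p_death_post_event=0.05,
                          p_death_background=0.01)
```

A discrete event simulation, used by only one study in the review, would instead sample individual event times rather than cycling a cohort through fixed-length periods.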
Event-driven simulation in SELMON: An overview of EDSE
NASA Technical Reports Server (NTRS)
Rouquette, Nicolas F.; Chien, Steve A.; Charest, Leonard, Jr.
1992-01-01
EDSE (event-driven simulation engine), a model-based event-driven simulator implemented for SELMON, a tool for sensor selection and anomaly detection in real-time monitoring, is described. The simulator is used in conjunction with a causal model to predict future behavior of the model from observed data. The behavior of the causal model is interpreted as equivalent to the behavior of the physical system being modeled. An overview of the functionality of the simulator and the model-based event-driven simulation paradigm on which it is based is provided, including high-level descriptions of the following key properties: event consumption and event creation, iterative simulation, and synchronization and filtering of monitoring data from the physical system. Finally, how EDSE stands with respect to the relevant open issues of discrete-event and model-based simulation is discussed.
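The event consumption/creation cycle described for EDSE is the core of any discrete-event simulator: pop the earliest pending event, run its handler, and let the handler schedule future events. The sketch below illustrates that generic loop with a priority queue; it mirrors the paradigm, not EDSE's actual implementation.

```python
import heapq

class EventDrivenSimulator:
    """Minimal discrete-event loop: consume the earliest pending event,
    let its handler create (schedule) future events, and repeat until the
    queue is empty or the time horizon is reached."""

    def __init__(self):
        self._queue = []   # heap of (time, seq, handler, payload)
        self._seq = 0      # tie-breaker so equal-time events stay ordered
        self.now = 0.0

    def schedule(self, delay, handler, payload=None):
        """Create a future event `delay` time units from now."""
        self._seq += 1
        heapq.heappush(self._queue, (self.now + delay, self._seq, handler, payload))

    def run(self, until):
        """Consume events in time order up to the horizon `until`."""
        while self._queue and self._queue[0][0] <= until:
            self.now, _, handler, payload = heapq.heappop(self._queue)
            handler(self, payload)  # the handler may schedule new events

# Example: a simulated sensor reading that re-arms itself every 2 time units,
# roughly analogous to iterating a model forward to predict future behavior.
readings = []
def sample(sim, _):
    readings.append(sim.now)
    sim.schedule(2.0, sample)

sim = EventDrivenSimulator()
sim.schedule(0.0, sample)
sim.run(until=7.0)   # readings collected at t = 0, 2, 4, 6
```

Synchronization with monitoring data, as in SELMON, would amount to injecting observed values as externally scheduled events into the same queue.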
Zavalko, I M; Rasskazova, E I; Gordeev, S A; Palatov, S Iu; Kovrov, G V
2013-01-01
The purpose of the research was to study the effect of long-term isolation on night sleep. The data were collected during "Mars-500", an international ground simulation of an interplanetary manned flight. Polysomnographic recordings of six healthy men were performed before, four times during, and after the 520-day confinement. During the isolation, sleep efficiency and delta latency decreased, while sleep latency increased. Post-hoc analysis demonstrated significant differences between the baseline measure and the last measure during isolation (1.5 months before the end of the experiment). The frequency of nights with low sleep efficiency rose on the eve of events important to the crew (the simulated Mars landing and the end of the confinement). Two weeks after the landing simulation, the number of nights with low sleep efficiency significantly decreased. Therefore, anticipation of a significant event under conditions of long-term isolation may result in sleep worsening in previously healthy men, predominantly as difficulty getting to sleep.
NASA Astrophysics Data System (ADS)
Thomas, Marion Y.; Lapusta, Nadia; Noda, Hiroyuki; Avouac, Jean-Philippe
2014-03-01
Physics-based numerical simulations of earthquakes and slow slip, coupled with field observations and laboratory experiments, can, in principle, be used to determine fault properties and potential fault behaviors. Because of the computational cost of simulating inertial wave-mediated effects, their representation is often simplified. The quasi-dynamic (QD) approach approximately accounts for inertial effects through a radiation damping term. We compare QD and fully dynamic (FD) simulations by exploring the long-term behavior of rate-and-state fault models with and without additional weakening during seismic slip. The models incorporate a velocity-strengthening (VS) patch in a velocity-weakening (VW) zone, to consider rupture interaction with a slip-inhibiting heterogeneity. Without additional weakening, the QD and FD approaches generate qualitatively similar slip patterns with quantitative differences, such as slower slip velocities and rupture speeds during earthquakes and more propensity for rupture arrest at the VS patch in the QD cases. Simulations with additional coseismic weakening produce qualitatively different patterns of earthquakes, with near-periodic pulse-like events in the FD simulations and much larger crack-like events accompanied by smaller events in the QD simulations. This is because the FD simulations with additional weakening allow earthquake rupture to propagate at a much lower level of prestress than the QD simulations. The resulting much larger ruptures in the QD simulations are more likely to propagate through the VS patch, unlike for the cases with no additional weakening. Overall, the QD approach should be used with caution, as the QD simulation results could drastically differ from the true response of the physical model considered.
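The radiation damping term mentioned in the abstract replaces full wave-mediated stress transfer with a linear penalty on slip velocity: the quasi-dynamic stress balance is tau_load - eta*V = sigma*f(V, theta), with eta = mu/(2*c_s) and f the rate-and-state friction coefficient. The sketch below solves this balance for V at one fault point; all parameter values are standard textbook assumptions, not taken from the paper's models.

```python
import math

# Assumed rate-and-state and elastic parameters (illustrative values only)
A, B = 0.010, 0.015          # direct-effect and state-evolution coefficients
F0, V0, L = 0.6, 1e-6, 1e-2  # reference friction, ref. velocity (m/s), state length (m)
SIGMA = 50e6                 # effective normal stress (Pa)
MU, CS = 30e9, 3000.0        # shear modulus (Pa), shear wave speed (m/s)
ETA = MU / (2 * CS)          # radiation damping coefficient (Pa s/m)

def friction(v, theta):
    """Rate-and-state shear strength for slip rate v and state variable theta."""
    return SIGMA * (F0 + A * math.log(v / V0) + B * math.log(V0 * theta / L))

def solve_velocity(tau_load, theta, v_lo=1e-20, v_hi=10.0):
    """Bisection (in log space) for V in: tau_load - ETA*V = friction(V, theta).

    The residual tau_load - ETA*V - friction(V, theta) decreases
    monotonically in V, so a sign change brackets the unique root."""
    def residual(v):
        return tau_load - ETA * v - friction(v, theta)
    for _ in range(200):
        v_mid = math.sqrt(v_lo * v_hi)
        if residual(v_mid) > 0:
            v_lo = v_mid
        else:
            v_hi = v_mid
    return math.sqrt(v_lo * v_hi)
```

A fully dynamic simulation instead tracks the wave-mediated stress history explicitly, which is what allows rupture to propagate at the lower prestress levels noted in the abstract.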
Yiu, Sean; Farewell, Vernon T; Tom, Brian D M
2017-08-01
Many psoriatic arthritis patients do not progress to permanent joint damage in any of the 28 hand joints, even under prolonged follow-up. This has led several researchers to fit models that estimate the proportion of stayers (those who do not have the propensity to experience the event of interest) and to characterize the rate of developing damaged joints in the movers (those who have the propensity to experience the event of interest). However, when fitted to the same data, the paper demonstrates that the choice of model for the movers can lead to widely varying conclusions on a stayer population, thus implying that, if interest lies in a stayer population, a single analysis should not generally be adopted. The aim of the paper is to provide greater understanding regarding estimation of a stayer population by comparing the inferences, performance and features of multiple fitted models to real and simulated data sets. The models for the movers are based on Poisson processes with patient level random effects and/or dynamic covariates, which are used to induce within-patient correlation, and observation level random effects are used to account for time varying unobserved heterogeneity. The gamma, inverse Gaussian and compound Poisson distributions are considered for the random effects.
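The mover-stayer structure described above can be illustrated by simulation: with some probability an individual is a stayer (count fixed at zero); movers accrue events from a Poisson process whose rate carries a gamma-distributed frailty, inducing the within-patient correlation the abstract mentions. All parameter values below are hypothetical.

```python
import random

def simulate_counts(n, p_stayer, shape, mean_rate, followup, seed=0):
    """Simulate event counts under a mover-stayer model with gamma frailty.

    Stayers (probability `p_stayer`) never experience the event. Each
    mover draws a mean-1 gamma frailty (shape `shape`) multiplying the
    base rate, then accrues a Poisson count over `followup` years,
    simulated as the number of exponential inter-event gaps that fit in
    the follow-up window.
    """
    rng = random.Random(seed)
    counts = []
    for _ in range(n):
        if rng.random() < p_stayer:
            counts.append(0)           # stayer: no propensity for the event
            continue
        frailty = rng.gammavariate(shape, 1.0 / shape)  # mean 1, var 1/shape
        rate = frailty * mean_rate
        t, k = 0.0, 0
        while True:                    # Poisson process via exponential gaps
            t += rng.expovariate(rate)
            if t > followup:
                break
            k += 1
        counts.append(k)
    return counts

# Hypothetical: 40% stayers, gamma shape 2, base rate 0.3/yr, 10-yr follow-up
counts = simulate_counts(20_000, p_stayer=0.4, shape=2.0,
                         mean_rate=0.3, followup=10.0)
```

The identifiability problem the paper highlights is visible here: observed zeros mix true stayers with movers who happen to have no events, so different frailty distributions can attribute the zeros differently.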
What Reliability Engineers Should Know about Space Radiation Effects
NASA Technical Reports Server (NTRS)
DiBari, Rebecca
2013-01-01
Space radiation presents unique failure modes and considerations for reliability engineers of space systems. Radiation effects are not a one-size-fits-all field. The threat conditions that must be addressed for a given mission depend on the mission's orbital profile, the technologies of the parts used in critical functions, and application considerations such as supply voltages, temperature, duty cycle, and redundancy. In general, the threats are of two types: the cumulative degradation mechanisms of total ionizing dose (TID) and displacement damage (DD), and the prompt responses of components to ionizing particles (protons and heavy ions) falling under the heading of single-event effects. The cumulative degradation mechanisms generally behave like wear-out mechanisms on any active component in a system. Total Ionizing Dose (TID) and Displacement Damage: (1) TID affects all active devices over time. Devices can fail either because of parametric shifts that prevent the device from fulfilling its application or because the device stops functioning altogether. Since this failure mode varies from part to part and lot to lot, lot qualification testing with sufficient statistics is vital. Displacement damage failures are caused by the displacement of semiconductor atoms from their lattice positions. As with TID, failures can be either parametric or catastrophic, although parametric degradation is more common for displacement damage. Lot testing is critical not just to assure proper device functionality throughout the mission; it can also suggest remediation strategies when a device fails. This paper will look at these effects on a variety of devices in a variety of applications. (2) On the NEAR mission, a functional failure was traced to a PIN diode failure caused by TID-induced high leakage currents.
NEAR was able to recover from the failure by reversing the current of a nearby thermoelectric cooler (turning the TEC into a heater). The elevated temperature caused the PIN diode to anneal and the device to recover; it was through lot qualification testing that NEAR knew the diode would recover when annealed. Single Event Effects (SEE): (1) In contrast to TID and displacement damage, single event effects (SEE) resemble random failures. SEE modes range from changes in device logic (single-event upset, or SEU) and temporary disturbances (single-event transients, or SET) to catastrophic destructive modes such as single-event latchup (SEL), single-event gate rupture (SEGR), and single-event burnout (SEB). (2) The consequences of nondestructive SEE modes such as SEU and SET depend critically on the application and may range from trivial nuisance errors to catastrophic loss of mission. It is critical not just to ensure that potentially susceptible devices are well characterized for their susceptibility, but also to work with design engineers to understand the implications of each error mode. For destructive SEE, the predominant risk-mitigation strategy is to avoid susceptible parts or, if that is not possible, to avoid conditions under which the part may be susceptible. Destructive SEE mechanisms are often not well understood, and testing is slow and expensive, making rate prediction very challenging. (3) Because the consequences of radiation failure and degradation modes depend so critically on the application as well as the component technology, it is essential that radiation, component, design, and system engineers work together, preferably starting early in the program, to ensure critical applications are addressed in time to optimize the probability of mission success.
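The SEE characterization mentioned above is commonly summarized by fitting a four-parameter Weibull curve of cross-section versus linear energy transfer (LET), sigma(L) = sigma_sat * (1 - exp(-((L - L0)/W)^s)) for L above the onset L0; on-orbit rates then follow from folding this curve into the environment's LET spectrum. The sketch below evaluates such a fit with entirely assumed parameter values, not data for any particular device.

```python
import math

def weibull_cross_section(let, sigma_sat, let_onset, width, shape):
    """Weibull fit commonly used for SEU cross-section versus LET.

    `let` and `let_onset` are in MeV-cm^2/mg; `sigma_sat` is the
    saturation cross-section (cm^2/bit). Below the onset LET the device
    is taken to be insensitive. All parameter values used with this
    function here are illustrative assumptions.
    """
    if let <= let_onset:
        return 0.0
    return sigma_sat * (1.0 - math.exp(-((let - let_onset) / width) ** shape))

# Assumed fit: saturation 1e-8 cm^2/bit, onset LET 2, width 20, shape 1.5
sigma_at_40 = weibull_cross_section(40.0, 1e-8, 2.0, 20.0, 1.5)
```

A low onset LET and large saturation cross-section flag a part as a candidate for the avoidance strategies discussed above.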