Sample records for source simulating single-event

  1. Modeling of single event transients with dual double-exponential current sources: Implications for logic cell characterization

    DOE PAGES

    Black, Dolores Archuleta; Robinson, William H.; Wilcox, Ian Zachary; ...

    2015-08-07

    Single event effects (SEE) are a reliability concern for modern microelectronics. Bit corruptions can be caused by single event upsets (SEUs) in the storage cells or by sampling single event transients (SETs) from a logic path. An accurate prediction of soft error susceptibility from SETs requires good models to convert collected charge into compact descriptions of the current injection process. This paper describes a simple, yet effective, method to model the current waveform resulting from a charge collection event for SET circuit simulations. The model uses two double-exponential current sources in parallel, and the results illustrate why a conventional model based on one double-exponential source can be incomplete. Furthermore, a small set of logic cells with varying input conditions, drive strength, and output loading are simulated to extract the parameters for the dual double-exponential current sources. As a result, the parameters are based upon both the node capacitance and the restoring current (i.e., drive strength) of the logic cell.
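
    The waveform described above can be sketched directly. The following is a minimal illustration (not code from the paper) of two double-exponential current sources acting in parallel on a struck node; all time constants, amplitudes, and the function name double_exp are illustrative assumptions.

    import numpy as np

    def double_exp(t, I0, tau_rise, tau_fall):
        """One double-exponential current pulse: I(t) = I0 * (exp(-t/tau_fall) - exp(-t/tau_rise))."""
        return I0 * (np.exp(-t / tau_fall) - np.exp(-t / tau_rise))

    # Hypothetical parameters: a fast, high-amplitude component plus a slower,
    # lower-amplitude component injected in parallel on the struck node.
    t = np.linspace(0.0, 2e-9, 2001)                      # 0 to 2 ns
    i_fast = double_exp(t, I0=1.2e-3, tau_rise=5e-12, tau_fall=50e-12)
    i_slow = double_exp(t, I0=0.3e-3, tau_rise=50e-12, tau_fall=500e-12)
    i_total = i_fast + i_slow                             # the dual-source SET waveform

    # The collected charge is the time integral of the injected current.
    q_fc = np.sum(i_total) * (t[1] - t[0]) * 1e15
    print(f"peak current {i_total.max()*1e3:.2f} mA, collected charge {q_fc:.1f} fC")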

  2. Stochastic summation of empirical Green's functions

    USGS Publications Warehouse

    Wennerberg, Leif

    1990-01-01

    Two simple strategies are presented that use random delay times for repeatedly summing the record of a relatively small earthquake to simulate the effects of a larger earthquake. The simulations do not assume any fault plane geometry or rupture dynamics, but rely only on the ω−2 spectral model of an earthquake source and elementary notions of source complexity. The strategies simulate ground motions for all frequencies within the bandwidth of the record of the event used as a summand. The first strategy, which introduces the basic ideas, is a single-stage procedure that consists of simply adding many small events with random time delays. The probability distribution for delays has the property that its amplitude spectrum is determined by the ratio of ω−2 spectra, and its phase spectrum is identically zero. A simple expression is given for the computation of this zero-phase scaling distribution. The moment rate function resulting from the single-stage simulation is quite simple and hence is probably not realistic for high-frequency (>1 Hz) ground motion of events larger than ML ∼ 4.5 to 5. The second strategy is a two-stage summation that simulates source complexity with a few random subevent delays determined using the zero-phase scaling distribution, and then clusters energy around these delays to get an ω−2 spectrum for the sum. Thus, the two-stage strategy allows simulations of complex events of any size for which the ω−2 spectral model applies. Interestingly, a single-stage simulation with too few ω−2 records to get a good fit to an ω−2 large-event target spectrum yields a record whose spectral asymptotes are consistent with the ω−2 model, but that includes a region in its spectrum between the corner frequencies of the larger and smaller events that is reasonably approximated by a power law trend. This spectral feature has also been discussed as reflecting the process of partial stress release (Brune, 1970), an asperity failure (Boatwright, 1984), or the breakdown of ω−2 scaling due to rupture significantly longer than the width of the seismogenic zone (Joyner, 1984).
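
    The single-stage strategy lends itself to a compact numerical sketch. The code below is a schematic illustration only (not the authors' implementation): it builds a zero-phase delay distribution from the ratio of two assumed Brune ω−2 spectra, draws random delays from it, and sums shifted copies of a small-event record; the corner frequencies, moment ratio, and the overall amplitude normalization are all illustrative assumptions.

    import numpy as np

    def brune_spectrum(f, moment, fc):
        """Omega-squared (Brune) amplitude spectrum."""
        return moment / (1.0 + (f / fc) ** 2)

    def zero_phase_delay_pdf(n, dt, fc_small, fc_large, moment_ratio):
        """Discrete delay distribution whose amplitude spectrum is the large/small
        spectral ratio and whose phase spectrum is identically zero."""
        f = np.fft.rfftfreq(n, dt)
        ratio = brune_spectrum(f, moment_ratio, fc_large) / brune_spectrum(f, 1.0, fc_small)
        pulse = np.fft.fftshift(np.fft.irfft(ratio, n))   # zero phase -> real, symmetric pulse
        pulse = np.clip(pulse, 0.0, None)                 # discard small negative ripples
        return pulse / pulse.sum()

    def single_stage_sum(small_record, dt, fc_small, fc_large, moment_ratio, n_sum, rng):
        """Sum n_sum randomly delayed copies of the small-event record."""
        n = len(small_record)
        pdf = zero_phase_delay_pdf(n, dt, fc_small, fc_large, moment_ratio)
        delays = rng.choice(n, size=n_sum, p=pdf)         # random delays, in samples
        out = np.zeros(2 * n)
        for d in delays:
            out[d:d + n] += small_record
        return out * (moment_ratio / n_sum)               # amplitude scaling is schematic here

    rng = np.random.default_rng(0)
    small = rng.standard_normal(4096)                     # stand-in for a recorded small event
    large = single_stage_sum(small, dt=0.01, fc_small=2.0, fc_large=0.3,
                             moment_ratio=300.0, n_sum=500, rng=rng)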

  3. Single Event Upset Analysis: On-orbit performance of the Alpha Magnetic Spectrometer Digital Signal Processor Memory aboard the International Space Station

    NASA Astrophysics Data System (ADS)

    Li, Jiaqiang; Choutko, Vitaly; Xiao, Liyi

    2018-03-01

    Based on the collection of error data from the Alpha Magnetic Spectrometer (AMS) Digital Signal Processors (DSP), on-orbit Single Event Upsets (SEUs) of the DSP program memory are analyzed. The daily error distribution and time intervals between errors are calculated to evaluate the reliability of the system. The particle density distribution of the International Space Station (ISS) orbit is presented and the effects from the South Atlantic Anomaly (SAA) and the geomagnetic poles are analyzed. The impact of solar events on the DSP program memory is analyzed by combining data analysis and Monte Carlo (MC) simulation. From the analysis and simulation results, it is concluded that the area corresponding to the SAA is the main source of errors on the ISS orbit. Solar events can also cause errors in the DSP program memory, but the effect depends on the on-orbit particle density.

  4. SU-G-IeP4-12: Performance of In-111 Coincident Gamma-Ray Counting: A Monte Carlo Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pahlka, R; Kappadath, S; Mawlawi, O

    2016-06-15

    Purpose: The decay of In-111 results in a non-isotropic gamma-ray cascade, which is normally imaged using a gamma camera. Creating images with a gamma camera using coincident gamma-rays from In-111 has not been previously studied. Our objective was to explore the feasibility of imaging this cascade as coincidence events and to determine the optimal timing resolution and source activity using Monte Carlo simulations. Methods: GEANT4 was used to simulate the decay of the In-111 nucleus and to model the gamma camera. Each photon emission was assigned a timestamp, and the time delay and angular separation for the second gamma-ray in the cascade was consistent with the known intermediate state half-life of 85 ns. The gamma-rays were transported through a model of a Siemens dual head Symbia "S" gamma camera with a 5/8-inch thick crystal and medium energy collimators. A true coincident event was defined as a single 171 keV gamma-ray followed by a single 245 keV gamma-ray within a specified time window (or vice versa). Several source activities (ranging from 10 µCi to 5 mCi) with and without incorporation of background counts were then simulated. Each simulation was analyzed using varying time windows to assess random events. The noise equivalent count rate (NECR) was computed based on the number of true and random counts for each combination of activity and time window. No scatter events were assumed since sources were simulated in air. Results: As expected, increasing the timing window increased the total number of observed coincidences, albeit at the expense of true coincidences. A timing window range of 200–500 ns maximizes the NECR at clinically used source activities. The background rate did not significantly alter the maximum NECR. Conclusion: This work suggests coincident measurements of In-111 gamma-ray decay can be performed with commercial gamma cameras at clinically relevant activities. Work is ongoing to assess useful clinical applications.
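
    For readers unfamiliar with the noise equivalent count rate, the sketch below shows the kind of calculation the abstract refers to: true coincidences limited by the 85 ns intermediate-state lifetime versus accidental coincidences that grow with the timing window. It is an illustration under stated assumptions, not the authors' analysis; the singles rates, true-coincidence fraction, and the factor-of-two randoms convention are hypothetical.

    import numpy as np

    def necr(trues, randoms, scatters=0.0):
        """Noise equivalent count rate; scatter is zero here because sources were in air."""
        return trues ** 2 / (trues + scatters + 2.0 * randoms)

    def random_rate(singles_a, singles_b, tau_window):
        """Accidental coincidence rate between two singles streams for a window of width tau."""
        return 2.0 * tau_window * singles_a * singles_b

    r171, r245 = 5.0e4, 4.0e4          # singles rates in counts/s (hypothetical)
    true_pairs = 0.02 * r171           # rate of genuine cascade pairs seen by both detections (hypothetical)
    mean_life = 85e-9 / np.log(2.0)    # 85 ns half-life -> ~123 ns mean life of the intermediate state
    for tau in (100e-9, 200e-9, 500e-9, 1000e-9):
        trues = true_pairs * (1.0 - np.exp(-tau / mean_life))   # fraction of cascades inside the window
        randoms = random_rate(r171, r245, tau)
        print(f"window {tau*1e9:4.0f} ns -> NECR {necr(trues, randoms):8.1f} cps")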

  5. Single-event burnout hardening of planar power MOSFET with partially widened trench source

    NASA Astrophysics Data System (ADS)

    Lu, Jiang; Liu, Hainan; Cai, Xiaowu; Luo, Jiajun; Li, Bo; Li, Binhong; Wang, Lixin; Han, Zhengsheng

    2018-03-01

    We present a single-event burnout (SEB) hardened planar power MOSFET with partially widened trench sources, studied by three-dimensional (3D) numerical simulation. The advantage of the proposed structure is that the action of the parasitic bipolar transistor inherent in the power MOSFET is effectively suppressed due to the elimination of the most sensitive region (the P-well region below the N+ source). The simulation results show that the proposed structure can enhance the SEB survivability significantly. The critical value of linear energy transfer (LET), which indicates the maximum deposited charge on the device without SEB behavior, increases from 0.06 to 0.7 pC/μm. The SEB threshold voltage increases to 120 V, which is 80% of the rated breakdown voltage. Meanwhile, the main parameter characteristics of the proposed structure remain similar to those of the conventional planar structure. Therefore, this structure offers a potential optimization path for planar power MOSFETs with high SEB survivability for space and atmospheric applications. Project supported by the National Natural Science Foundation of China (Nos. 61404161, 61404068, 61404169).

  6. DETECTORS AND EXPERIMENTAL METHODS: Equivalent properties of single event burnout induced by different sources

    NASA Astrophysics Data System (ADS)

    Yang, Shi-Yu; Cao, Zhou; Da, Dao-An; Xue, Yu-Xiong

    2009-05-01

    The experimental results of single event burnout induced by heavy ions and 252Cf fission fragments in power MOSFET devices have been investigated. It is concluded that the characteristics of single event burnout induced by 252Cf fission fragments are consistent with those induced by heavy ions. The power MOSFET in the "turn-off" state is more susceptible to single event burnout than it is in the "turn-on" state. The thresholds of the drain-source voltage for single event burnout induced by 173 MeV bromine ions and 252Cf fission fragments are close to each other, and the burnout cross section is sensitive to variation of the drain-source voltage above the threshold of single event burnout. In addition, the current waveforms of single event burnouts induced by different sources are similar. Different power MOSFET devices may have different probabilities for the occurrence of single event burnout.

  7. Characterizing scintillator detector response for correlated fission experiments with MCNP and associated packages

    DOE PAGES

    Andrews, M. T.; Rising, M. E.; Meierbachtol, K.; ...

    2018-06-15

    When multiple neutrons are emitted in a fission event they are correlated in both energy and their relative angle, which may impact the design of safeguards equipment and other instrumentation for non-proliferation applications. The most recent release of MCNP 6.2 contains the capability to simulate correlated fission neutrons using the event generators CGMF and FREYA. These radiation transport simulations will be post-processed by the detector response code, DRiFT, and compared directly to correlated fission measurements. DRiFT has previously been compared to single detector measurements; its capabilities have recently been expanded with correlated fission simulations in mind. Finally, this paper details updates to DRiFT specific to correlated fission measurements, including tracking the source particle energy of all detector events (and non-events), expanded output formats, and digitizer waveform generation.

  8. Characterizing scintillator detector response for correlated fission experiments with MCNP and associated packages

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Andrews, M. T.; Rising, M. E.; Meierbachtol, K.

    When multiple neutrons are emitted in a fission event they are correlated in both energy and their relative angle, which may impact the design of safeguards equipment and other instrumentation for non-proliferation applications. The most recent release of MCNP 6.2 contains the capability to simulate correlated fission neutrons using the event generators CGMF and FREYA. These radiation transport simulations will be post-processed by the detector response code, DRiFT, and compared directly to correlated fission measurements. DRiFT has previously been compared to single detector measurements; its capabilities have recently been expanded with correlated fission simulations in mind. Finally, this paper details updates to DRiFT specific to correlated fission measurements, including tracking the source particle energy of all detector events (and non-events), expanded output formats, and digitizer waveform generation.

  9. Evaluating average and atypical response in radiation effects simulations

    NASA Astrophysics Data System (ADS)

    Weller, R. A.; Sternberg, A. L.; Massengill, L. W.; Schrimpf, R. D.; Fleetwood, D. M.

    2003-12-01

    We examine the limits of performing single-event simulations using pre-averaged radiation events. Geant4 simulations show the necessity, for future devices, of supplementing current methods with ensemble averaging of device-level responses to physically realistic radiation events. Initial Monte Carlo simulations have generated a significant number of extremal events in local energy deposition. These simulations strongly suggest that proton strikes of sufficient energy, even those that initiate purely electronic interactions, can produce device responses capable, in principle, of causing single-event upset or microdose damage in highly scaled devices.

  10. Simulation of Rate-Related (Dead-Time) Losses In Passive Neutron Multiplicity Counting Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Evans, L.G.; Norman, P.I.; Leadbeater, T.W.

    Passive Neutron Multiplicity Counting (PNMC) based on Multiplicity Shift Register (MSR) electronics (a form of time correlation analysis) is a widely used non-destructive assay technique for quantifying spontaneously fissile materials such as Pu. At high event rates, dead-time losses perturb the count rates, with the Singles, Doubles and Triples being increasingly affected. Without correction these perturbations are a major source of inaccuracy in the measured count rates and the assay values derived from them. This paper presents the simulation of dead-time losses and investigates the effect of applying different dead-time models on the observed MSR data. Monte Carlo methods have been used to simulate neutron pulse trains for a variety of source intensities and with ideal detection geometry, providing an event-by-event record of the time distribution of neutron captures within the detection system. The action of the MSR electronics was modelled in software to analyse these pulse trains. Stored pulse trains were perturbed in software to apply the effects of dead-time according to the chosen physical process; for example, the ideal paralysable (extending) and non-paralysable models with an arbitrary dead-time parameter. Results of the simulations demonstrate the change in the observed MSR data when the system dead-time parameter is varied. In addition, the paralysable and non-paralysable models of dead-time are compared. These results form part of a larger study to evaluate existing dead-time corrections and to extend their application to correlated sources.
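
    The two dead-time models named above differ only in whether a lost event restarts the dead period. The snippet below is a small illustrative sketch of that distinction applied to a simulated pulse train; the event rate and dead-time parameter are arbitrary assumptions, not values from the study.

    import numpy as np

    def apply_dead_time(times, tau, paralysable=True):
        """Return the event times that survive a dead-time filter of width tau (seconds).

        Non-paralysable: the system is blind for tau after each accepted event.
        Paralysable (extending): every event, accepted or not, restarts the dead period.
        """
        accepted = []
        blocked_until = -np.inf
        for t in np.sort(times):
            if t >= blocked_until:
                accepted.append(t)
                blocked_until = t + tau
            elif paralysable:
                blocked_until = t + tau      # a lost event still extends the dead period
        return np.asarray(accepted)

    # Hypothetical pulse train: Poisson-distributed capture times at 2e5 events/s for 1 s.
    rng = np.random.default_rng(1)
    rate, t_meas = 2.0e5, 1.0
    times = np.sort(rng.uniform(0.0, t_meas, rng.poisson(rate * t_meas)))
    for label, flag in (("non-paralysable", False), ("paralysable", True)):
        kept = apply_dead_time(times, tau=1e-6, paralysable=flag)
        print(f"{label}: observed rate {len(kept)/t_meas:.0f} cps (true rate {rate:.0f} cps)")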

  11. LUXSim: A component-centric approach to low-background simulations

    DOE PAGES

    Akerib, D. S.; Bai, X.; Bedikian, S.; ...

    2012-02-13

    Geant4 has been used throughout the nuclear and high-energy physics community to simulate energy depositions in various detectors and materials. These simulations have mostly been run with a source beam outside the detector. In the case of low-background physics, however, a primary concern is the effect on the detector from radioactivity inherent in the detector parts themselves. From this standpoint, there is no single source or beam, but rather a collection of sources with potentially complicated spatial extent. LUXSim is a simulation framework used by the LUX collaboration that takes a component-centric approach to event generation and recording. A new set of classes allows for multiple radioactive sources to be set within any number of components at run time, with the entire collection of sources handled within a single simulation run. Various levels of information can also be recorded from the individual components, with these record levels also being set at run time. This flexibility in both source generation and information recording is possible without the need to recompile, reducing the complexity of code management and the proliferation of versions. Within the code itself, casting geometry objects within this new set of classes rather than as the default Geant4 classes automatically extends this flexibility to every individual component. No additional work is required on the part of the developer, reducing development time and increasing confidence in the results. Here, we describe the guiding principles behind LUXSim, detail some of its unique classes and methods, and give examples of usage.
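
    The component-centric idea can be illustrated with a short, deliberately simplified sketch. This is not the LUXSim or Geant4 API; the class names, Bq activities, and record levels below are invented purely to show sources and record levels being attached to individual components at run time.

    import numpy as np

    class Component:
        """A detector part that carries its own radioactive sources and record level."""
        def __init__(self, name, record_level=0):
            self.name = name
            self.record_level = record_level
            self.sources = []                          # (isotope, activity_bq), set at run time

        def add_source(self, isotope, activity_bq):
            self.sources.append((isotope, activity_bq))

    def generate_events(components, livetime_s, rng=None):
        """Sample a Poisson number of decays from every source in every component."""
        rng = rng or np.random.default_rng(0)
        events = []
        for comp in components:
            for isotope, activity in comp.sources:
                for _ in range(rng.poisson(activity * livetime_s)):
                    events.append({"component": comp.name, "isotope": isotope,
                                   "record_level": comp.record_level})
        return events

    pmt = Component("PMT_array", record_level=2)
    pmt.add_source("U238", 0.5)                        # Bq, hypothetical
    cryostat = Component("cryostat")
    cryostat.add_source("Co60", 0.1)                   # Bq, hypothetical
    print(len(generate_events([pmt, cryostat], livetime_s=10.0)))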

  12. Toward Improving Predictability of Extreme Hydrometeorological Events: the Use of Multi-scale Climate Modeling in the Northern High Plains

    NASA Astrophysics Data System (ADS)

    Munoz-Arriola, F.; Torres-Alavez, J.; Mohamad Abadi, A.; Walko, R. L.

    2014-12-01

    Our goal is to investigate possible sources of predictability of hydrometeorological extreme events in the Northern High Plains. Hydrometeorological extreme events are considered the most costly natural phenomena. Water deficits and surpluses highlight how the water-climate interdependence becomes crucial in areas where single activities drive economies, such as agriculture in the NHP. Although we recognize the water-climate interdependence and the regulatory role that human activities play, we still grapple with identifying what sources of predictability could be added to flood and drought forecasts. To identify the benefit of multi-scale climate modeling and the role of initial conditions in flood and drought predictability in the NHP, we use the Ocean Land Atmospheric Model (OLAM). OLAM is characterized by a dynamic core with a global geodesic grid of hexagonal (and variably refined) mesh cells, a finite-volume discretization of the full compressible Navier-Stokes equations, and a cut-grid cell method for topography (which reduces errors in gradient computation and anomalous vertical dispersion). Our hypothesis is that wet conditions will drive OLAM's simulations of precipitation toward wetter conditions, affecting both flood and drought forecasts. To test this hypothesis we simulate precipitation during identified historical flood events followed by drought events in the NHP (i.e., the years 2011-2012). We initialized OLAM with CFS data 1-10 days prior to a flooding event (as initial conditions) to explore (1) short-term, high-resolution and (2) long-term, coarse-resolution simulations of flood and drought events, respectively. While floods are assessed during a maximum of 15 days of refined-mesh simulations, drought is evaluated during the following 15 months. Simulated precipitation will be compared with the Sub-continental Observation Dataset, a gridded 1/16th-degree resolution dataset obtained from climatological stations in Canada, the US, and Mexico. This in-progress research will ultimately contribute to integrating the OLAM and VIC models and improving predictability of extreme hydrometeorological events.

  13. A Tool for Low Noise Procedures Design and Community Noise Impact Assessment: The Rotorcraft Noise Model (RNM)

    NASA Technical Reports Server (NTRS)

    Conner, David A.; Page, Juliet A.

    2002-01-01

    To improve aircraft noise impact modeling capabilities and to provide a tool to aid in the development of low noise terminal area operations for rotorcraft and tiltrotors, the Rotorcraft Noise Model (RNM) was developed by the NASA Langley Research Center and Wyle Laboratories. RNM is a simulation program that predicts how sound will propagate through the atmosphere and accumulate at receiver locations on flat ground or varying terrain, for single and multiple vehicle flight operations. At the core of RNM are the vehicle noise sources, input as sound hemispheres. As the vehicle "flies" along its prescribed flight trajectory, the source sound propagation is simulated and accumulated at the receiver locations (single points of interest or multiple grid points) in a systematic time-based manner. These sound signals at the receiver locations may then be analyzed to obtain single event footprints, integrated noise contours, time histories, or numerous other features. RNM may also be used to generate spectral time history data over a ground mesh for the creation of single event sound animation videos. Acoustic properties of the noise source(s) are defined in terms of sound hemispheres that may be obtained from theoretical predictions, wind tunnel experimental results, flight test measurements, or a combination of the three. The sound hemispheres may contain broadband data (source levels as a function of one-third octave band) and pure-tone data (in the form of specific frequency sound pressure levels and phase). A PC executable version of RNM is publicly available and has been adopted by a number of organizations for Environmental Impact Assessment studies of rotorcraft noise. This paper provides a review of the required input data, the theoretical framework of RNM's propagation model, and the output results. Code validation results are provided from a NATO helicopter noise flight test as well as a tiltrotor flight test program that used RNM as a tool to aid in the development of low noise approach profiles.

  14. New Perspectives on Long Run-out Rock Avalanches: A Dynamic Analysis of 20 Events in the Vaigat Strait, West Greenland

    NASA Astrophysics Data System (ADS)

    Benjamin, J.; Rosser, N. J.; Dunning, S.; Hardy, R. J.; Karim, K.; Szczucinski, W.; Norman, E. C.; Strzelecki, M.; Drewniak, M.

    2014-12-01

    Risk assessments of the threat posed by rock avalanches rely upon numerical modelling of potential run-out and spreading, and are contingent upon a thorough understanding of the flow dynamics inferred from deposits left by previous events. Few records exist of multiple rock avalanches with boundary conditions sufficiently consistent to develop a set of more generalised rules for behaviour across events. A unique cluster of 20 large (3 × 10⁶ to 94 × 10⁶ m³) rock avalanche deposits along the Vaigat Strait, West Greenland, offers a rare opportunity to model a large sample of adjacent events sourced from a stretch of coastal mountains of relatively uniform geology and structure. Our simulations of these events were performed using VolcFlow, a geophysical mass flow code developed to simulate volcanic debris avalanches. Rheological calibration of the model was performed using a well-constrained event at Paatuut (AD 2000). The best-fit simulation assumes a constant retarding stress with a collisional stress coefficient (T0 = 250 kPa, ξ = 0.01), and simulates run-out to within ±0.3% of that observed. Despite being widely used to simulate rock avalanche propagation, other models, which assume either a Coulomb frictional or a Voellmy rheology, failed to reproduce the observed event characteristics and deposit distribution at Paatuut. We applied this calibration to 19 other events, simulating rock avalanche motion across 3D terrain of varying levels of complexity. Our findings illustrate the utility and sensitivity of modelling a single rock avalanche satisfactorily as a function of rheology, alongside the validity of applying the same parameters elsewhere, even within similar boundary conditions. VolcFlow can plausibly account for the observed morphology of a series of deposits emplaced by events of different types, although its performance is sensitive to a range of topographic and geometric factors. These exercises show encouraging results in the model's ability to simulate a series of events using a single set of parameters obtained by back-analysis of the Paatuut event alone. The results also hold important implications for our process understanding of rock avalanches in confined fjord settings, where correctly modelling material flux at the point of entry into the water is critical for tsunami generation.

  15. Multiple Solutions of Real-time Tsunami Forecasting Using Short-term Inundation Forecasting for Tsunamis Tool

    NASA Astrophysics Data System (ADS)

    Gica, E.

    2016-12-01

    The Short-term Inundation Forecasting for Tsunamis (SIFT) tool, developed by NOAA Center for Tsunami Research (NCTR) at the Pacific Marine Environmental Laboratory (PMEL), is used in forecast operations at the Tsunami Warning Centers in Alaska and Hawaii. The SIFT tool relies on a pre-computed tsunami propagation database, real-time DART buoy data, and an inversion algorithm to define the tsunami source. The tsunami propagation database is composed of 50 km × 100 km unit sources, simulated basin-wide for at least 24 hours. Different combinations of unit sources, DART buoys, and length of real-time DART buoy data can generate a wide range of results within the defined tsunami source. For an inexperienced SIFT user, the primary challenge is to determine which solution, among multiple solutions for a single tsunami event, would provide the best forecast in real time. This study investigates how the use of different tsunami sources affects simulated tsunamis at tide gauge locations. Using the tide gauge at Hilo, Hawaii, a total of 50 possible solutions for the 2011 Tohoku tsunami are considered. Maximum tsunami wave amplitude and root mean square error results are used to compare tide gauge data and the simulated tsunami time series. Results of this study will facilitate SIFT users' efforts to determine if the simulated tide gauge tsunami time series from a specific tsunami source solution would be within the range of possible solutions. This study will serve as the basis for investigating more historical tsunami events and tide gauge locations.
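
    A sketch of the kind of comparison the study describes, ranking candidate source solutions against a tide gauge record by maximum amplitude and root mean square error, is given below. The waveforms are synthetic stand-ins; the function rank_solutions and all numbers are assumptions for illustration, not SIFT output.

    import numpy as np

    def rank_solutions(observed, simulated_set):
        """Rank candidate source solutions against an observed tide-gauge time series.

        observed: 1-D array of observed amplitudes; simulated_set: dict name -> same-length array.
        Returns (name, rmse, max_amplitude_error) tuples sorted by RMSE.
        """
        scores = []
        for name, sim in simulated_set.items():
            rmse = np.sqrt(np.mean((sim - observed) ** 2))
            max_err = abs(sim.max() - observed.max())
            scores.append((name, rmse, max_err))
        return sorted(scores, key=lambda s: s[1])

    t = np.linspace(0.0, 6 * 3600.0, 720)                     # six hours at 30 s sampling
    obs = 0.8 * np.sin(2 * np.pi * t / 1800.0) * np.exp(-t / 7200.0)
    candidates = {f"solution_{k}": (0.6 + 0.1 * k) * np.sin(2 * np.pi * t / 1800.0) * np.exp(-t / 7200.0)
                  for k in range(4)}
    for name, rmse, max_err in rank_solutions(obs, candidates):
        print(f"{name}: RMSE {rmse:.3f} m, max-amplitude error {max_err:.3f} m")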

  16. Simulation of the Tsunami Resulting from the M 9.2 2004 Sumatra-Andaman Earthquake - Dynamic Rupture vs. Seismic Inversion Source Model

    NASA Astrophysics Data System (ADS)

    Vater, Stefan; Behrens, Jörn

    2017-04-01

    Simulations of historic tsunami events such as the 2004 Sumatra or the 2011 Tohoku event are usually initialized using earthquake sources resulting from inversion of seismic data. Also, other data from ocean buoys etc. is sometimes included in the derivation of the source model. The associated tsunami event can often be well simulated in this way, and the results show high correlation with measured data. However, it is unclear how the derived source model compares to the particular earthquake event. In this study we use the results from dynamic rupture simulations obtained with SeisSol, a software package based on an ADER-DG discretization solving the spontaneous dynamic earthquake rupture problem with high-order accuracy in space and time. The tsunami model is based on a second-order Runge-Kutta discontinuous Galerkin (RKDG) scheme on triangular grids and features a robust wetting and drying scheme for the simulation of inundation events at the coast. Adaptive mesh refinement enables the efficient computation of large domains, while at the same time it allows for high local resolution and geometric accuracy. The results are compared to measured data and results using earthquake sources based on inversion. With the approach of using the output of actual dynamic rupture simulations, we can estimate the influence of different earthquake parameters. Furthermore, the comparison to other source models enables a thorough comparison and validation of important tsunami parameters, such as the runup at the coast. This work is part of the ASCETE (Advanced Simulation of Coupled Earthquake and Tsunami Events) project, which aims at an improved understanding of the coupling between the earthquake and the generated tsunami event.

  17. Modeling Single-Event Transient Propagation in a SiGe BiCMOS Direct-Conversion Receiver

    NASA Astrophysics Data System (ADS)

    Ildefonso, Adrian; Song, Ickhyun; Tzintzarov, George N.; Fleetwood, Zachary E.; Lourenco, Nelson E.; Wachter, Mason T.; Cressler, John D.

    2017-08-01

    The propagation of single-event transient (SET) signals in a silicon-germanium direct-conversion receiver carrying modulated data is explored. A theoretical analysis of transient propagation, verified by simulation, is presented. A new methodology to characterize and quantify the impact of SETs in communication systems carrying modulated data is proposed. The proposed methodology uses a pulsed radiation source to induce distortions in the signal constellation. The error vector magnitude due to SETs can then be calculated to quantify errors. Two different modulation schemes were simulated: QPSK and 16-QAM. The distortions in the constellation diagram agree with the presented circuit theory. Furthermore, the proposed methodology was applied to evaluate the improvements in the SET response due to a known radiation-hardening-by-design (RHBD) technique, where the common-base device of the low-noise amplifier was operated in inverse mode. The proposed methodology can be a valid technique to determine the most sensitive parts of a system carrying modulated data.
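
    The error vector magnitude calculation mentioned above is straightforward to sketch. The following illustration compares ideal QPSK symbols with a burst in which an SET is represented as a short decaying complex transient added to a few symbols; the transient shape, noise level, and symbol count are illustrative assumptions rather than values from the paper.

    import numpy as np

    def evm_percent(received, ideal):
        """RMS error vector magnitude, normalized to the RMS ideal symbol power."""
        err = received - ideal
        return 100.0 * np.sqrt(np.mean(np.abs(err) ** 2) / np.mean(np.abs(ideal) ** 2))

    rng = np.random.default_rng(2)
    symbols = rng.integers(0, 4, 1000)
    ideal = np.exp(1j * (np.pi / 4 + np.pi / 2 * symbols))        # unit-energy QPSK constellation
    received = ideal + 0.02 * (rng.standard_normal(1000) + 1j * rng.standard_normal(1000))

    hit = 500                                                     # symbol index where the SET lands
    transient = 0.5 * np.exp(-np.arange(20) / 5.0) * np.exp(1j * 0.3)   # hypothetical SET envelope
    received[hit:hit + 20] += transient
    print(f"EVM with SET: {evm_percent(received, ideal):.2f}%")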

  18. Medium-energy heavy-ion single-event-burnout imaging of power MOSFETs

    NASA Astrophysics Data System (ADS)

    Musseau, O.; Torres, A.; Campbell, A. B.; Knudson, A. R.; Buchner, S.; Fischer, B.; Schlogl, M.; Briand, P.

    1999-12-01

    We present the first experimental determination of the SEB sensitive area in a power MOSFET irradiated with a high-LET heavy-ion microbeam. We used a spectroscopy technique to perform coincident measurements of the charge collected in both the source and drain junctions, together with a nondestructive technique (current limitation). The resulting charge collection images are related to the physical structure of the individual cells. These experimental data reveal the complex 3-dimensional behavior of a real structure, which cannot easily be simulated using available tools. As the drain voltage is increased, the onset of burnout is reached, characterized by a sudden change in the charge collection image. "Hot spots" are observed where the collected charge reaches its maximum value. Those spots, due to burnout triggering events, correspond to areas where the silicon is degraded through thermal effects along a single ion track. This direct observation of SEB sensitive areas has applications either for device hardening, by modifying doping profiles or the layout of the cells, or for code calibration and device simulation.

  19. Joint independent component analysis for simultaneous EEG-fMRI: principle and simulation.

    PubMed

    Moosmann, Matthias; Eichele, Tom; Nordby, Helge; Hugdahl, Kenneth; Calhoun, Vince D

    2008-03-01

    An optimized scheme for the fusion of electroencephalography and event related potentials with functional magnetic resonance imaging (BOLD-fMRI) data should simultaneously assess all available electrophysiologic and hemodynamic information in a common data space. In doing so, it should be possible to identify features of latent neural sources whose trial-to-trial dynamics are jointly reflected in both modalities. We present a joint independent component analysis (jICA) model for analysis of simultaneous single trial EEG-fMRI measurements from multiple subjects. We outline the general idea underlying the jICA approach and present results from simulated data under realistic noise conditions. Our results indicate that this approach is a feasible and physiologically plausible data-driven way to achieve spatiotemporal mapping of event related responses in the human brain.
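
    A toy version of the joint ICA idea can be written in a few lines: per-trial EEG and fMRI feature vectors are concatenated and unmixed together, so each recovered component has an EEG part and an fMRI part sharing one trial-loading vector. This sketch uses scikit-learn's FastICA on simulated data and is not the authors' pipeline; the dimensions and noise levels are arbitrary assumptions.

    import numpy as np
    from sklearn.decomposition import FastICA

    rng = np.random.default_rng(3)
    n_trials, n_eeg, n_fmri = 200, 64, 500

    # Two latent sources whose trial-to-trial amplitudes drive BOTH modalities.
    amplitudes = rng.standard_normal((n_trials, 2))
    eeg = amplitudes @ rng.standard_normal((2, n_eeg)) + 0.5 * rng.standard_normal((n_trials, n_eeg))
    fmri = amplitudes @ rng.standard_normal((2, n_fmri)) + 0.5 * rng.standard_normal((n_trials, n_fmri))

    # Joint ICA: concatenate feature-wise, then unmix along the feature dimension so each
    # independent component is a joint (EEG + fMRI) map with a shared trial-loading vector.
    X = np.hstack([eeg, fmri])                     # (n_trials, n_eeg + n_fmri)
    ica = FastICA(n_components=2, random_state=0)
    joint_maps = ica.fit_transform(X.T).T          # (2, n_eeg + n_fmri)
    trial_loadings = ica.mixing_                   # (n_trials, 2), common to both modalities
    eeg_part, fmri_part = joint_maps[:, :n_eeg], joint_maps[:, n_eeg:]
    print(trial_loadings.shape, eeg_part.shape, fmri_part.shape)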

  20. Collision-Induced Dissociation of Electrosprayed NaCl Clusters: Using Molecular Dynamics Simulations to Visualize Reaction Cascades in the Gas Phase

    NASA Astrophysics Data System (ADS)

    Schachel, Tilo D.; Metwally, Haidy; Popa, Vlad; Konermann, Lars

    2016-11-01

    Infusion of NaCl solutions into an electrospray ionization (ESI) source produces [Na_(n+1)Cl_n]+ and other gaseous clusters. The n = 4, 13, 22 magic number species have cuboid ground state structures and exhibit elevated abundance in ESI mass spectra. Relatively few details are known regarding the mechanisms whereby these clusters undergo collision-induced dissociation (CID). The current study examines to what extent molecular dynamics (MD) simulations can be used to garner insights into the sequence of events taking place during CID. Experiments on singly charged clusters reveal that the loss of small neutrals is the dominant fragmentation pathway. MD simulations indicate that the clusters undergo extensive structural fluctuations prior to decomposition. Consistent with the experimentally observed behavior, most of the simulated dissociation events culminate in ejection of small neutrals ([NaCl]_i, with i = 1, 2, 3). The MD data reveal that the prevalence of these dissociation channels is linked to the presence of short-lived intermediates where a relatively compact core structure carries a small [NaCl]_i protrusion. The latter can separate from the parent cluster via cleavage of a single Na-Cl contact. Fragmentation events of this type are kinetically favored over other dissociation channels that would require the quasi-simultaneous rupture of multiple electrostatic contacts. The CID behavior of NaCl cluster ions bears interesting analogies to that of collisionally activated protein complexes. Overall, it appears that MD simulations represent a valuable tool for deciphering the dissociation of noncovalently bound systems in the gas phase.

  1. Layout-aware simulation of soft errors in sub-100 nm integrated circuits

    NASA Astrophysics Data System (ADS)

    Balbekov, A.; Gorbunov, M.; Bobkov, S.

    2016-12-01

    A Single Event Transient (SET) caused by a charged particle traveling through the sensitive volume of an integrated circuit (IC) may in some cases lead to errors in digital circuits. In technologies below 180 nm, a single particle can affect multiple devices, causing multiple SETs. This adds complexity to fault-tolerant design, because schematic-level design techniques become ineffective unless the layout is considered. The most common layout mitigation technique is spatial separation of the sensitive nodes of hardened circuits. Spatial separation decreases circuit performance and increases power consumption. Spacing should thus be kept reasonable, and its scaling follows the trend of device dimension scaling. This paper presents the development of a SET simulation approach comprising SPICE simulation with a "double-exponential" current source as the SET model. The technique uses the layout in GDSII format to locate nearby devices that can be affected by a single particle and that can share the generated charge. The developed software tool automates multiple simulations and gathers the produced data to present it as a sensitivity map. Examples of conducted simulations of fault-tolerant cells and their sensitivity maps are presented in this paper.
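
    The layout step described above, finding devices close enough to share charge from one strike, reduces to a simple geometric search once device coordinates are available. The sketch below uses hard-coded coordinates in place of GDSII parsing; the device names, positions, and charge-sharing radius are hypothetical.

    import itertools
    import numpy as np

    def charge_sharing_pairs(devices, radius_um):
        """Return pairs of devices whose layout separation is below the charge-sharing radius.

        devices: dict {name: (x_um, y_um)}; in practice the coordinates would come from the
        GDSII layout, here they are placeholders.
        """
        pairs = []
        for (a, pa), (b, pb) in itertools.combinations(devices.items(), 2):
            if np.hypot(pa[0] - pb[0], pa[1] - pb[1]) < radius_um:
                pairs.append((a, b))
        return pairs

    devices = {"M1": (0.0, 0.0), "M2": (0.8, 0.3), "M3": (3.5, 0.1), "M4": (0.9, 1.1)}
    for a, b in charge_sharing_pairs(devices, radius_um=1.5):
        # Each pair would then be simulated in SPICE with simultaneous double-exponential
        # current sources, contributing one entry to the sensitivity map.
        print(f"simulate correlated SET on {a} and {b}")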

  2. Landquake dynamics inferred from seismic source inversion: Greenland and Sichuan events of 2017

    NASA Astrophysics Data System (ADS)

    Chao, W. A.

    2017-12-01

    In June 2017 two catastrophic landquake events occurred, in Greenland and Sichuan. The Greenland event led to a tsunami hazard in the small town of Nuugaarsiaq. The landquake in Sichuan buried a town, resulting in over 100 deaths. Both events generated strong seismic signals recorded by the real-time global seismic network. I adopt an inversion algorithm to derive the landquake force time history (LFH) using the long-period waveforms, from which the landslide volume (approximately 76 million m³) can be rapidly estimated, facilitating tsunami-wave modeling for early warning purposes. Based on an integrated approach involving tsunami forward simulation and seismic waveform inversion, this study has significant implications for issuing actionable warnings before hazardous tsunami waves strike populated areas. A two single-force (SF) mechanism (two-block model) yields the best explanation for the Sichuan event, which suggests that a secondary event (seismically inferred volume: 8.2 million m³) may have been mobilized by the impact of collapsing mass from the initial rock avalanche (approximately 5.8 million m³), likely causing a catastrophic disaster. The later source, with a force magnitude of 0.9967 × 10¹¹ N, occurred approximately 70 seconds after the first mass movement. In contrast, the first event has a smaller force magnitude of 0.8116 × 10¹¹ N. In conclusion, seismically inferred physical parameters will substantially contribute to improving our understanding of landquake source mechanisms and to mitigating similar hazards in other parts of the world.

  3. Medium-energy heavy-ion single-event-burnout imaging of power MOSFETs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Musseau, O.; Torres, A.; Campbell, A.B.

    The authors present the first experimental determination of the SEB sensitive area in a power MOSFET irradiated with a high-LET heavy-ion microbeam. They used a spectroscopy technique to perform coincident measurements of the charge collected in both the source and drain junctions, together with a non-destructive technique (current limitation). The resulting charge collection images are related to the physical structure of the individual cells. These experimental data reveal the complex 3-dimensional behavior of a real structure, which cannot easily be simulated using available tools. As the drain voltage is increased, the onset of burnout is reached, characterized by a sudden change in the charge collection image. Hot spots are observed where the collected charge reaches its maximum value. Those spots, due to burnout triggering events, correspond to areas where the silicon is degraded through thermal effects along a single ion track. This direct observation of SEB sensitive areas has applications either for device hardening, by modifying doping profiles or the layout of the cells, or for code calibration and device simulation.

  4. [Estimation of urban non-point source pollution loading and its factor analysis in the Pearl River Delta].

    PubMed

    Liao, Yi-Shan; Zhuo, Mu-Ning; Li, Ding-Qiang; Guo, Tai-Long

    2013-08-01

    In the Pearl River Delta region, urban rivers have been seriously polluted, and the input of non-point source pollution materials, such as chemical oxygen demand (COD), into rivers cannot be neglected. During 2009-2010, the water quality at eight different catchments in the Fenjiang River of Foshan city was monitored, and the COD loads for eight rivulet sewages were calculated for different rainfall conditions. Several interesting results were obtained. Rainfall and land-use type played important roles in the COD loading, with rainfall having the greater influence. Consequently, a COD loading formula was constructed, defined as a function of runoff and land-use type derived from the SCS model and a land-use map, respectively. COD loading could be evaluated and predicted with the constructed formula. The mean simulation accuracy for a single rainfall event was 75.51%. Long-term simulation accuracy was better than that for single rainfall events. In 2009, the estimated COD loading and its loading intensity were 8053 t and 339 kg·(hm²·a)⁻¹, respectively, and industrial land was regarded as the main source area of COD pollution. Severe non-point source pollution such as COD in the Fenjiang River must receive more attention in the future.
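
    The abstract names the SCS (curve number) model as the runoff driver of the COD loading formula; the paper's fitted coefficients are not given, so the sketch below combines the standard SCS runoff equation with hypothetical event mean concentrations per land-use type purely to show the structure of such a calculation.

    def scs_runoff_mm(rainfall_mm, curve_number):
        """SCS curve-number runoff depth (mm) with the standard 0.2*S initial abstraction."""
        s = 25400.0 / curve_number - 254.0            # potential maximum retention (mm)
        ia = 0.2 * s
        if rainfall_mm <= ia:
            return 0.0
        return (rainfall_mm - ia) ** 2 / (rainfall_mm - ia + s)

    def cod_load_kg(rainfall_mm, area_ha, curve_number, emc_mg_per_l):
        """Event COD load (kg) = runoff volume x event mean concentration (EMC)."""
        runoff_m = scs_runoff_mm(rainfall_mm, curve_number) / 1000.0
        volume_m3 = runoff_m * area_ha * 1.0e4        # 1 ha = 10,000 m2
        return volume_m3 * emc_mg_per_l / 1000.0      # (mg/L * m3) = g, then to kg

    # Hypothetical land-use parameters (curve number, EMC in mg/L); NOT the paper's fitted values.
    landuse = {"industrial": (92, 180.0), "residential": (85, 90.0), "green space": (65, 25.0)}
    for name, (cn, emc) in landuse.items():
        load = cod_load_kg(rainfall_mm=40.0, area_ha=100.0, curve_number=cn, emc_mg_per_l=emc)
        print(f"{name}: {load:.0f} kg COD for a 40 mm event over 100 ha")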

  5. The Seismicity of the Central Apennines Region Studied by Means of a Physics-Based Earthquake Simulator

    NASA Astrophysics Data System (ADS)

    Console, R.; Vannoli, P.; Carluccio, R.

    2016-12-01

    The application of a physics-based earthquake simulation algorithm to the central Apennines region, where the 24 August 2016 Amatrice earthquake occurred, allowed the compilation of a synthetic seismic catalog lasting 100 ky, and containing more than 500,000 M ≥ 4.0 events, without the limitations that real catalogs suffer in terms of completeness, homogeneity and time duration. The algorithm on which this simulator is based is constrained by several physical elements, such as: (a) an average slip rate for every single fault in the investigated fault systems, (b) the process of rupture growth and termination, leading to a self-organized earthquake magnitude distribution, and (c) interaction between earthquake sources, including small magnitude events. Events nucleated in one fault are allowed to expand into neighboring faults, even belonging to a different fault system, if they are separated by less than a given maximum distance. The seismogenic model upon which we applied the simulator code was derived from the DISS 3.2.0 database (http://diss.rm.ingv.it/diss/), selecting all the fault systems that are recognized in the central Apennines region, for a total of 24 fault systems. The application of our simulation algorithm provides typical features in the time, space and magnitude behavior of the seismicity, which are comparable with those of real observations. These features include long-term periodicity and clustering of strong earthquakes, and a realistic earthquake magnitude distribution departing from the linear Gutenberg-Richter distribution in the moderate and higher magnitude range. The statistical distribution of earthquakes with M ≥ 6.0 on single faults exhibits a fairly clear pseudo-periodic behavior, with a coefficient of variation Cv of the order of 0.3-0.6. We found in our synthetic catalog a clear trend of long-term acceleration of seismic activity preceding M ≥ 6.0 earthquakes and quiescence following those earthquakes. Lastly, as an example of a possible use of synthetic catalogs, an attenuation law was applied to all the events reported in the synthetic catalog to produce maps showing the exceedance probability of given values of peak ground acceleration (PGA) on the territory under investigation.
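
    The pseudo-periodicity statistic quoted above (a coefficient of variation Cv of roughly 0.3-0.6) is simple to reproduce on any catalog. The snippet below computes Cv from inter-event times of M ≥ 6.0 events on a single fault; for illustration the occurrence times are drawn from a gamma renewal process standing in for a synthetic catalog, with an assumed mean recurrence and target Cv.

    import numpy as np

    def interevent_cv(event_times):
        """Coefficient of variation of inter-event times (about 1 for Poisson, <1 for quasi-periodic)."""
        dt = np.diff(np.sort(event_times))
        return dt.std() / dt.mean()

    rng = np.random.default_rng(4)
    mean_recurrence_yr, cv_target = 900.0, 0.4        # assumed values for the stand-in catalog
    times = np.cumsum(rng.gamma(shape=1.0 / cv_target ** 2,
                                scale=mean_recurrence_yr * cv_target ** 2, size=120))
    print(f"Cv of inter-event times: {interevent_cv(times):.2f}")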

  6. Evaluation of PET Imaging Resolution Using 350 μm Pixelated CZT as a VP-PET Insert Detector

    NASA Astrophysics Data System (ADS)

    Yin, Yongzhi; Chen, Ximeng; Li, Chongzheng; Wu, Heyu; Komarov, Sergey; Guo, Qingzhen; Krawczynski, Henric; Meng, Ling-Jian; Tai, Yuan-Chuan

    2014-02-01

    A cadmium-zinc-telluride (CZT) detector with 350 μm pitch pixels was studied in high-resolution positron emission tomography (PET) imaging applications. The PET imaging system was based on coincidence detection between a CZT detector and a lutetium oxyorthosilicate (LSO)-based Inveon PET detector in virtual-pinhole PET geometry. The LSO detector is a 20 × 20 array with 1.6 mm pitch and 10 mm thickness. The CZT detector uses a 20 × 20 × 5 mm substrate, with 350 μm pitch pixelated anodes and a coplanar cathode. A NEMA NU4 Na-22 point source of 250 μm in diameter was imaged by this system. Experiments show that the image resolution of single-pixel photopeak events was 590 μm FWHM, while the image resolution of double-pixel photopeak events was 640 μm FWHM. The inclusion of double-pixel full-energy events increased the sensitivity of the imaging system. To validate the imaging experiment, we conducted a Monte Carlo (MC) simulation of the same PET system in the Geant4 Application for Emission Tomography (GATE). We defined the LSO detectors as a scanner ring and the 350 μm pixelated CZT detectors as an insert ring. GATE-simulated coincidence data were sorted into an insert-scanner sinogram and reconstructed. The image resolution of the MC-simulated data (which did not factor in positron range and acollinearity effects) was 460 μm FWHM for single-pixel events. The image resolutions of the experimental data, the MC-simulated data, and the theoretical calculation are all close to 500 μm FWHM when the proposed 350 μm pixelated CZT detector is used as a PET insert. The interpolation algorithm for charge sharing events was also investigated. The PET image reconstructed using the interpolation algorithm shows improved image resolution compared with that reconstructed without the interpolation algorithm.

  7. Design and simulation of ion optics for ion sources for production of singly charged ions

    NASA Astrophysics Data System (ADS)

    Zelenak, A.; Bogomolov, S. L.

    2004-05-01

    During the last two years, different types of singly charged ion sources were developed for new FLNR (JINR) projects, such as the Dubna radioactive ion beams project (Phase I and Phase II), the production of a tritium ion beam, and the MASHA mass separator. Ion optics simulations were performed for a 2.45 GHz electron cyclotron resonance source, an RF source, and a plasma ion source. In this article the design and simulation results for the optics of the new ion sources are presented. The simulation results are compared with measurements obtained during the experiments.

  8. MCViNE- An object oriented Monte Carlo neutron ray tracing simulation package

    DOE PAGES

    Lin, J. Y. Y.; Smith, Hillary L.; Granroth, Garrett E.; ...

    2015-11-28

    MCViNE (Monte-Carlo VIrtual Neutron Experiment) is an open-source Monte Carlo (MC) neutron ray-tracing software package for performing computer modeling and simulations that mirror real neutron scattering experiments. We exploited the close similarity between how instrument components are designed and operated and how such components can be modeled in software. For example, we used object-oriented programming concepts for representing neutron scatterers and detector systems, and recursive algorithms for implementing multiple scattering. Combining these features in MCViNE allows one to handle sophisticated neutron scattering problems in modern instruments, including, for example, neutron detection by complex detector systems, and single and multiple scattering events in a variety of samples and sample environments. In addition, MCViNE can use simulation components from linear-chain-based MC ray tracing packages, which facilitates porting instrument models from those codes. Furthermore, it allows for components written solely in Python, which expedites prototyping of new components. These developments have enabled detailed simulations of neutron scattering experiments, with non-trivial samples, for time-of-flight inelastic instruments at the Spallation Neutron Source. Examples of such simulations for powder and single-crystal samples with various scattering kernels, including kernels for phonon and magnon scattering, are presented. As a result, with simulations that closely reproduce experimental results, scattering mechanisms can be turned on and off to determine how they contribute to the measured scattering intensities, improving our understanding of the underlying physics.

  9. Microdose Induced Drain Leakage Effects in Power Trench MOSFETs: Experiment and Modeling

    NASA Astrophysics Data System (ADS)

    Zebrev, Gennady I.; Vatuev, Alexander S.; Useinov, Rustem G.; Emeliyanov, Vladimir V.; Anashin, Vasily S.; Gorbunov, Maxim S.; Turin, Valentin O.; Yesenkov, Kirill A.

    2014-08-01

    We study experimentally and theoretically the microdose-induced drain-source leakage current in trench power MOSFETs under irradiation with high-LET heavy ions. We found experimentally that the cumulative increase of leakage current occurs by means of stochastic spikes, each corresponding to the strike of a single heavy ion into the MOSFET gate oxide. We simulate this effect with a proposed analytic model that describes (including via Monte Carlo methods) both the deterministic (cumulative dose) and stochastic (single event) aspects of the problem. Based on this model, a survival probability assessment for the space heavy-ion environment with high LETs is proposed.
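
    A minimal Monte Carlo sketch of the deterministic-plus-stochastic picture described above is given below: each device accumulates leakage through a Poisson number of ion strikes, each strike adding a random increment, and the survival probability is the fraction of devices staying below a leakage limit. All parameter values (sensitive area, per-strike increment, limit) are illustrative assumptions, not values from the paper.

    import numpy as np

    def survival_probability(fluence, area_cm2, mean_step_na, limit_na, n_devices, rng):
        """Fraction of simulated devices whose total leakage stays below limit_na."""
        hits = rng.poisson(fluence * area_cm2, size=n_devices)       # strikes per device
        leakage = np.array([rng.exponential(mean_step_na, k).sum() for k in hits])
        return np.mean(leakage < limit_na)

    rng = np.random.default_rng(5)
    for fluence in (1e5, 1e6, 1e7):                                   # ions/cm2
        p = survival_probability(fluence, area_cm2=1e-3, mean_step_na=5.0,
                                 limit_na=1000.0, n_devices=20000, rng=rng)
        print(f"fluence {fluence:.0e} /cm2 -> survival probability {p:.3f}")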

  10. Run-up Variability due to Source Effects

    NASA Astrophysics Data System (ADS)

    Del Giudice, Tania; Zolezzi, Francesca; Traverso, Chiara; Valfrè, Giulio; Poggi, Pamela; Parker, Eric J.

    2010-05-01

    This paper investigates the variability of tsunami run-up at a specific location due to uncertainty in earthquake source parameters. It is important to quantify this 'inter-event' variability for probabilistic assessments of tsunami hazard. In principle, this aspect of variability could be studied by comparing field observations at a single location from a number of tsunamigenic events caused by the same source. As such an extensive dataset does not exist, we decided to study the inter-event variability through numerical modelling. We attempt to answer the question 'What is the potential variability of tsunami wave run-up at a specific site, for a given magnitude earthquake occurring at a known location?'. The uncertainty is expected to arise from the lack of knowledge regarding the specific details of the fault rupture 'source' parameters. The following steps were followed: the statistical distributions of the main earthquake source parameters affecting the tsunami height were established by studying fault plane solutions of known earthquakes; a case study based on a possible tsunami impact on the Egyptian coast was set up and simulated, varying the geometrical parameters of the source; simulation results were analyzed, deriving relationships between run-up height and source parameters; using the derived relationships, a Monte Carlo simulation was performed in order to create the dataset necessary to investigate the inter-event variability of the run-up height along the coast; and the inter-event variability of the run-up height along the coast was investigated. Given the distribution of source parameters and their variability, we studied how this variability propagates to the run-up height, using the Cornell 'Multi-grid coupled Tsunami Model' (COMCOT). The case study was based on the large thrust faulting offshore the south-western Greek coast, thought to have been responsible for the infamous 1303 tsunami. Numerical modelling of the event was used to assess the impact on the North African coast. The effects of uncertainty in fault parameters were assessed by perturbing the base model and observing the variation in wave height along the coast. The tsunami wave run-up was computed at 4020 locations along the Egyptian coast between longitudes 28.7 E and 33.8 E. To assess the effects of fault parameter uncertainty, input model parameters were varied and their effects on run-up analyzed. The simulations show that for a given point there are linear relationships between run-up and both fault dislocation and rupture length. A superposition analysis shows that a linear combination of the effects of the different source parameters (evaluated results) leads to a good approximation of the simulated results. This relationship is then used as the basis for a Monte Carlo simulation. The Monte Carlo simulation was performed for 1600 scenarios at each of the 4020 points along the coast. The coefficient of variation (the ratio between the standard deviation of the results and the average of the run-up heights along the coast) ranges between 0.14 and 3.11, with an average value along the coast equal to 0.67. The coefficient of variation of normalized run-up has been compared with the standard deviation of spectral acceleration attenuation laws used for probabilistic seismic hazard assessment studies. These values have a similar meaning, and the uncertainty in the two cases is similar. The 'rule of thumb' relationship between the mean and sigma can be expressed as μ + σ ≈ 2μ. The implication is that the uncertainty in run-up estimation should give a range of values within approximately two times the average. This uncertainty should be considered in tsunami hazard analysis, such as inundation and risk maps, evacuation plans, and other related steps.
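
    The final Monte Carlo step can be sketched compactly: at a given coastal point the run-up is approximated as a linear combination of the perturbed source parameters, the parameters are sampled, and the coefficient of variation is taken over the scenarios. The linear coefficients and parameter distributions below are hypothetical stand-ins for the point-by-point relationships derived from the COMCOT runs.

    import numpy as np

    rng = np.random.default_rng(6)
    n_scenarios = 1600

    # Hypothetical linear response at one coastal point: run-up = a*slip + b*length + c.
    a, b, c = 0.45, 0.015, 0.2                         # m per m of slip, m per km of length, m

    slip = rng.normal(8.0, 2.0, n_scenarios)           # fault dislocation (m), assumed distribution
    length = rng.normal(100.0, 25.0, n_scenarios)      # rupture length (km), assumed distribution
    runup = np.clip(a * slip + b * length + c, 0.0, None)

    cov = runup.std() / runup.mean()
    print(f"mean run-up {runup.mean():.2f} m, coefficient of variation {cov:.2f}")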

  11. The source of infrasound associated with long-period events at mount St. Helens

    USGS Publications Warehouse

    Matoza, R.S.; Garces, M.A.; Chouet, B.A.; D'Auria, L.; Hedlin, M.A.H.; De Groot-Hedlin, C.; Waite, G.P.

    2009-01-01

    During the early stages of the 2004-2008 Mount St. Helens eruption, the source process that produced a sustained sequence of repetitive long-period (LP) seismic events also produced impulsive broadband infrasonic signals in the atmosphere. To assess whether the signals could be generated simply by seismic-acoustic coupling from the shallow LP events, we perform finite difference simulation of the seismo-acoustic wavefield using a single numerical scheme for the elastic ground and atmosphere. The effects of topography, velocity structure, wind, and source configuration are considered. The simulations show that a shallow source buried in a homogeneous elastic solid produces a complex wave train in the atmosphere consisting of P/SV and Rayleigh wave energy converted locally along the propagation path, and acoustic energy originating from the source epicenter. Although the horizontal acoustic velocity of the latter is consistent with our data, the modeled amplitude ratios of pressure to vertical seismic velocity are too low in comparison with observations, and the characteristic differences in seismic and acoustic waveforms and spectra cannot be reproduced from a common point source. The observations therefore require a more complex source process in which the infrasonic signals are a record of only the broadband pressure excitation mechanism of the seismic LP events. The observations and numerical results can be explained by a model involving the repeated rapid pressure loss from a hydrothermal crack by venting into a shallow layer of loosely consolidated, highly permeable material. Heating by magmatic activity causes pressure to rise, periodically reaching the pressure threshold for rupture of the "valve" sealing the crack. Sudden opening of the valve generates the broadband infrasonic signal and simultaneously triggers the collapse of the crack, initiating resonance of the remaining fluid. Subtle waveform and amplitude variability of the infrasonic signals as recorded at an array 13.4 km to the NW of the volcano are attributed primarily to atmospheric boundary layer propagation effects, superimposed upon amplitude changes at the source. Copyright 2009 by the American Geophysical Union.

  12. Ion Velocity Distributions in Dipolarization Events: Beams in the Vicinity of the Plasma Sheet Boundary

    NASA Technical Reports Server (NTRS)

    Birn, J.; Chandler, M.; Moore, T.; Runov, A.

    2017-01-01

    Using combined MHD/test particle simulations, we further explore characteristic ion velocity distributions in relation to magnetotail reconnection and dipolarization events, focusing on distributions at and near the plasma sheet boundary layer (PSBL). Simulated distributions right at the boundary are characterized by a single earthward beam, as discussed earlier. However, farther inside, the distributions consist of multiple beams parallel and antiparallel to the magnetic field, remarkably similar to recent Magnetospheric Multiscale observations. The simulations provide insight into the mechanisms: the lowest earthward beam results from direct acceleration at an earthward propagating dipolarization front (DF), with a return beam at somewhat higher energy. A higher-energy earthward beam results from dual acceleration, first near the reconnection site and then at the DF, again with a corresponding return beam resulting from mirroring closer to Earth. Multiple acceleration at the X line or the propagating DF with intermediate bounces may produce even higher-energy beams. Particles contributing to the lower energy beams are found to originate from the PSBL with thermal source energies, increasing with increasing beam energy. In contrast, the highest-energy beams consist mostly of particles that have entered the acceleration region via cross-tail drift with source energies in the suprathermal range.

  13. Ion velocity distributions in dipolarization events: Beams in the vicinity of the plasma sheet boundary

    NASA Astrophysics Data System (ADS)

    Birn, J.; Chandler, M.; Moore, T.; Runov, A.

    2017-08-01

    Using combined MHD/test particle simulations, we further explore characteristic ion velocity distributions in relation to magnetotail reconnection and dipolarization events, focusing on distributions at and near the plasma sheet boundary layer (PSBL). Simulated distributions right at the boundary are characterized by a single earthward beam, as discussed earlier. However, farther inside, the distributions consist of multiple beams parallel and antiparallel to the magnetic field, remarkably similar to recent Magnetospheric Multiscale observations. The simulations provide insight into the mechanisms: the lowest earthward beam results from direct acceleration at an earthward propagating dipolarization front (DF), with a return beam at somewhat higher energy. A higher-energy earthward beam results from dual acceleration, first near the reconnection site and then at the DF, again with a corresponding return beam resulting from mirroring closer to Earth. Multiple acceleration at the X line or the propagating DF with intermediate bounces may produce even higher-energy beams. Particles contributing to the lower energy beams are found to originate from the PSBL with thermal source energies, increasing with increasing beam energy. In contrast, the highest-energy beams consist mostly of particles that have entered the acceleration region via cross-tail drift with source energies in the suprathermal range.

  14. Structural Heterogeneity and Quantitative FRET Efficiency Distributions of Polyprolines through a Hybrid Atomistic Simulation and Monte Carlo Approach

    PubMed Central

    Hoefling, Martin; Lima, Nicola; Haenni, Dominik; Seidel, Claus A. M.; Schuler, Benjamin; Grubmüller, Helmut

    2011-01-01

    Förster Resonance Energy Transfer (FRET) experiments probe molecular distances via distance dependent energy transfer from an excited donor dye to an acceptor dye. Single molecule experiments not only probe average distances, but also distance distributions or even fluctuations, and thus provide a powerful tool to study biomolecular structure and dynamics. However, the measured energy transfer efficiency depends not only on the distance between the dyes, but also on their mutual orientation, which is typically inaccessible to experiments. Thus, assumptions on the orientation distributions and averages are usually made, limiting the accuracy of the distance distributions extracted from FRET experiments. Here, we demonstrate that by combining single molecule FRET experiments with the mutual dye orientation statistics obtained from Molecular Dynamics (MD) simulations, improved estimates of distances and distributions are obtained. From the simulated time-dependent mutual orientations, FRET efficiencies are calculated and the full statistics of individual photon absorption, energy transfer, and photon emission events is obtained from subsequent Monte Carlo (MC) simulations of the FRET kinetics. All recorded emission events are collected to bursts from which efficiency distributions are calculated in close resemblance to the actual FRET experiment, taking shot noise fully into account. Using polyproline chains with attached Alexa 488 and Alexa 594 dyes as a test system, we demonstrate the feasibility of this approach by direct comparison to experimental data. We identified cis-isomers and different static local environments as sources of the experimentally observed heterogeneity. Reconstructions of distance distributions from experimental data at different levels of theory demonstrate how the respective underlying assumptions and approximations affect the obtained accuracy. Our results show that dye fluctuations obtained from MD simulations, combined with MC single photon kinetics, provide a versatile tool to improve the accuracy of distance distributions that can be extracted from measured single molecule FRET efficiencies. PMID:21629703
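
    The burst-averaging step described above can be illustrated with a minimal Monte Carlo sketch in Python. It assumes the standard Förster relation E = 1/(1 + (r/R0)^6) and treats each detected photon as a Bernoulli trial between acceptor and donor channels; the Förster radius, distance distribution, and burst size below are illustrative placeholders rather than values from the study, and orientation dynamics are ignored.

```python
import numpy as np

rng = np.random.default_rng(1)

R0 = 5.4         # nm, assumed Foerster radius for an Alexa 488/594 pair (illustrative)
n_bursts = 5000  # number of simulated photon bursts
burst_size = 50  # detected photons per burst (illustrative; real bursts vary)

# Illustrative stand-in for dye-dye distances sampled from an MD trajectory (nm).
r = rng.normal(4.5, 0.4, size=n_bursts)

# Foerster transfer efficiency for each burst-averaged distance.
E_true = 1.0 / (1.0 + (r / R0) ** 6)

# Shot noise: each detected photon is either an acceptor (transfer) or donor photon.
n_acceptor = rng.binomial(burst_size, E_true)
E_measured = n_acceptor / burst_size

hist, edges = np.histogram(E_measured, bins=40, range=(0.0, 1.0))
print("peak efficiency bin:", edges[np.argmax(hist)])
```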

  15. NeuroMatic: An Integrated Open-Source Software Toolkit for Acquisition, Analysis and Simulation of Electrophysiological Data

    PubMed Central

    Rothman, Jason S.; Silver, R. Angus

    2018-01-01

    Acquisition, analysis and simulation of electrophysiological properties of the nervous system require multiple software packages. This makes it difficult to conserve experimental metadata and track the analysis performed. It also complicates certain experimental approaches such as online analysis. To address this, we developed NeuroMatic, an open-source software toolkit that performs data acquisition (episodic, continuous and triggered recordings), data analysis (spike rasters, spontaneous event detection, curve fitting, stationarity) and simulations (stochastic synaptic transmission, synaptic short-term plasticity, integrate-and-fire and Hodgkin-Huxley-like single-compartment models). The merging of a wide range of tools into a single package facilitates a more integrated style of research, from the development of online analysis functions during data acquisition, to the simulation of synaptic conductance trains during dynamic-clamp experiments. Moreover, NeuroMatic has the advantage of working within Igor Pro, a platform-independent environment that includes an extensive library of built-in functions, a history window for reviewing the user's workflow and the ability to produce publication-quality graphics. Since its original release, NeuroMatic has been used in a wide range of scientific studies and its user base has grown considerably. NeuroMatic version 3.0 can be found at http://www.neuromatic.thinkrandom.com and https://github.com/SilverLabUCL/NeuroMatic. PMID:29670519

  16. Did a slump source cause the 1929 Grand Banks tsunami?

    NASA Astrophysics Data System (ADS)

    Løvholt, F.; Schulten, I.; Mosher, D.; Harbitz, C. B.; Krastel, S.

    2017-12-01

    On November 18, 1929, a Mw 7.2 earthquake occurred beneath the upper Laurentian Fan, south of Newfoundland. The earthquake displaced about 100 km3 of sediment volume that rapidly evolved into a turbidity current revealed by a series of successive telecommunication cable breaks. A tsunami with fatal consequences along the south coast of Newfoundland also resulted. This tsunami is attributed to sediment mass failure as no seafloor displacement due to the earthquake is observed or expected. Although sidescan sonar, sub-bottom profiler and modern multibeam data show surficial sediment slumping and translational slide activity in the upper part of the slope, no major headscarp, single evacuation area or large mass transport deposit are observed. Sediment mass failure has been interpreted as broadly distributed and shallow, likely occurring in a retrogressive fashion. The question remained, therefore, as to how such complex failure kinematics could generate a tsunami. The Grand Banks tsunami is the only landslide tsunami for which traces are found at transoceanic distances. Despite being a landmark event, only a couple of attempts to model the tsunami exist, and none of these have been able to match tsunami observations. Recently acquired seismic reflection data suggest that rotational slumping of a thick sediment mass (~500 m) on the St. Pierre Slope may have occurred, causing seafloor displacements (fault traces) up to 100 m in height. The previously mapped surficial failures were a consequence of slumping of the thicker mass. Here, we simulate tsunami generation using the new geophysical information to construct different tsunamigenic slump sources. In addition, we undertake simulations assuming a flowing surficial landslide. The numerical simulations show that the large and rapid vertical displacements of the slump render it more tsunamigenic than the alternative surficial landslide. The simulations using the slump source roughly comply with observations of large run-ups on the Burin Peninsula along the south coast of Newfoundland, in contrast to previous modelling attempts. As the source extent complies with new observations of rotational failures at the slope, the simulations suggest that a slump source is the most likely explanation for the large tsunami observations due to the Grand Banks event.

  17. Single-photon technique for the detection of periodic extraterrestrial laser pulses.

    PubMed

    Leeb, W R; Poppe, A; Hammel, E; Alves, J; Brunner, M; Meingast, S

    2013-06-01

    To draw humankind's attention to its existence, an extraterrestrial civilization could well direct periodic laser pulses toward Earth. We developed a technique capable of detecting a quasi-periodic light signal with an average of less than one photon per pulse within a measurement time of a few tens of milliseconds in the presence of the radiation emitted by an exoplanet's host star. Each of the electronic events produced by one or more single-photon avalanche detectors is tagged with precise time-of-arrival information and stored. From this we compute a histogram displaying the frequency of event-time differences in classes with bin widths on the order of a nanosecond. The existence of periodic laser pulses manifests itself in histogram peaks regularly spaced at multiples of the (a priori unknown) pulse repetition period. With laser sources simulating both the pulse source and the background radiation, we tested a detection system in the laboratory at a wavelength of 850 nm. We present histograms obtained from various recorded data sequences with the number of photons per pulse, the background photons per pulse period, and the recording time as main parameters. We then simulated a periodic signal hypothetically generated on a planet orbiting a G2V-type star (distance to Earth 500 light-years) and show that the technique is capable of detecting the signal even if the received pulses carry as little as one photon on average on top of the star's background light.
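
    A minimal sketch of the event-time-difference histogram described above is given below; the pulse period, photon rates, and timing jitter are illustrative assumptions, not the parameters of the laboratory test.

```python
import numpy as np

rng = np.random.default_rng(0)

period_ns = 2_000.0        # assumed pulse repetition period (illustrative)
t_max_ns = 20e6            # 20 ms recording
mean_signal_photons = 0.5  # average photons detected per pulse

# Signal photons: Poisson-distributed per pulse, with sub-ns timing jitter.
pulse_times = np.arange(0.0, t_max_ns, period_ns)
counts = rng.poisson(mean_signal_photons, size=pulse_times.size)
signal = np.repeat(pulse_times, counts) + rng.normal(0.0, 0.5, size=counts.sum())

# Background photons from the host star: uniformly distributed arrivals.
background = rng.uniform(0.0, t_max_ns, size=int(t_max_ns * 1e-4))

events = np.sort(np.concatenate([signal, background]))

# Histogram of event-time differences up to a maximum lag, with 1 ns bins.
max_lag_ns = 10_000.0
diffs = []
for i, t in enumerate(events):
    j = i + 1
    while j < events.size and events[j] - t <= max_lag_ns:
        diffs.append(events[j] - t)
        j += 1
hist, edges = np.histogram(diffs, bins=int(max_lag_ns), range=(0.0, max_lag_ns))
# Peaks appear near multiples of the (unknown) pulse period; skip the first few bins.
print("strongest lag:", edges[np.argmax(hist[10:]) + 10], "ns")
```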

  18. Multi-anode wire two dimensional proportional counter for detecting Iron-55 X-Ray Radiation

    NASA Astrophysics Data System (ADS)

    Weston, Michael William James

    Radiation detectors in many applications use small sensor areas or large tubes that collect only one-dimensional information. Some applications require analyzing a large area and locating specific elements, such as contamination on the heat tiles of a space shuttle or features on historical artifacts. The process can be time consuming, and scanning a large area in a single pass is beneficial. A two-dimensional multi-wire proportional counter provides a large detection window that yields positional information in a single pass. This thesis describes the design and implementation of an experimental detector to evaluate a specific design intended for use as a handheld instrument. The main effort of this research was to custom build a detector for testing purposes. The aluminum chamber and all circuit boards were custom designed and built specifically for this application. Various software and programmable logic algorithms were designed to analyze the raw data in real time and to determine which data were useful and which could be discarded. The research presented here provides results useful for designing an improved second-generation detector in the future. With the anode wire spacing chosen and the minimal collimation of the radiation source, detected events occurred all over the detection grid at any time. The raw event data did not make determining the source position easy, and further data correlation was required. Many samples contained multiple wire hits, which were not useful because they falsely indicated sources spread across the grid at different energy levels. By narrowing the results down to only the largest signal pairs on different axes in each event, a much more accurate estimate of the source position above the grid was obtained. The basic principle and construction method were shown to work; however, the gas selection, geometry, and anode wire construction proved to be poor. To provide a system optimized for a specific application would require detailed Monte Carlo simulations. These simulation results, together with the details and techniques implemented in this thesis, would provide a final instrument of much higher accuracy.

  19. Anthology of the Development of Radiation Transport Tools as Applied to Single Event Effects

    NASA Astrophysics Data System (ADS)

    Reed, R. A.; Weller, R. A.; Akkerman, A.; Barak, J.; Culpepper, W.; Duzellier, S.; Foster, C.; Gaillardin, M.; Hubert, G.; Jordan, T.; Jun, I.; Koontz, S.; Lei, F.; McNulty, P.; Mendenhall, M. H.; Murat, M.; Nieminen, P.; O'Neill, P.; Raine, M.; Reddell, B.; Saigné, F.; Santin, G.; Sihver, L.; Tang, H. H. K.; Truscott, P. R.; Wrobel, F.

    2013-06-01

    This anthology contains contributions from eleven different groups, each developing and/or applying Monte Carlo-based radiation transport tools to simulate a variety of effects that result from energy transferred to a semiconductor material by a single particle event. The topics span from basic mechanisms for single-particle induced failures to applied tasks like developing websites to predict on-orbit single event failure rates using Monte Carlo radiation transport tools.

  20. Methane and Environmental Change during the Paleocene-Eocene Thermal Maximum (PETM): Modeling the PETM Onset as a Two-stage Event

    NASA Technical Reports Server (NTRS)

    Carozza, David A.; Mysak, Lawrence A.; Schmidt, Gavin A.

    2011-01-01

    An atmospheric CH4 box model coupled to a global carbon cycle box model is used to constrain the carbon emission associated with the PETM and assess the role of CH4 during this event. A range of atmospheric and oceanic emission scenarios representing different amounts, rates, and isotopic signatures of emitted carbon are used to model the PETM onset. The first 3 kyr of the onset, a pre-isotope excursion stage, is simulated by the atmospheric release of 900 to 1100 Pg C of CH4 with a delta C-13 of -22 to -30 per mil. For a global average warming of 3 deg C, a release of CO2 to the ocean and CH4 to the atmosphere totalling 900 to 1400 Pg C, with a delta C-13 of -50 to -60 per mil, simulates the subsequent 1-kyr isotope excursion stage. To explain the observations, the carbon must have been released over at most 500 years. The first stage results cannot be associated with any known PETM hypothesis. However, the second stage results are consistent with a methane hydrate source. More than a single source of carbon is required to explain the PETM onset.

  1. Prediction of Intensity Change Subsequent to Concentric Eyewall Events

    NASA Astrophysics Data System (ADS)

    Mauk, Rachel Grant

    Concentric eyewall events have been documented numerous times in intense tropical cyclones over the last two decades. During a concentric eyewall event, an outer (secondary) eyewall forms around the inner (primary) eyewall. Improved instrumentation on aircraft and satellites greatly increases the likelihood of detecting an event. Despite the increased ability to detect such events, forecasts of intensity changes during and after these events remain poor. When concentric eyewall events occur near land, accurate intensity change predictions are especially critical to ensure proper emergency preparations and staging of recovery assets. A nineteen-year (1997-2015) database of concentric eyewall events is developed by analyzing microwave satellite imagery, aircraft- and land-based radar, and other published documents. Events are identified in both the North Atlantic and eastern North Pacific basins. Tropical cyclones (TCs) are categorized as single (1 event), serial (≥ 2 events), and super-serial (≥ 3 events). Key findings here include distinct spatial patterns for single and serial Atlantic TCs, a broad seasonal distribution for eastern North Pacific TCs, and apparent ENSO-related variability in both basins. The intensity change subsequent to the concentric eyewall event is calculated from the HURDAT2 database at time points relative to the start and to the end of the event. Intensity change is then categorized as Weaken (≤ -10 kt), Maintain (±5 kt), and Strengthen (≥ 10 kt). Environmental conditions in which each event occurred are analyzed based on the SHIPS diagnostic files. Oceanic, dynamic, thermodynamic, and TC status predictors are selected for testing in a multiple discriminant analysis procedure to determine which variables successfully discriminate the intensity change category and the occurrence of additional concentric eyewall events. Intensity models are created for 12 h, 24 h, 36 h, and 48 h after the concentric eyewall events end. Leave-one-out cross validation is performed on each set of discriminators to generate classifications, which are then compared to observations. For each model, the top combinations achieve 80-95% overall accuracy in classifying TCs based on the environmental characteristics, although Maintain systems are frequently misclassified. The third part of this dissertation employs the Weather Research and Forecasting model to further investigate concentric eyewall events. Two serial Atlantic concentric eyewall cases (Katrina 2005 and Wilma 2005) are selected from the original study set, and WRF simulations are performed using several model designs. Despite strong evidence from multiple sources that serial concentric eyewalls formed in both hurricanes, the WRF simulations did not produce identifiable concentric eyewall structures for Katrina, and only transient structures for Wilma. Possible reasons for the lack of concentric eyewall formation are discussed, including model resolution, microphysics, and data sources.

  2. Discretely Integrated Condition Event (DICE) Simulation for Pharmacoeconomics.

    PubMed

    Caro, J Jaime

    2016-07-01

    Several decision-analytic modeling techniques are in use for pharmacoeconomic analyses. Discretely integrated condition event (DICE) simulation is proposed as a unifying approach that has been deliberately designed to meet the modeling requirements in a straightforward transparent way, without forcing assumptions (e.g., only one transition per cycle) or unnecessary complexity. At the core of DICE are conditions that represent aspects that persist over time. They have levels that can change and many may coexist. Events reflect instantaneous occurrences that may modify some conditions or the timing of other events. The conditions are discretely integrated with events by updating their levels at those times. Profiles of determinant values allow for differences among patients in the predictors of the disease course. Any number of valuations (e.g., utility, cost, willingness-to-pay) of conditions and events can be applied concurrently in a single run. A DICE model is conveniently specified in a series of tables that follow a consistent format and the simulation can be implemented fully in MS Excel, facilitating review and validation. DICE incorporates both state-transition (Markov) models and non-resource-constrained discrete event simulation in a single formulation; it can be executed as a cohort or a microsimulation; and deterministically or stochastically.
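
    A minimal sketch of the condition/event idea follows. It is a hypothetical toy, not the published DICE specification (which is defined through structured tables and can be run entirely in MS Excel): conditions persist and carry levels, events occur at discrete times, update conditions, and schedule further events, and valuations of the conditions are integrated between events.

```python
import heapq

# Conditions persist over time and have levels that events can change.
conditions = {"alive": 1.0, "disease_severity": 0.2, "cost": 0.0, "qaly": 0.0}

def disease_progression(t, queue):
    # Event: worsen the condition and schedule the next progression check.
    conditions["disease_severity"] = min(1.0, conditions["disease_severity"] + 0.1)
    heapq.heappush(queue, (t + 1.0, "progression"))

def accrue(dt):
    # Valuations of conditions accumulated between events (discrete integration).
    conditions["cost"] += 500.0 * conditions["disease_severity"] * dt   # placeholder cost
    conditions["qaly"] += (1.0 - 0.5 * conditions["disease_severity"]) * dt

def run(horizon=10.0):
    queue = [(0.0, "progression")]
    t_prev = 0.0
    while queue:
        t, name = heapq.heappop(queue)
        if t > horizon:
            break
        accrue(t - t_prev)  # integrate condition valuations up to this event
        if name == "progression":
            disease_progression(t, queue)
        t_prev = t
    accrue(horizon - t_prev)
    return conditions

print(run())
```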

  3. Ground-motion signature of dynamic ruptures on rough faults

    NASA Astrophysics Data System (ADS)

    Mai, P. Martin; Galis, Martin; Thingbaijam, Kiran K. S.; Vyas, Jagdish C.

    2016-04-01

    Natural earthquakes occur on faults characterized by large-scale segmentation and small-scale roughness. This multi-scale geometrical complexity controls the dynamic rupture process, and hence strongly affects the radiated seismic waves and near-field shaking. For a fault system with given segmentation, the question arises as to what conditions produce large-magnitude multi-segment ruptures, as opposed to smaller single-segment events. Similarly, for variable degrees of roughness, ruptures may be arrested prematurely or may break the entire fault. In addition, fault roughness induces rupture incoherence that determines the level of high-frequency radiation. Using HPC-enabled dynamic-rupture simulations, we generate physically self-consistent rough-fault earthquake scenarios (M~6.8) and their associated near-source seismic radiation. Because these computations are too expensive to be conducted routinely for simulation-based seismic hazard assessment, we strive to develop an effective pseudo-dynamic source characterization that produces (almost) the same ground-motion characteristics. Therefore, we examine how variable degrees of fault roughness affect rupture properties and the seismic wavefield, and develop a planar-fault kinematic source representation that emulates the observed dynamic behaviour. We propose an effective workflow for improved pseudo-dynamic source modelling that incorporates rough-fault effects and its associated high-frequency radiation in broadband ground-motion computation for simulation-based seismic hazard assessment.

  4. Sensitivity booster for DOI-PET scanner by utilizing Compton scattering events between detector blocks

    NASA Astrophysics Data System (ADS)

    Yoshida, Eiji; Tashima, Hideaki; Yamaya, Taiga

    2014-11-01

    In a conventional PET scanner, coincidence events are measured within a limited energy window so that only photoelectric events are detected; this rejects Compton scatter events that occur in the patient, but it also rejects Compton scatter events that occur in the detector crystals. Scatter events within the patient cause scatter coincidences, but inter-crystal scattering (ICS) events carry useful information for determining the activity distribution. Some researchers have reported the feasibility of PET scanners based on a Compton camera that traces ICS within the detector; however, these scanners require expensive semiconductor detectors to achieve high energy resolution. In an Anger-type block detector, the interaction positions of single photons interacting with multiple detectors can be obtained individually, so complete information is available just as for photoelectric events in a single detector. ICS events within a single detector have been used to form coincidences, but single photons interacting with multiple detectors have not. In this work, we evaluated the sensitivity improvement obtained by using Compton kinematics in several types of DOI-PET scanners. The proposed method improves sensitivity by using coincidence events formed from single photons interacting with multiple detectors, in which the first interaction (FI) is identified. FI estimation accuracy can be improved by checking FI validity from the correlation between the Compton scatter angles calculated on the coincidence line of response. We simulated an animal PET scanner consisting of 42 detectors. Each detector block consists of three types of scintillator crystals (LSO, GSO, and GAGG). After the simulation, coincidence events are evaluated for several depth-of-interaction (DOI) resolutions. From the simulation results, we conclude that the proposed method improves the sensitivity considerably when the effective atomic number of the scintillator is low. We also show that FI estimation accuracy improves as the DOI resolution becomes finer.
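
    The first-interaction identification can be illustrated with a small sketch based on standard Compton kinematics for 511 keV annihilation photons. The selection rule below (keep only orderings with a kinematically valid scatter angle) is a simplified stand-in for the angle-correlation test on the line of response described in the abstract, and it assumes the two deposits sum to the full photon energy.

```python
ME_C2 = 511.0  # electron rest energy in keV
E0 = 511.0     # annihilation photon energy in keV

def cos_scatter_angle(e_deposit, e_incident=E0):
    """Compton scattering angle implied by treating e_deposit as the first interaction."""
    e_scattered = e_incident - e_deposit
    if e_scattered <= 0.0:
        return None
    return 1.0 - ME_C2 * (1.0 / e_scattered - 1.0 / e_incident)

def first_interaction(e_a, e_b):
    """
    Guess which of two energy deposits (in different detector blocks) was the
    first interaction, keeping only orderings with kinematically valid angles.
    Simplified stand-in for the correlation test described in the abstract.
    """
    candidates = []
    for first, label in ((e_a, "A"), (e_b, "B")):
        c = cos_scatter_angle(first)
        if c is not None and -1.0 <= c <= 1.0:
            candidates.append((label, c))
    if len(candidates) == 1:
        return candidates[0]
    return None  # ambiguous: both (or neither) orderings are kinematically allowed

print(first_interaction(150.0, 361.0))  # the 150 keV deposit is the valid first interaction
```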

  5. Candidate Binary Microlensing Events from the MACHO Project

    NASA Astrophysics Data System (ADS)

    Becker, A. C.; Alcock, C.; Allsman, R. A.; Alves, D. R.; Axelrod, T. S.; Bennett, D. P.; Cook, K. H.; Drake, A. J.; Freeman, K. C.; Griest, K.; King, L. J.; Lehner, M. J.; Marshall, S. L.; Minniti, D.; Peterson, B. A.; Popowski, P.; Pratt, M. R.; Quinn, P. J.; Rodgers, A. W.; Stubbs, C. W.; Sutherland, W.; Tomaney, A.; Vandehei, T.; Welch, D. L.; Baines, D.; Brakel, A.; Crook, B.; Howard, J.; Leach, T.; McDowell, D.; McKeown, S.; Mitchell, J.; Moreland, J.; Pozza, E.; Purcell, P.; Ring, S.; Salmon, A.; Ward, K.; Wyper, G.; Heller, A.; Kaspi, S.; Kovo, O.; Maoz, D.; Retter, A.; Rhie, S. H.; Stetson, P.; Walker, A.; MACHO Collaboration

    1998-12-01

    We present the lightcurves of 22 gravitational microlensing events from the first six years of the MACHO Project gravitational microlensing survey which are likely examples of lensing by binary systems. These events were selected from a total sample of ~ 300 events which were either detected by the MACHO Alert System or discovered through retrospective analyses of the MACHO database. Many of these events appear to have undergone a caustic or cusp crossing, and 2 of the events are well fit with lensing by binary systems with large mass ratios, indicating secondary companions of approximately planetary mass. The event rate is roughly consistent with predictions based upon our knowledge of the properties of binary stars. The utility of binary lensing in helping to solve the Galactic dark matter problem is demonstrated with analyses of 3 binary microlensing events seen towards the Magellanic Clouds. Source star resolution during caustic crossings in 2 of these events allows us to estimate the location of the lensing systems, assuming each source is a single star and not a short period binary. * MACHO LMC-9 appears to be a binary lensing event with a caustic crossing partially resolved in 2 observations. The resulting lens proper motion appears too small for a single source and LMC disk lens. However, it is considerably less likely to be a single source star and Galactic halo lens. We estimate the a priori probability of a short period binary source with a detectable binary character to be ~ 10 %. If the source is also a binary, then we currently have no constraints on the lens location. * The most recent of these events, MACHO 98-SMC-1, was detected in real-time. Follow-up observations by the MACHO/GMAN, PLANET, MPS, EROS and OGLE microlensing collaborations lead to the robust conclusion that the lens likely resides in the SMC.

  6. ELVES Research at the Pierre Auger Observatory: Optical Emission Simulation and Time Evolution, WWLLN-LIS-Auger Correlations, and Double ELVES Observations and Simulation.

    NASA Astrophysics Data System (ADS)

    Merenda, K. D.

    2016-12-01

    Since 2013, the Pierre Auger Cosmic Ray Observatory in Mendoza, Argentina, has extended its trigger algorithm to detect the emissions of light and very low frequency perturbations due to electromagnetic pulse sources (ELVES). Correlations with the World Wide Lightning Location Network (WWLLN), the Lightning Imaging Sensor (LIS), and simulated events were used to assess the quality of the reconstructed data. The fluorescence detector (FD) is a pixel-array telescope sensitive to the deep UV emissions of ELVES, and it provides the finest time resolution, 100 nanoseconds, ever applied to the study of ELVES. Four eyes, separated by approximately 40 kilometers, each consist of six telescopes and together span a total of 360 degrees of azimuth angle. The detector operates at night when storms are not in the field of view. An existing EMP model solves Maxwell's equations using a three-dimensional finite-difference time-domain scheme to describe the propagation of electromagnetic pulses from lightning sources to the ionosphere. The simulation also provides a projection of the resulting ELVES onto the pixel array of the FD. A full reconstruction of simulated events is under development. We introduce a comparison of the analog signal time evolution between Auger reconstructed data and simulated events on individual FD pixels. In conjunction, we will present a study of the angular distribution of light emission around the vertical and above the causative lightning source. We will also contrast with Monte Carlo the Auger double-ELVES events separated by at most 5 microseconds. These events are too short to be explained by multiple return strokes, ground reflections, or compact intra-cloud lightning sources. Reconstructed ELVES data are 40% correlated with WWLLN data, and an analysis with the LIS database is underway.

  7. Simulations of X-ray diffraction of shock-compressed single-crystal tantalum with synchrotron undulator sources.

    PubMed

    Tang, M X; Zhang, Y Y; E, J C; Luo, S N

    2018-05-01

    Polychromatic synchrotron undulator X-ray sources are useful for ultrafast single-crystal diffraction under shock compression. Here, simulations of X-ray diffraction of shock-compressed single-crystal tantalum with realistic undulator sources are reported, based on large-scale molecular dynamics simulations. Purely elastic deformation, elastic-plastic two-wave structure, and severe plastic deformation under different impact velocities are explored, as well as an edge release case. Transmission-mode diffraction simulations consider crystallographic orientation, loading direction, incident beam direction, X-ray spectrum bandwidth and realistic detector size. Diffraction patterns and reciprocal space nodes are obtained from atomic configurations for different loading (elastic and plastic) and detection conditions, and interpretation of the diffraction patterns is discussed.

  8. Simulations of X-ray diffraction of shock-compressed single-crystal tantalum with synchrotron undulator sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tang, M. X.; Zhang, Y. Y.; E, J. C.

    Polychromatic synchrotron undulator X-ray sources are useful for ultrafast single-crystal diffraction under shock compression. Here, simulations of X-ray diffraction of shock-compressed single-crystal tantalum with realistic undulator sources are reported, based on large-scale molecular dynamics simulations. Purely elastic deformation, elastic–plastic two-wave structure, and severe plastic deformation under different impact velocities are explored, as well as an edge release case. Transmission-mode diffraction simulations consider crystallographic orientation, loading direction, incident beam direction, X-ray spectrum bandwidth and realistic detector size. Diffraction patterns and reciprocal space nodes are obtained from atomic configurations for different loading (elastic and plastic) and detection conditions, and interpretation of the diffraction patterns is discussed.

  9. Alternative source models of very low frequency events

    NASA Astrophysics Data System (ADS)

    Gomberg, J.; Agnew, D. C.; Schwartz, S. Y.

    2016-09-01

    We present alternative source models for very low frequency (VLF) events, previously inferred to be radiation from individual slow earthquakes that partly fill the period range between slow slip events lasting thousands of seconds and low-frequency earthquakes (LFE) with durations of tenths of a second. We show that VLF events may emerge from bandpass filtering a sum of clustered, shorter duration, LFE signals, believed to be the components of tectonic tremor. Most published studies show VLF events occurring concurrently with tremor bursts and LFE signals. Our analysis of continuous data from Costa Rica detected VLF events only when tremor was also occurring, which was only 7% of the total time examined. Using analytic and synthetic models, we show that a cluster of LFE signals produces the distinguishing characteristics of VLF events, which may be determined by the cluster envelope. The envelope may be diagnostic of a single, dynamic, slowly slipping event that propagates coherently over kilometers or represents a narrowly band-passed version of nearly simultaneous arrivals of radiation from slip on multiple higher stress drop and/or faster propagating slip patches with dimensions of tens of meters (i.e., LFE sources). Temporally clustered LFE sources may be triggered by single or multiple distinct aseismic slip events or represent the nearly simultaneous chance occurrence of background LFEs. Given the nonuniqueness in possible source durations, we suggest it is premature to draw conclusions about VLF event sources or how they scale.

  10. Alternative source models of very low frequency events

    USGS Publications Warehouse

    Gomberg, Joan S.; Agnew, D.C.; Schwartz, S.Y.

    2016-01-01

    We present alternative source models for very low frequency (VLF) events, previously inferred to be radiation from individual slow earthquakes that partly fill the period range between slow slip events lasting thousands of seconds and low-frequency earthquakes (LFE) with durations of tenths of a second. We show that VLF events may emerge from bandpass filtering a sum of clustered, shorter duration, LFE signals, believed to be the components of tectonic tremor. Most published studies show VLF events occurring concurrently with tremor bursts and LFE signals. Our analysis of continuous data from Costa Rica detected VLF events only when tremor was also occurring, which was only 7% of the total time examined. Using analytic and synthetic models, we show that a cluster of LFE signals produces the distinguishing characteristics of VLF events, which may be determined by the cluster envelope. The envelope may be diagnostic of a single, dynamic, slowly slipping event that propagates coherently over kilometers or represents a narrowly band-passed version of nearly simultaneous arrivals of radiation from slip on multiple higher stress drop and/or faster propagating slip patches with dimensions of tens of meters (i.e., LFE sources). Temporally clustered LFE sources may be triggered by single or multiple distinct aseismic slip events or represent the nearly simultaneous chance occurrence of background LFEs. Given the nonuniqueness in possible source durations, we suggest it is premature to draw conclusions about VLF event sources or how they scale.

  11. Surface Dimming by the 2013 Rim Fire Simulated by a Sectional Aerosol Model

    NASA Technical Reports Server (NTRS)

    Yu, Pengfei; Toon, Owen B.; Bardeen, Charles G; Bucholtz, Anthony; Rosenlof, Karen; Saide, Pablo E.; Da Silva, Arlindo M.; Ziemba, Luke D.; Thornhill, Kenneth L.; Jimenez, Jose-Luis

    2016-01-01

    The Rim Fire of 2013, the third largest area burned by fire recorded in California history, is simulated by a climate model coupled with a size-resolved aerosol model. Modeled aerosol mass, number and particle size distribution are within variability of data obtained from multiple airborne in-situ measurements. Simulations suggest Rim Fire smoke may block 4-6% of sunlight energy reaching the surface, with a dimming efficiency around 120-150 W m(exp -2) per unit aerosol optical depth in the mid-visible at 13:00-15:00 local time. Underestimation of simulated smoke single scattering albedo at mid-visible by 0.04 suggests the model overestimates either the particle size or the absorption due to black carbon. This study shows that exceptional events like the 2013 Rim Fire can be simulated by a climate model with one-degree resolution with overall good skill, though that resolution is still not sufficient to resolve the smoke peak near the source region.

  12. Surface dimming by the 2013 Rim Fire simulated by a sectional aerosol model.

    PubMed

    Yu, Pengfei; Toon, Owen B; Bardeen, Charles G; Bucholtz, Anthony; Rosenlof, Karen H; Saide, Pablo E; Da Silva, Arlindo; Ziemba, Luke D; Thornhill, Kenneth L; Jimenez, Jose-Luis; Campuzano-Jost, Pedro; Schwarz, Joshua P; Perring, Anne E; Froyd, Karl D; Wagner, N L; Mills, Michael J; Reid, Jeffrey S

    2016-06-27

    The Rim Fire of 2013, the third largest area burned by fire recorded in California history, is simulated by a climate model coupled with a size-resolved aerosol model. Modeled aerosol mass, number, and particle size distribution are within variability of data obtained from multiple airborne in situ measurements. Simulations suggest that Rim Fire smoke may block 4-6% of sunlight energy reaching the surface, with a dimming efficiency around 120-150 W m-2 per unit aerosol optical depth in the midvisible at 13:00-15:00 local time. Underestimation of simulated smoke single scattering albedo at midvisible by 0.04 suggests that the model overestimates either the particle size or the absorption due to black carbon. This study shows that exceptional events like the 2013 Rim Fire can be simulated by a climate model with 1° resolution with overall good skill, although that resolution is still not sufficient to resolve the smoke peak near the source region.

  13. New Methodologies Applied to Seismic Hazard Assessment in Southern Calabria (Italy)

    NASA Astrophysics Data System (ADS)

    Console, R.; Chiappini, M.; Speranza, F.; Carluccio, R.; Greco, M.

    2016-12-01

    Although it is generally recognized that the M7+ 1783 and 1908 Calabria earthquakes were caused by normal faults rupturing the upper crust of the southern Calabria-Peloritani area, no consensus exists on seismogenic source location and orientation. A recent high-resolution low-altitude aeromagnetic survey of southern Calabria and the Messina straits suggested that the sources of the 1783 and 1908 earthquakes are en echelon faults belonging to the same NW-dipping normal fault system straddling the whole of southern Calabria. The application of a newly developed physics-based earthquake simulator to the active fault system, modeled from the data obtained from the aeromagnetic survey and other recent geological studies, has allowed the production of catalogs lasting 100,000 years and containing more than 25,000 events of magnitude ≥ 4.0. The algorithm on which this simulator is based is constrained by several physical elements: (a) an average slip rate due to tectonic loading for every single segment in the investigated fault system, (b) the process of rupture growth and termination, leading to a self-organized earthquake magnitude distribution, and (c) interaction between earthquake sources, including small magnitude events. Events nucleated in one segment are allowed to expand into neighboring segments if they are separated by no more than a given maximum distance. The application of our simulation algorithm to the Calabria region provides typical features of the time, space and magnitude behaviour of the seismicity, which can be compared with those of the real observations. These features include long-term pseudo-periodicity and clustering of strong earthquakes, and a realistic earthquake magnitude distribution departing from the Gutenberg-Richter distribution in the moderate and higher magnitude range. Lastly, as an example of a possible use of synthetic catalogs, an attenuation law has been applied to all the events reported in the synthetic catalog to produce maps showing the exceedance probability of given values of peak ground acceleration (PGA) over the territory under investigation. These maps can be compared with the existing hazard maps that are presently used in the national seismic building regulations.
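
    As a sketch of the last step, the snippet below turns a synthetic catalog into an exceedance probability at a single site using a generic attenuation law of the form log10(PGA) = a + b*M - c*log10(R). The coefficients, catalog, and distances are placeholders, not the relation or catalog used in the study.

```python
import numpy as np

rng = np.random.default_rng(3)

# Stand-in for a 100,000-year synthetic catalog: magnitudes and site distances (km).
catalog_years = 100_000.0
mags = rng.uniform(4.0, 7.2, size=25_000)
dists = rng.uniform(5.0, 120.0, size=25_000)

def pga_g(m, r_km, a=-1.8, b=0.45, c=1.3):
    """Hypothetical attenuation law: log10(PGA [g]) = a + b*M - c*log10(R)."""
    return 10.0 ** (a + b * m - c * np.log10(r_km))

def exceedance_probability(pga_threshold_g, exposure_years=50.0):
    """Probability of exceeding the PGA threshold at least once in the exposure time."""
    n_exceed = np.sum(pga_g(mags, dists) >= pga_threshold_g)
    annual_rate = n_exceed / catalog_years
    return 1.0 - np.exp(-annual_rate * exposure_years)

print("P(PGA >= 0.2 g in 50 yr):", round(exceedance_probability(0.2), 3))
```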

  14. Simulation of diatomic gas-wall interaction and accommodation coefficients for negative ion sources and accelerators.

    PubMed

    Sartori, E; Brescaccin, L; Serianni, G

    2016-02-01

    Particle-wall interactions determine in different ways the operating conditions of plasma sources, ion accelerators, and beams operating in vacuum. For instance, a contribution to gas heating is given by ion neutralization at walls; beam losses and stray particle production-detrimental for high current negative ion systems such as beam sources for fusion-are caused by collisional processes with residual gas, with the gas density profile that is determined by the scattering of neutral particles at the walls. This paper shows that Molecular Dynamics (MD) studies at the nano-scale can provide accommodation parameters for gas-wall interactions, such as the momentum accommodation coefficient and energy accommodation coefficient: in non-isothermal flows (such as the neutral gas in the accelerator, coming from the plasma source), these affect the gas density gradients and influence efficiency and losses in particular of negative ion accelerators. For ideal surfaces, the computation also provides the angular distribution of scattered particles. Classical MD method has been applied to the case of diatomic hydrogen molecules. Single collision events, against a frozen wall or a fully thermal lattice, have been simulated by using probe molecules. Different modelling approximations are compared.
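
    The accommodation coefficients mentioned above can be estimated from ensembles of single-collision outcomes; a rough sketch follows, using the usual definitions in terms of mean incident and reflected energies (or tangential momenta) and the wall-temperature limit. The five-degree-of-freedom wall energy, the neglect of flux weighting, and the numerical inputs are simplifying assumptions, not the paper's procedure.

```python
import numpy as np

KB = 1.380649e-23  # Boltzmann constant, J/K

def energy_accommodation(e_incident, e_reflected, t_wall, dof=5):
    """
    Energy accommodation coefficient from per-collision incident and reflected
    energies (J). e_wall is the mean energy of molecules fully thermalised to the
    wall; dof=5 assumes a rigid-rotor diatomic (translation plus rotation).
    """
    e_wall = 0.5 * dof * KB * t_wall
    return (np.mean(e_incident) - np.mean(e_reflected)) / (np.mean(e_incident) - e_wall)

def tangential_momentum_accommodation(p_incident, p_reflected):
    """Tangential momentum accommodation; 1 corresponds to fully diffuse re-emission."""
    return (np.mean(p_incident) - np.mean(p_reflected)) / np.mean(p_incident)

# Illustrative numbers standing in for single-collision MD outputs (energies in J).
rng = np.random.default_rng(7)
e_in = rng.normal(2.0e-20, 1e-21, 1000)
e_out = rng.normal(1.35e-20, 1e-21, 1000)
print(energy_accommodation(e_in, e_out, t_wall=300.0))
```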

  15. Simulation of diatomic gas-wall interaction and accommodation coefficients for negative ion sources and accelerators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sartori, E., E-mail: emanuele.sartori@igi.cnr.it; Serianni, G.; Brescaccin, L.

    2016-02-15

    Particle-wall interactions determine in different ways the operating conditions of plasma sources, ion accelerators, and beams operating in vacuum. For instance, a contribution to gas heating is given by ion neutralization at walls; beam losses and stray particle production—detrimental for high current negative ion systems such as beam sources for fusion—are caused by collisional processes with residual gas, with the gas density profile that is determined by the scattering of neutral particles at the walls. This paper shows that Molecular Dynamics (MD) studies at the nano-scale can provide accommodation parameters for gas-wall interactions, such as the momentum accommodation coefficient and energy accommodation coefficient: in non-isothermal flows (such as the neutral gas in the accelerator, coming from the plasma source), these affect the gas density gradients and influence efficiency and losses in particular of negative ion accelerators. For ideal surfaces, the computation also provides the angular distribution of scattered particles. Classical MD method has been applied to the case of diatomic hydrogen molecules. Single collision events, against a frozen wall or a fully thermal lattice, have been simulated by using probe molecules. Different modelling approximations are compared.

  16. A generalization of the double-corner-frequency source spectral model and its use in the SCEC BBP validation exercise

    USGS Publications Warehouse

    Boore, David M.; Di Alessandro, Carola; Abrahamson, Norman A.

    2014-01-01

    The stochastic method of simulating ground motions requires the specification of the shape and scaling with magnitude of the source spectrum. The spectral models commonly used are either single-corner-frequency or double-corner-frequency models, but the latter have no flexibility to vary the high-frequency spectral levels for a specified seismic moment. Two generalized double-corner-frequency ω2 source spectral models are introduced, one in which two spectra are multiplied together, and another where they are added. Both models have a low-frequency dependence controlled by the seismic moment, and a high-frequency spectral level controlled by the seismic moment and a stress parameter. A wide range of spectral shapes can be obtained from these generalized spectral models, which makes them suitable for inversions of data to obtain spectral models that can be used in ground-motion simulations in situations where adequate data are not available for purely empirical determinations of ground motions, as in stable continental regions. As an example of the use of the generalized source spectral models, data from up to 40 stations from seven events, plus response spectra at two distances and two magnitudes from recent ground-motion prediction equations, were inverted to obtain the parameters controlling the spectral shapes, as well as a finite-fault factor that is used in point-source, stochastic-method simulations of ground motion. The fits to the data are comparable to or even better than those from finite-fault simulations, even for sites close to large earthquakes.
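
    Two common realisations of multiplicative and additive double-corner-frequency shapes are sketched below; they illustrate the idea of combining two single-corner terms while keeping an overall omega-squared high-frequency falloff, but they are generic forms with placeholder parameters, not necessarily the exact generalized models introduced in the paper.

```python
import numpy as np

def brune_term(f, fc):
    """Single-corner omega-squared displacement spectral shape (unit low-frequency level)."""
    return 1.0 / (1.0 + (f / fc) ** 2)

def double_corner_multiplicative(f, m0, fa, fb):
    """Product of two omega^-1 terms; falls off as omega^-2 at high frequency."""
    return m0 / np.sqrt((1.0 + (f / fa) ** 2) * (1.0 + (f / fb) ** 2))

def double_corner_additive(f, m0, fa, fb, eps):
    """Weighted sum of two omega^-2 terms; eps controls the high-frequency level."""
    return m0 * ((1.0 - eps) * brune_term(f, fa) + eps * brune_term(f, fb))

f = np.logspace(-2, 1.5, 200)           # frequency in Hz
m0, fa, fb, eps = 1.0, 0.05, 1.0, 0.1   # placeholder moment, corners, and weight
print(double_corner_multiplicative(f, m0, fa, fb)[::50])
print(double_corner_additive(f, m0, fa, fb, eps)[::50])
```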

  17. Source process of a long-period event at Kilauea volcano, Hawaii

    USGS Publications Warehouse

    Kumagai, H.; Chouet, B.A.; Dawson, P.B.

    2005-01-01

    We analyse a long-period (LP) event observed by a dense seismic network temporarily operated at Kilauea volcano, Hawaii, in 1996. We systematically perform spectral analyses, waveform inversions and forward modeling of the LP event to quantify its source process. Spectral analyses identify two dominant spectral frequencies at 0.6 and 1.3 Hz with associated Q values in the range 10-20. Results from waveform inversions assuming six moment-tensor and three single-force components point to the resonance of a horizontal crack located at a depth of approximately 150 m near the northeastern rim of the Halemaumau pit crater. Waveform simulations based on a fluid-filled crack model suggest that the observed frequencies and Q values can be explained by a crack filled with a hydrothermal fluid in the form of either bubbly water or steam. The shallow hydrothermal crack located directly above the magma conduit may have been heated by volcanic gases leaking from the conduit. The enhanced flux of heat raised the overall pressure of the hydrothermal fluid in the crack and induced a rapid discharge of fluid from the crack, which triggered the acoustic vibrations of the resonator generating the LP waveform. The present study provides further support to the idea that LP events originate in the resonance of a crack. © 2005 RAS.

  18. A source model of the 2014 South Napa Earthquake by the EGF broad-band strong ground motion simulation

    NASA Astrophysics Data System (ADS)

    Iwata, T.; Asano, K.; Kubo, H.

    2014-12-01

    The source model of the 2014 South Napa earthquake (Mw 6.0) is estimated using broadband strong ground motion simulation by the empirical Green's function (EGF) method (Irikura, 1986; Irikura et al., 1997). We use the CESMD strong motion data. Ground motion records of an Mw 3.6 aftershock, which occurred at 05:33 on 24 August (PDT), are used as the empirical Green's function. We refer to the finite source model by Dreger et al. (2014) for setting the geometry of the source fault plane and the rupture velocity. We assume a single rectangular strong motion generation area (SMGA) (e.g., Miyake et al., 2003; Asano and Iwata, 2012). The seismic moment ratio between the target and EGF events is fixed from the moment magnitudes. As aftershock records are available at only five stations, the size of the SMGA, the rupture starting point, and the rise time on the SMGA are determined by trial and error. The preliminary SMGA model is 6 x 6 km2, and the rupture mainly propagates toward the WNW and toward shallower depths. The SMGA size we obtained follows the empirical relationship between Mw and SMGA size for inland crustal events (Irikura and Miyake, 2011). Waveform fits are fairly good at the near-source stations NHC (Huichica Creek) and 68150 (Napa College), whereas the fit is not good at the south-side stations, 68206 (Crockett - Carquinez Br. Geotech Array) and 68310 (Vallejo - Hwy 37/Napa River E Geo. Array). In particular, we did not succeed in explaining the high PGA at the 68206 surface station. We will try to improve our SMGA model and will discuss the origin of the high PGA observed at that station.

  19. Replicable Interprofessional Competency Outcomes from High-Volume, Inter-Institutional, Interprofessional Simulation

    PubMed Central

    Bambini, Deborah; Emery, Matthew; de Voest, Margaret; Meny, Lisa; Shoemaker, Michael J.

    2016-01-01

    There are significant limitations among the few prior studies that have examined the development and implementation of interprofessional education (IPE) experiences to accommodate a high volume of students from several disciplines and from different institutions. The present study addressed these gaps by seeking to determine the extent to which a single, large, inter-institutional, and IPE simulation event improves student perceptions of the importance and relevance of IPE and simulation as a learning modality, whether there is a difference in students’ perceptions among disciplines, and whether the results are reproducible. A total of 290 medical, nursing, pharmacy, and physical therapy students participated in one of two large, inter-institutional, IPE simulation events. Measurements included student perceptions about their simulation experience using the Attitude Towards Teamwork in Training Undergoing Designed Educational Simulation (ATTITUDES) Questionnaire and open-ended questions related to teamwork and communication. Results demonstrated a statistically significant improvement across all ATTITUDES subscales, while time management, role confusion, collaboration, and mutual support emerged as significant themes. Results of the present study indicate that a single IPE simulation event can reproducibly result in significant and educationally meaningful improvements in student perceptions towards teamwork, IPE, and simulation as a learning modality. PMID:28970407

  20. Synthetic Seismograms of Explosive Sources Calculated by the Earth Simulator

    NASA Astrophysics Data System (ADS)

    Tsuboi, S.; Matsumoto, H.; Rozhkov, M.; Stachnik, J.

    2017-12-01

    We calculate broadband synthetic seismograms using the spectral-element method (Komatitsch & Tromp, 2001) for recent explosive events in the northern Korean Peninsula. We use the Earth Simulator supercomputer system at JAMSTEC to compute the synthetic seismograms. The simulations are performed on 8,100 processors, which require 2,025 nodes of the Earth Simulator. We use one chunk with an angular distance of 40 degrees to compute synthetic seismograms. On this number of nodes, a simulation of 5 minutes of wave propagation accurate at periods of 1.5 seconds and longer requires about 10 hours of CPU time. We use the CMT solution of Rozhkov et al. (2016) as a source model for this event. One example CMT solution for this source model has a 28% double-couple component and a 51% isotropic component. The hypocenter depth of this solution is 1.4 km. Comparisons of the synthetic waveforms with the observations show that the arrival times of the Pn and Pg waves match the observations well. The comparison also shows that the amplitude agreement for other phases is not as good, which demonstrates that an improved crustal structure should be included in the simulation. The observed surface waves are also modeled well in the synthetics, which shows that the CMT solution used for this computation correctly captures the source characteristics of this event. Because the hypocenter locations of artificial explosive sources are already known, we can evaluate the crustal structure along the propagation path from waveform modeling of these sources. We discuss the limitations of a one-dimensional crustal structure model by comparing synthetic waveforms computed for a 3D crustal structure with the observed seismograms.

  1. Eruption dynamics at Mount St. Helens imaged from broadband seismic waveforms: Interaction of the shallow magmatic and hydrothermal systems

    USGS Publications Warehouse

    Waite, G.P.; Chouet, B.A.; Dawson, P.B.

    2008-01-01

    The current eruption at Mount St. Helens is characterized by dome building and shallow, repetitive, long-period (LP) earthquakes. Waveform cross-correlation reveals remarkable similarity for a majority of the earthquakes over periods of several weeks. Stacked spectra of these events display multiple peaks between 0.5 and 2 Hz that are common to most stations. Lower-amplitude very-long-period (VLP) events commonly accompany the LP events. We model the source mechanisms of LP and VLP events in the 0.5-4 s and 8-40 s bands, respectively, using data recorded in July 2005 with a 19-station temporary broadband network. The source mechanism of the LP events includes: 1) a volumetric component modeled as resonance of a gently NNW-dipping, steam-filled crack located directly beneath the actively extruding part of the new dome and within 100 m of the crater floor and 2) a vertical single force attributed to movement of the overlying dome. The VLP source, which also includes volumetric and single-force components, is 250 m deeper and NNW of the LP source, at the SW edge of the 1980s lava dome. The volumetric component points to the compression and expansion of a shallow, magma-filled sill, which is subparallel to the hydrothermal crack imaged at the LP source, coupled with a smaller component of expansion and compression of a dike. The single-force components are due to mass advection in the magma conduit. The location, geometry and timing of the sources suggest the VLP and LP events are caused by perturbations of a common crack system.

  2. Empirical Modeling Of Single-Event Upset

    NASA Technical Reports Server (NTRS)

    Zoutendyk, John A.; Smith, Lawrence S.; Soli, George A.; Thieberger, Peter; Smith, Stephen L.; Atwood, Gregory E.

    1988-01-01

    Experimental study presents examples of empirical modeling of single-event upset in negatively-doped-source/drain metal-oxide-semiconductor static random-access memory cells. Data support adoption of simplified worst-case model in which cross section of SEU by ion above threshold energy equals area of memory cell.

  3. Urban nonpoint source pollution buildup and washoff models for simulating storm runoff quality in the Los Angeles County.

    PubMed

    Wang, Long; Wei, Jiahua; Huang, Yuefei; Wang, Guangqian; Maqsood, Imran

    2011-07-01

    Many urban nonpoint source pollution models utilize pollutant buildup and washoff functions to simulate storm runoff quality of urban catchments. In this paper, two urban pollutant washoff load models are derived using pollutant buildup and washoff functions. The first model assumes that there is no residual pollutant after a storm event while the second one assumes that there is always residual pollutant after each storm event. The developed models are calibrated and verified with observed data from an urban catchment in the Los Angeles County. The application results show that the developed model with consideration of residual pollutant is more capable of simulating nonpoint source pollution from urban storm runoff than that without consideration of residual pollutant. For the study area, residual pollutant should be considered in pollutant buildup and washoff functions for simulating urban nonpoint source pollution when the total runoff volume is less than 30 mm. Copyright © 2011 Elsevier Ltd. All rights reserved.
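
    The abstract does not give the functional forms, but the widely used exponential buildup and first-order washoff equations make the two residual-pollutant assumptions easy to contrast; the sketch below uses those standard forms with placeholder coefficients, not the calibrated Los Angeles County parameters.

```python
import math

def buildup(b0, b_max, k_b, dry_days):
    """Exponential buildup from an initial load b0 toward b_max (e.g., kg/ha)."""
    return b_max - (b_max - b0) * math.exp(-k_b * dry_days)

def washoff(b, k_w, runoff_mm):
    """First-order washoff of the accumulated load during a storm event."""
    return b * (1.0 - math.exp(-k_w * runoff_mm))

def simulate(storms, residual=True, b_max=12.0, k_b=0.4, k_w=0.18):
    """storms: list of (antecedent dry days, event runoff in mm). Returns washoff loads."""
    b0, loads = 0.0, []
    for dry_days, runoff_mm in storms:
        b = buildup(b0, b_max, k_b, dry_days)
        w = washoff(b, k_w, runoff_mm)
        loads.append(round(w, 2))
        b0 = (b - w) if residual else 0.0  # the two assumptions compared in the paper
    return loads

storms = [(5, 8.0), (2, 25.0), (10, 40.0)]
print("with residual:   ", simulate(storms, residual=True))
print("without residual:", simulate(storms, residual=False))
```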

  4. The development of a simulation model of the treatment of coronary heart disease.

    PubMed

    Cooper, Keith; Davies, Ruth; Roderick, Paul; Chase, Debbie; Raftery, James

    2002-11-01

    A discrete event simulation models the progress of patients who have had a coronary event through their treatment pathways and subsequent coronary events. The main risk factors in the model are age, sex, history of previous events, and the extent of the coronary vessel disease. The model parameters are based on data collected from epidemiological studies of incidence and prognosis, efficacy studies, national surveys, and treatment audits. The simulation results were validated against different sources of data. The initial results show that increasing revascularisation has considerable implications for resource use but has little impact on patient mortality.

  5. Stochastic simulation tools and continuum models for describing two-dimensional collective cell spreading with universal growth functions

    NASA Astrophysics Data System (ADS)

    Jin, Wang; Penington, Catherine J.; McCue, Scott W.; Simpson, Matthew J.

    2016-10-01

    Two-dimensional collective cell migration assays are used to study cancer and tissue repair. These assays involve combined cell migration and cell proliferation processes, both of which are modulated by cell-to-cell crowding. Previous discrete models of collective cell migration assays involve a nearest-neighbour proliferation mechanism where crowding effects are incorporated by aborting potential proliferation events if the randomly chosen target site is occupied. There are two limitations of this traditional approach: (i) it seems unreasonable to abort a potential proliferation event based on the occupancy of a single, randomly chosen target site; and, (ii) the continuum limit description of this mechanism leads to the standard logistic growth function, but some experimental evidence suggests that cells do not always proliferate logistically. Motivated by these observations, we introduce a generalised proliferation mechanism which allows non-nearest neighbour proliferation events to take place over a template of r ≥ 1 concentric rings of lattice sites. Further, the decision to abort potential proliferation events is made using a crowding function, f(C), which accounts for the density of agents within a group of sites rather than dealing with the occupancy of a single randomly chosen site. Analysing the continuum limit description of the stochastic model shows that the standard logistic source term, λ C(1-C), where λ is the proliferation rate, is generalised to a universal growth function, λ C f(C). Comparing the solution of the continuum description with averaged simulation data indicates that the continuum model performs well for many choices of f(C) and r. For nonlinear f(C), the quality of the continuum-discrete match increases with r.

  6. Stochastic simulation tools and continuum models for describing two-dimensional collective cell spreading with universal growth functions.

    PubMed

    Jin, Wang; Penington, Catherine J; McCue, Scott W; Simpson, Matthew J

    2016-10-07

    Two-dimensional collective cell migration assays are used to study cancer and tissue repair. These assays involve combined cell migration and cell proliferation processes, both of which are modulated by cell-to-cell crowding. Previous discrete models of collective cell migration assays involve a nearest-neighbour proliferation mechanism where crowding effects are incorporated by aborting potential proliferation events if the randomly chosen target site is occupied. There are two limitations of this traditional approach: (i) it seems unreasonable to abort a potential proliferation event based on the occupancy of a single, randomly chosen target site; and, (ii) the continuum limit description of this mechanism leads to the standard logistic growth function, but some experimental evidence suggests that cells do not always proliferate logistically. Motivated by these observations, we introduce a generalised proliferation mechanism which allows non-nearest neighbour proliferation events to take place over a template of r ≥ 1 concentric rings of lattice sites. Further, the decision to abort potential proliferation events is made using a crowding function, f(C), which accounts for the density of agents within a group of sites rather than dealing with the occupancy of a single randomly chosen site. Analysing the continuum limit description of the stochastic model shows that the standard logistic source term, λC(1-C), where λ is the proliferation rate, is generalised to a universal growth function, λCf(C). Comparing the solution of the continuum description with averaged simulation data indicates that the continuum model performs well for many choices of f(C) and r. For nonlinear f(C), the quality of the continuum-discrete match increases with r.

  7. A comprehensive simulation framework for imaging single particles and biomolecules at the European X-ray Free-Electron Laser

    NASA Astrophysics Data System (ADS)

    Yoon, Chun Hong; Yurkov, Mikhail V.; Schneidmiller, Evgeny A.; Samoylova, Liubov; Buzmakov, Alexey; Jurek, Zoltan; Ziaja, Beata; Santra, Robin; Loh, N. Duane; Tschentscher, Thomas; Mancuso, Adrian P.

    2016-04-01

    The advent of newer, brighter, and more coherent X-ray sources, such as X-ray Free-Electron Lasers (XFELs), represents a tremendous growth in the potential to apply coherent X-rays to determine the structure of materials from the micron-scale down to the Angstrom-scale. There is a significant need for a multi-physics simulation framework to perform source-to-detector simulations for a single particle imaging experiment, including (i) the multidimensional simulation of the X-ray source; (ii) simulation of the wave-optics propagation of the coherent XFEL beams; (iii) atomistic modelling of photon-material interactions; (iv) simulation of the time-dependent diffraction process, including incoherent scattering; (v) assembling noisy and incomplete diffraction intensities into a three-dimensional data set using the Expansion-Maximisation-Compression (EMC) algorithm and (vi) phase retrieval to obtain structural information. We demonstrate the framework by simulating a single-particle experiment for a nitrogenase iron protein using parameters of the SPB/SFX instrument of the European XFEL. This exercise demonstrably yields interpretable consequences for structure determination that are crucial yet currently unavailable for experiment design.
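
    A skeleton of how source-to-detector stages like (i)-(vi) can be chained in a single pipeline. The stage names and the run_pipeline helper below are placeholders invented for illustration; they are not the API of the framework described in the record. Each placeholder stage simply records that it ran and passes its state forward.

```python
def make_stage(name):
    """Return a stand-in stage that tags the state and hands it to the next stage."""
    def stage(state):
        state = dict(state)
        state["history"].append(name)
        return state
    return stage

STAGES = [make_stage(n) for n in (
    "source", "wave_optics_propagation", "photon_matter_interaction",
    "diffraction", "emc_assembly", "phase_retrieval")]

def run_pipeline(params, stages=STAGES):
    state = dict(params, history=[])
    for stage in stages:
        state = stage(state)      # each stage consumes the previous stage's output
    return state

print(run_pipeline({"photon_energy_keV": 5.0})["history"])
```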

  8. Effects of cosmic rays on single event upsets

    NASA Technical Reports Server (NTRS)

    Venable, D. D.; Zajic, V.; Lowe, C. W.; Olidapupo, A.; Fogarty, T. N.

    1989-01-01

    Assistance was provided to the Brookhaven Single Event Upset (SEU) Test Facility. Computer codes were developed for fragmentation and secondary radiation affecting Very Large Scale Integration (VLSI) in space. A computer-controlled CV (HP4192) test was developed for Terman analysis. Also developed were high-speed parametric tests which are independent of operator judgment and a charge pumping technique for measurement of Dit(E). X-ray secondary effects and parametric degradation as a function of dose rate were simulated. The SPICE simulation of static RAMs with various resistor filters was tested.

  9. Design and Performance of a Triple Source Air Mass Zero Solar Simulator

    NASA Technical Reports Server (NTRS)

    Jenkins, Phillip; Scheiman, David; Snyder, David

    2005-01-01

    Simulating the sun in a laboratory for the purpose of measuring solar cells has long been a challenge for engineers and scientists. Multi-junction cells demand higher fidelity of a solar simulator than do single junction cells, due to a need for close spectral matching as well as AM0 intensity. A GaInP/GaAs/Ge solar cell, for example, requires spectral matching in three distinct spectral bands (figure 1). A commercial single-source high-pressure xenon arc solar simulator, such as the Spectrolab X-25 at NASA Glenn Research Center, can match the top two junctions of a GaInP/GaAs/Ge cell to within 1.3% mismatch, with the GaAs cell receiving slightly more current than required. The Ge bottom cell, however, is mismatched by +8.8%. Multi-source simulators are designed to match the current for all junctions but typically have small illuminated areas, less uniformity and less beam collimation compared to an X-25 simulator. It was our intent when designing a multi-source simulator to preserve as many aspects of the X-25 as possible while adding multi-source capability.

  10. Convolutional neural networks applied to neutrino events in a liquid argon time projection chamber

    NASA Astrophysics Data System (ADS)

    Acciarri, R.; Adams, C.; An, R.; Asaadi, J.; Auger, M.; Bagby, L.; Baller, B.; Barr, G.; Bass, M.; Bay, F.; Bishai, M.; Blake, A.; Bolton, T.; Bugel, L.; Camilleri, L.; Caratelli, D.; Carls, B.; Castillo Fernandez, R.; Cavanna, F.; Chen, H.; Church, E.; Cianci, D.; Collin, G. H.; Conrad, J. M.; Convery, M.; Crespo-Anadón, J. I.; Del Tutto, M.; Devitt, D.; Dytman, S.; Eberly, B.; Ereditato, A.; Escudero Sanchez, L.; Esquivel, J.; Fleming, B. T.; Foreman, W.; Furmanski, A. P.; Garvey, G. T.; Genty, V.; Goeldi, D.; Gollapinni, S.; Graf, N.; Gramellini, E.; Greenlee, H.; Grosso, R.; Guenette, R.; Hackenburg, A.; Hamilton, P.; Hen, O.; Hewes, J.; Hill, C.; Ho, J.; Horton-Smith, G.; James, C.; de Vries, J. Jan; Jen, C.-M.; Jiang, L.; Johnson, R. A.; Jones, B. J. P.; Joshi, J.; Jostlein, H.; Kaleko, D.; Karagiorgi, G.; Ketchum, W.; Kirby, B.; Kirby, M.; Kobilarcik, T.; Kreslo, I.; Laube, A.; Li, Y.; Lister, A.; Littlejohn, B. R.; Lockwitz, S.; Lorca, D.; Louis, W. C.; Luethi, M.; Lundberg, B.; Luo, X.; Marchionni, A.; Mariani, C.; Marshall, J.; Martinez Caicedo, D. A.; Meddage, V.; Miceli, T.; Mills, G. B.; Moon, J.; Mooney, M.; Moore, C. D.; Mousseau, J.; Murrells, R.; Naples, D.; Nienaber, P.; Nowak, J.; Palamara, O.; Paolone, V.; Papavassiliou, V.; Pate, S. F.; Pavlovic, Z.; Porzio, D.; Pulliam, G.; Qian, X.; Raaf, J. L.; Rafique, A.; Rochester, L.; von Rohr, C. Rudolf; Russell, B.; Schmitz, D. W.; Schukraft, A.; Seligman, W.; Shaevitz, M. H.; Sinclair, J.; Snider, E. L.; Soderberg, M.; Söldner-Rembold, S.; Soleti, S. R.; Spentzouris, P.; Spitz, J.; St. John, J.; Strauss, T.; Szelc, A. M.; Tagg, N.; Terao, K.; Thomson, M.; Toups, M.; Tsai, Y.-T.; Tufanli, S.; Usher, T.; Van de Water, R. G.; Viren, B.; Weber, M.; Weston, J.; Wickremasinghe, D. A.; Wolbers, S.; Wongjirad, T.; Woodruff, K.; Yang, T.; Zeller, G. P.; Zennamo, J.; Zhang, C.

    2017-03-01

    We present several studies of convolutional neural networks applied to data coming from the MicroBooNE detector, a liquid argon time projection chamber (LArTPC). The algorithms studied include the classification of single particle images, the localization of single particle and neutrino interactions in an image, and the detection of a simulated neutrino event overlaid with cosmic ray backgrounds taken from real detector data. These studies demonstrate the potential of convolutional neural networks for particle identification or event detection on simulated neutrino interactions. We also address technical issues that arise when applying this technique to data from a large LArTPC at or near ground level.
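
    A toy PyTorch sketch of a small convolutional classifier for single-particle images, included to make the classification task concrete. The architecture, image size, and the five assumed particle classes are illustrative only and bear no relation to the actual MicroBooNE networks or their training data.

```python
import torch
import torch.nn as nn

class TinyParticleCNN(nn.Module):
    """Toy convolutional classifier for, e.g., 5 single-particle classes
    (electron, photon, muon, pion, proton). Depth and image size are illustrative."""
    def __init__(self, n_classes=5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, n_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = TinyParticleCNN()
fake_batch = torch.randn(8, 1, 64, 64)   # stand-in for cropped wire-plane images
logits = model(fake_batch)
print(logits.shape)                      # torch.Size([8, 5])
```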

  11. Analyzing Single-Event Gate Ruptures In Power MOSFET's

    NASA Technical Reports Server (NTRS)

    Zoutendyk, John A.

    1993-01-01

    Susceptibilities of power metal-oxide/semiconductor field-effect transistors (MOSFETs) to single-event gate ruptures are analyzed by exposing devices to beams of energetic bromine ions while applying appropriate bias voltages to the source, gate, and drain terminals and measuring the current flowing into or out of each terminal.

  12. Source Characteristics of the Northern Longitudinal Valley, Taiwan Derived from Broadband Strong-Motion Simulation

    NASA Astrophysics Data System (ADS)

    Wen, Yi-Ying

    2018-02-01

    The 2014 ML 5.9 Fanglin earthquake occurred at the northern end of the aftershock distribution of the 2013 ML 6.4 Ruisui event and caused strong ground shaking and some damage in the northern part of the Longitudinal Valley. We carried out the strong-motion simulation of the 2014 Fanglin event in the broadband frequency range (0.4-10 Hz) using the empirical Green's function method and then integrated the source models to investigate the source characteristics of the 2013 Ruisui and 2014 Fanglin events. The results show that the dimension of the strong motion generation area of the 2013 Ruisui event is smaller, whereas that of the 2014 Fanglin event is comparable with the empirical estimation for inland crustal earthquakes, which indicates different faulting behaviors. Furthermore, the localized high PGV patch might be caused by the radiation energy amplified by the local low-velocity structure in the northern Longitudinal Valley. Further study is required to build up knowledge of the potential seismic hazard related to moderate-to-large events in the various seismogenic areas of Taiwan.

  13. Modeling of ESD events from polymeric surfaces

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pfeifer, Kent Bryant

    2014-03-01

    Transient electrostatic discharge (ESD) events are studied to assemble a predictive model of discharge from polymer surfaces. An analog circuit simulation is produced and its response is compared to various literature sources to explore its capabilities and limitations. Results suggest that polymer ESD events can be predicted to within an order of magnitude. These results compare well to empirical findings from other sources having similar reproducibility.
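
    A crude lumped-element sketch of the kind of analog discharge waveform such circuit models produce: a tribocharged capacitance discharging through a series resistance. The component values below are human-body-model-like placeholders chosen for illustration; they are not the parameters of the model in the record.

```python
import numpy as np

def esd_discharge(v0, c_farad, r_ohm, t_end, n=1000):
    """Current waveform of a charged capacitance discharging through a resistance:
    i(t) = (V0/R) * exp(-t/(R*C)). A crude stand-in for an ESD event model."""
    t = np.linspace(0.0, t_end, n)
    return t, (v0 / r_ohm) * np.exp(-t / (r_ohm * c_farad))

# Illustrative values: 100 pF, 1.5 kOhm, 8 kV surface potential
t, i = esd_discharge(v0=8.0e3, c_farad=100e-12, r_ohm=1.5e3, t_end=1.0e-6)
print(f"peak current ~ {i.max():.2f} A, time constant ~ {1.5e3 * 100e-12 * 1e9:.0f} ns")
```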

  14. Determination of Barometric Altimeter Errors for the Orion Exploration Flight Test-1 Entry

    NASA Technical Reports Server (NTRS)

    Brown, Denise L.; Bunoz, Jean-Philippe; Gay, Robert

    2012-01-01

    The Exploration Flight Test 1 (EFT-1) mission is the unmanned flight test for the upcoming Multi-Purpose Crew Vehicle (MPCV). During entry, the EFT-1 vehicle will trigger several Landing and Recovery System (LRS) events, such as parachute deployment, based on on-board altitude information. The primary altitude source is the filtered navigation solution updated with GPS measurement data. The vehicle also has three barometric altimeters that will be used to measure atmospheric pressure during entry. In the event that GPS data is not available during entry, the altitude derived from the barometric altimeter pressure will be used to trigger chute deployment for the drogues and main parachutes. Therefore it is important to understand the impact of error sources on the pressure measured by the barometric altimeters and on the altitude derived from that pressure. The error sources for the barometric altimeters are not independent, and many error sources result in bias in a specific direction. Therefore conventional error budget methods could not be applied. Instead, high fidelity Monte-Carlo simulation was performed and error bounds were determined based on the results of this analysis. Aerodynamic errors were the largest single contributor to the error budget for the barometric altimeters. The large errors drove a change to the altitude trigger setpoint for FBC jettison deploy.
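
    A minimal Monte Carlo sketch of propagating non-independent pressure errors into altitude error through the standard-atmosphere pressure-altitude relation. The error magnitudes (a shared aerodynamic bias plus per-sample sensor noise) are made-up placeholders and the sketch is not the EFT-1 analysis; only the ISA troposphere formula is standard.

```python
import numpy as np

T0, L, P0 = 288.15, 0.0065, 101325.0        # ISA sea-level temperature (K), lapse rate (K/m), pressure (Pa)
EXP = 8.31446 * L / (9.80665 * 0.0289644)   # R*L/(g*M) ~ 0.1903

def pressure_to_altitude(p_pa):
    """Standard-atmosphere (troposphere) pressure altitude in metres."""
    return (T0 / L) * (1.0 - (p_pa / P0) ** EXP)

rng = np.random.default_rng(0)
n = 100_000
p_true = 26_500.0                           # roughly 10 km altitude, illustrative
aero_bias = rng.normal(0.0, 300.0)          # Pa, common to all samples in one run (correlated error)
sensor_noise = rng.normal(0.0, 100.0, n)    # Pa, independent per sample
alt_err = pressure_to_altitude(p_true + aero_bias + sensor_noise) - pressure_to_altitude(p_true)
print(f"mean altitude error {alt_err.mean():.1f} m, 3-sigma {3 * alt_err.std():.1f} m")
```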

  15. Laser Scanner Tests For Single-Event Upsets

    NASA Technical Reports Server (NTRS)

    Kim, Quiesup; Soli, George A.; Schwartz, Harvey R.

    1992-01-01

    The microelectronic advanced laser scanner (MEALS) is an opto/electro/mechanical apparatus for nondestructive testing of integrated memory circuits, logic circuits, and other microelectronic devices. It is a multipurpose diagnostic system used to determine ultrafast time response, leakage, latchup, and electrical overstress. It is used to simulate some of the effects of heavy ions accelerated to high energies in order to determine the susceptibility of a digital device to single-event upsets.

  16. Application of the WEPS and SWEEP models to non-agricultural disturbed lands.

    PubMed

    Tatarko, J; van Donk, S J; Ascough, J C; Walker, D G

    2016-12-01

    Wind erosion not only affects agricultural productivity but also soil, air, and water quality. Dust and specifically particulate matter ≤10 μm (PM-10) has adverse effects on respiratory health and also reduces visibility along roadways, resulting in auto accidents. The Wind Erosion Prediction System (WEPS) was developed by the USDA-Agricultural Research Service to simulate wind erosion and provide for conservation planning on cultivated agricultural lands. A companion product, known as the Single-Event Wind Erosion Evaluation Program (SWEEP), has also been developed which consists of the stand-alone WEPS erosion submodel combined with a graphical interface to simulate soil loss from single (i.e., daily) wind storm events. In addition to agricultural lands, wind driven dust emissions also occur from other anthropogenic sources such as construction sites, mined and reclaimed areas, landfills, and other disturbed lands. Although developed for agricultural fields, WEPS and SWEEP are useful tools for simulating erosion by wind for non-agricultural lands where typical agricultural practices are not employed. On disturbed lands, WEPS can be applied for simulating long-term (i.e., multi-year) erosion control strategies. SWEEP on the other hand was developed specifically for disturbed lands and can simulate potential soil loss for site- and date-specific planned surface conditions and control practices. This paper presents novel applications of WEPS and SWEEP for developing erosion control strategies on non-agricultural disturbed lands. Erosion control planning with WEPS and SWEEP using water and other dust suppressants, wind barriers, straw mulch, re-vegetation, and other management practices is demonstrated herein through the use of comparative simulation scenarios. The scenarios confirm the efficacy of the WEPS and SWEEP models as valuable tools for supporting the design of erosion control plans for disturbed lands that are not only cost-effective but also incorporate a science-based approach to risk assessment.

  17. Estimation of Dynamic Friction Process of the Akatani Landslide Based on the Waveform Inversion and Numerical Simulation

    NASA Astrophysics Data System (ADS)

    Yamada, M.; Mangeney, A.; Moretti, L.; Matsushi, Y.

    2014-12-01

    Understanding physical parameters such as frictional coefficients, velocity change, and dynamic history is an important issue for assessing and managing the risks posed by deep-seated catastrophic landslides. Previously, landslide motion has been inferred qualitatively from topographic changes caused by the event, and occasionally from eyewitness reports. However, these conventional approaches are unable to evaluate source processes and dynamic parameters. In this study, we use broadband seismic recordings to trace the dynamic process of the deep-seated Akatani landslide that occurred on the Kii Peninsula, Japan, which is one of the best recorded large slope failures. Based on the previous results of waveform inversions and precise topographic surveys done before and after the event, we applied numerical simulations using the SHALTOP numerical model (Mangeney et al., 2007). This model describes homogeneous continuous granular flows on a 3D topography based on a depth-averaged thin layer approximation. We assume a Coulomb friction law with a constant friction coefficient, i.e., the friction is independent of the sliding velocity. We varied the friction coefficients in the simulation so that the resulting force acting on the surface agrees with the single force estimated from the seismic waveform inversion. Comparing the force histories of the east-west components after band-pass filtering between 10 and 100 s shows that the simulation with a frictional coefficient of 0.27 best agrees with the result of the seismic waveform inversion. Although the amplitudes differ slightly, the phases are coherent for the three main pulses. This is evidence that the point-source approximation works reasonably well for this particular event. The friction coefficient during sliding was estimated to be 0.38 based on the seismic waveform inversion performed by the previous study and on the sliding block model (Yamada et al., 2013), whereas the frictional coefficient estimated from the numerical simulation was about 0.27. This discrepancy may be due to the digital elevation model or to other forces, such as pressure gradients and centrifugal acceleration, included in the model. However, quantitative interpretation of this difference requires further investigation.
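
    A minimal sliding-block sketch with a constant Coulomb friction coefficient on a uniform slope, of the kind used to relate an assumed friction coefficient to the bulk force the sliding mass exerts on the ground. It is not the SHALTOP model (no 3D topography, no thin-layer flow); the slope angle, mass, and duration are illustrative placeholders.

```python
import numpy as np

def sliding_block(mu, slope_deg, mass_kg=1.0e10, g=9.81, dt=0.1, t_max=30.0):
    """Velocity and horizontal basal-force estimate for a rigid block on an
    inclined plane with Coulomb friction (friction independent of velocity)."""
    theta = np.radians(slope_deg)
    a_drive = g * (np.sin(theta) - mu * np.cos(theta))   # along-slope acceleration
    t = np.arange(0.0, t_max, dt)
    v = np.maximum(a_drive, 0.0) * t                     # block moves only if driving exceeds friction
    # Horizontal component of the reaction force of the accelerating mass on the ground
    f_horizontal = -mass_kg * a_drive * np.cos(theta) if a_drive > 0 else 0.0
    return t, v, f_horizontal

for mu in (0.27, 0.38):
    t, v, f = sliding_block(mu, slope_deg=25.0)
    print(f"mu={mu:.2f}  velocity at {t[-1]:.0f} s = {v[-1]:.1f} m/s  horizontal force = {f / 1e9:.2f} GN")
```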

  18. 3D Thermal and Mechanical Analysis of a Single Event Burnout

    NASA Astrophysics Data System (ADS)

    Peretti, Gabriela; Demarco, Gustavo; Romero, Eduardo; Tais, Carlos

    2015-08-01

    This paper presents a study of the thermal and mechanical behavior of power DMOS transistors during a Single Event Burnout (SEB) process. We use a cylindrical heat generation region to emulate the thermal and mechanical phenomena related to the SEB. In this way, the complexity of the mathematical treatment of the ion-device interaction is avoided. This work considers locating the heat generation region in positions that are more realistic than the ones used in previous work. To perform the study, we formulate and validate a new 3D model for the transistor that keeps the computational cost at a reasonable level. The resulting mathematical models are solved by means of the Finite Element Method. The simulation results show that the failure dynamics is dominated by the mechanical stress in the metal layer. Additionally, the time to failure depends on the heat source position, for a given power and dimension of the generation region. The results suggest that 3D modeling should be considered for a detailed study of thermal and mechanical effects induced by SEBs.

  19. Classifier for gravitational-wave inspiral signals in nonideal single-detector data

    NASA Astrophysics Data System (ADS)

    Kapadia, S. J.; Dent, T.; Dal Canton, T.

    2017-11-01

    We describe a multivariate classifier for candidate events in a templated search for gravitational-wave (GW) inspiral signals from neutron-star-black-hole (NS-BH) binaries, in data from ground-based detectors where sensitivity is limited by non-Gaussian noise transients. The standard signal-to-noise ratio (SNR) and chi-squared test for inspiral searches use only properties of a single matched filter at the time of an event; instead, we propose a classifier using features derived from a bank of inspiral templates around the time of each event, and also from a search using approximate sine-Gaussian templates. The classifier thus extracts additional information from strain data to discriminate inspiral signals from noise transients. We evaluate a random forest classifier on a set of single-detector events obtained from realistic simulated advanced LIGO data, using simulated NS-BH signals added to the data. The new classifier detects a factor of 1.5-2 more signals at low false positive rates as compared to the standard "reweighted SNR" statistic, and does not require the chi-squared test to be computed. Conversely, if only the SNR and chi-squared values of single-detector events are available, random forest classification performs nearly identically to the reweighted SNR.
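
    A minimal scikit-learn sketch of a random-forest classifier trained on per-event features such as SNR, reduced chi-squared, and neighbouring-template statistics. The features and labels below are synthetic stand-ins generated for illustration; they are not LIGO data and the feature construction is not the authors' pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def synth_events(n, is_signal):
    """Synthetic stand-in features per event: [SNR, reduced chi^2, plus four
    statistics mimicking neighbouring-template / sine-Gaussian responses]."""
    snr = rng.gamma(4.0 if is_signal else 2.0, 2.0, n)
    chisq = rng.gamma(2.0, 0.6 if is_signal else 1.5, n)
    extra = rng.normal(1.0 if is_signal else 0.0, 1.0, (n, 4))
    return np.column_stack([snr, chisq, extra])

X = np.vstack([synth_events(4000, False), synth_events(1000, True)])
y = np.concatenate([np.zeros(4000), np.ones(1000)])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
scores = clf.predict_proba(X_te)[:, 1]     # rank candidate events by signal probability
print("mean score (noise, signal):",
      scores[y_te == 0].mean().round(3), scores[y_te == 1].mean().round(3))
```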

  20. Convolutional neural networks applied to neutrino events in a liquid argon time projection chamber

    DOE PAGES

    Acciarri, R.; Adams, C.; An, R.; ...

    2017-03-14

    Here, we present several studies of convolutional neural networks applied to data coming from the MicroBooNE detector, a liquid argon time projection chamber (LArTPC). The algorithms studied include the classification of single particle images, the localization of single particle and neutrino interactions in an image, and the detection of a simulated neutrino event overlaid with cosmic ray backgrounds taken from real detector data. These studies demonstrate the potential of convolutional neural networks for particle identification or event detection on simulated neutrino interactions. Lastly, we also address technical issues that arise when applying this technique to data from a large LArTPC at or near ground level.

  1. Design and Implementation of a Motor Incremental Shaft Encoder

    DTIC Science & Technology

    2008-09-01

    SDC Student Design Center VHDL Verilog Hardware Description Language VSC Voltage Source Converters ZCE Zero Crossing Event xiii EXECUTIVE...student to make accurate predictions of voltage source converters (VSC) behavior via software simulation; these simulated results could also be... VSC), and several other off-the-shelf components, a circuit board interface between FPGA and the power source, and a desktop computer [1]. Now, the

  2. A new time-space accounting scheme to predict stream water residence time and hydrograph source components at the watershed scale

    Treesearch

    Takahiro Sayama; Jeffrey J. McDonnell

    2009-01-01

    Hydrograph source components and stream water residence time are fundamental behavioral descriptors of watersheds but, as yet, are poorly represented in most rainfall-runoff models. We present a new time-space accounting scheme (T-SAS) to simulate the pre-event and event water fractions, mean residence time, and spatial source of streamflow at the watershed scale. We...

  3. A comprehensive simulation framework for imaging single particles and biomolecules at the European X-ray Free-Electron Laser

    PubMed Central

    Yoon, Chun Hong; Yurkov, Mikhail V.; Schneidmiller, Evgeny A.; Samoylova, Liubov; Buzmakov, Alexey; Jurek, Zoltan; Ziaja, Beata; Santra, Robin; Loh, N. Duane; Tschentscher, Thomas; Mancuso, Adrian P.

    2016-01-01

    The advent of newer, brighter, and more coherent X-ray sources, such as X-ray Free-Electron Lasers (XFELs), represents a tremendous growth in the potential to apply coherent X-rays to determine the structure of materials from the micron-scale down to the Angstrom-scale. There is a significant need for a multi-physics simulation framework to perform source-to-detector simulations for a single particle imaging experiment, including (i) the multidimensional simulation of the X-ray source; (ii) simulation of the wave-optics propagation of the coherent XFEL beams; (iii) atomistic modelling of photon-material interactions; (iv) simulation of the time-dependent diffraction process, including incoherent scattering; (v) assembling noisy and incomplete diffraction intensities into a three-dimensional data set using the Expansion-Maximisation-Compression (EMC) algorithm and (vi) phase retrieval to obtain structural information. We demonstrate the framework by simulating a single-particle experiment for a nitrogenase iron protein using parameters of the SPB/SFX instrument of the European XFEL. This exercise demonstrably yields interpretable consequences for structure determination that are crucial yet currently unavailable for experiment design. PMID:27109208

  4. A comprehensive simulation framework for imaging single particles and biomolecules at the European X-ray Free-Electron Laser

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yoon, Chun Hong; Yurkov, Mikhail V.; Schneidmiller, Evgeny A.

    The advent of newer, brighter, and more coherent X-ray sources, such as X-ray Free-Electron Lasers (XFELs), represents a tremendous growth in the potential to apply coherent X-rays to determine the structure of materials from the micron-scale down to the Angstrom-scale. There is a significant need for a multi-physics simulation framework to perform source-to-detector simulations for a single particle imaging experiment, including (i) the multidimensional simulation of the X-ray source; (ii) simulation of the wave-optics propagation of the coherent XFEL beams; (iii) atomistic modelling of photon-material interactions; (iv) simulation of the time-dependent diffraction process, including incoherent scattering; (v) assembling noisy and incomplete diffraction intensities into a three-dimensional data set using the Expansion-Maximisation-Compression (EMC) algorithm and (vi) phase retrieval to obtain structural information. Furthermore, we demonstrate the framework by simulating a single-particle experiment for a nitrogenase iron protein using parameters of the SPB/SFX instrument of the European XFEL. This exercise demonstrably yields interpretable consequences for structure determination that are crucial yet currently unavailable for experiment design.

  5. A comprehensive simulation framework for imaging single particles and biomolecules at the European X-ray Free-Electron Laser

    DOE PAGES

    Yoon, Chun Hong; Yurkov, Mikhail V.; Schneidmiller, Evgeny A.; ...

    2016-04-25

    The advent of newer, brighter, and more coherent X-ray sources, such as X-ray Free-Electron Lasers (XFELs), represents a tremendous growth in the potential to apply coherent X-rays to determine the structure of materials from the micron-scale down to the Angstrom-scale. There is a significant need for a multi-physics simulation framework to perform source-to-detector simulations for a single particle imaging experiment, including (i) the multidimensional simulation of the X-ray source; (ii) simulation of the wave-optics propagation of the coherent XFEL beams; (iii) atomistic modelling of photon-material interactions; (iv) simulation of the time-dependent diffraction process, including incoherent scattering; (v) assembling noisy and incomplete diffraction intensities into a three-dimensional data set using the Expansion-Maximisation-Compression (EMC) algorithm and (vi) phase retrieval to obtain structural information. Furthermore, we demonstrate the framework by simulating a single-particle experiment for a nitrogenase iron protein using parameters of the SPB/SFX instrument of the European XFEL. This exercise demonstrably yields interpretable consequences for structure determination that are crucial yet currently unavailable for experiment design.

  6. The Ongoing and Open-Ended Simulation

    ERIC Educational Resources Information Center

    Cohen, Alexander

    2016-01-01

    This case study explores a novel form of classroom simulation that differs from published examples in two important respects. First, it is ongoing. While most simulations represent a single learning episode embedded within a course, the ongoing simulation is a continuous set of interrelated events and decisions that accompany learning throughout…

  7. Toward real-time regional earthquake simulation of Taiwan earthquakes

    NASA Astrophysics Data System (ADS)

    Lee, S.; Liu, Q.; Tromp, J.; Komatitsch, D.; Liang, W.; Huang, B.

    2013-12-01

    We developed a Real-time Online earthquake Simulation system (ROS) to simulate regional earthquakes in Taiwan. The ROS uses a centroid moment tensor solution of seismic events from a Real-time Moment Tensor monitoring system (RMT), which provides all the point source parameters including the event origin time, hypocentral location, moment magnitude and focal mechanism within 2 minutes after the occurrence of an earthquake. Then, all of the source parameters are automatically forwarded to the ROS to perform an earthquake simulation, which is based on a spectral-element method (SEM). We have improved SEM mesh quality by introducing a thin high-resolution mesh layer near the surface to accommodate steep and rapidly varying topography. The mesh for the shallow sedimentary basin is adjusted to reflect its complex geometry and sharp lateral velocity contrasts. The grid resolution at the surface is about 545 m, which is sufficient to resolve topography and tomography data for simulations accurate up to 1.0 Hz. The ROS is also an infrastructural service, making online earthquake simulation feasible. Users can conduct their own earthquake simulation by providing a set of source parameters through the ROS webpage. For visualization, a ShakeMovie and ShakeMap are produced during the simulation. The time needed for one event is roughly 3 minutes for a 70 sec ground motion simulation. The ROS is operated online at the Institute of Earth Sciences, Academia Sinica (http://ros.earth.sinica.edu.tw/). Our long-term goal for the ROS system is to contribute to public earth science outreach and to realize seismic ground motion prediction in real-time.

  8. Single event test methodology for integrated optoelectronics

    NASA Technical Reports Server (NTRS)

    Label, Kenneth A.; Cooley, James A.; Stassinopoulos, E. G.; Marshall, Paul; Crabtree, Christina

    1993-01-01

    A single event upset (SEU), defined as a transient or glitch on the output of a device, and its applicability to integrated optoelectronics are discussed in the context of spacecraft design and the need for more than a bit error rate viewpoint for testing and analysis. A methodology for testing integrated optoelectronic receivers and transmitters for SEUs is presented, focusing on the actual test requirements and system schemes needed for integrated optoelectronic devices. Two main causes of single event effects in the space environment, including protons and galactic cosmic rays, are considered along with ground test facilities for simulating the space environment.

  9. Systematic study of target localization for bioluminescence tomography guided radiation therapy

    PubMed Central

    Yu, Jingjing; Zhang, Bin; Iordachita, Iulian I.; Reyes, Juvenal; Lu, Zhihao; Brock, Malcolm V.; Patterson, Michael S.; Wong, John W.

    2016-01-01

    Purpose: To overcome the limitation of CT/cone-beam CT (CBCT) in guiding radiation for soft tissue targets, the authors developed a spectrally resolved bioluminescence tomography (BLT) system for the small animal radiation research platform. The authors systematically assessed the performance of the BLT system in terms of target localization and the ability to resolve two neighboring sources in simulations, tissue-mimicking phantom, and in vivo environments. Methods: Multispectral measurements acquired in a single projection were used for the BLT reconstruction. The incomplete variables truncated conjugate gradient algorithm with an iterative permissible region shrinking strategy was employed as the optimization scheme to reconstruct source distributions. Simulation studies were conducted for single spherical sources with sizes from 0.5 to 3 mm radius at depth of 3–12 mm. The same configuration was also applied for the double source simulation with source separations varying from 3 to 9 mm. Experiments were performed in a standalone BLT/CBCT system. Two self-illuminated sources with 3 and 4.7 mm separations placed inside a tissue-mimicking phantom were chosen as the test cases. Live mice implanted with single-source at 6 and 9 mm depth, two sources at 3 and 5 mm separation at depth of 5 mm, or three sources in the abdomen were also used to illustrate the localization capability of the BLT system for multiple targets in vivo. Results: For simulation study, approximate 1 mm accuracy can be achieved at localizing center of mass (CoM) for single-source and grouped CoM for double source cases. For the case of 1.5 mm radius source, a common tumor size used in preclinical study, their simulation shows that for all the source separations considered, except for the 3 mm separation at 9 and 12 mm depth, the two neighboring sources can be resolved at depths from 3 to 12 mm. Phantom experiments illustrated that 2D bioluminescence imaging failed to distinguish two sources, but BLT can provide 3D source localization with approximately 1 mm accuracy. The in vivo results are encouraging that 1 and 1.7 mm accuracy can be attained for the single-source case at 6 and 9 mm depth, respectively. For the 2 sources in vivo study, both sources can be distinguished at 3 and 5 mm separations, and approximately 1 mm localization accuracy can also be achieved. Conclusions: This study demonstrated that their multispectral BLT/CBCT system could be potentially applied to localize and resolve multiple sources at wide range of source sizes, depths, and separations. The average accuracy of localizing CoM for single-source and grouped CoM for double sources is approximately 1 mm except deep-seated target. The information provided in this study can be instructive to devise treatment margins for BLT-guided irradiation. These results also suggest that the 3D BLT system could guide radiation for the situation with multiple targets, such as metastatic tumor models. PMID:27147371

  10. PARALLAX AND ORBITAL EFFECTS IN ASTROMETRIC MICROLENSING WITH BINARY SOURCES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nucita, A. A.; Paolis, F. De; Ingrosso, G.

    2016-06-01

    In gravitational microlensing, binary systems may act as lenses or sources. Identifying lens binarity is generally easy, in particular in events characterized by caustic crossing since the resulting light curve exhibits strong deviations from a smooth single-lensing light curve. In contrast, light curves with minor deviations from a Paczyński behavior do not allow one to identify the source binarity. A consequence of gravitational microlensing is the shift of the position of the multiple image centroid with respect to the source star location, the so-called astrometric microlensing signal. When the astrometric signal is considered, the presence of a binary source manifests with a path that largely differs from that expected for single source events. Here, we investigate the astrometric signatures of binary sources taking into account their orbital motion and the parallax effect due to the Earth's motion, which turn out not to be negligible in most cases. We also show that considering the above-mentioned effects is important in the analysis of astrometric data in order to correctly estimate the lens-event parameters.

  11. Single-Event Effects in High-Frequency Linear Amplifiers: Experiment and Analysis

    NASA Astrophysics Data System (ADS)

    Zeinolabedinzadeh, Saeed; Ying, Hanbin; Fleetwood, Zachary E.; Roche, Nicolas J.-H.; Khachatrian, Ani; McMorrow, Dale; Buchner, Stephen P.; Warner, Jeffrey H.; Paki-Amouzou, Pauline; Cressler, John D.

    2017-01-01

    The single-event transient (SET) response of two different silicon-germanium (SiGe) X-band (8-12 GHz) low noise amplifier (LNA) topologies is fully investigated in this paper. The two LNAs were designed and implemented in a 130 nm SiGe HBT BiCMOS process technology. Two-photon absorption (TPA) laser pulses were utilized to induce transients within various devices in these LNAs. Impulse response theory is identified as a useful tool for predicting the settling behavior of the LNAs subjected to heavy ion strikes. Comprehensive device- and circuit-level modeling and simulations were performed to accurately simulate the behavior of the circuits under ion strikes. The simulations agree well with TPA measurements. The simulation, modeling and analysis presented in this paper can be applied to other circuit topologies for SET modeling and prediction.

  12. Toward real-time regional earthquake simulation II: Real-time Online earthquake Simulation (ROS) of Taiwan earthquakes

    NASA Astrophysics Data System (ADS)

    Lee, Shiann-Jong; Liu, Qinya; Tromp, Jeroen; Komatitsch, Dimitri; Liang, Wen-Tzong; Huang, Bor-Shouh

    2014-06-01

    We developed a Real-time Online earthquake Simulation system (ROS) to simulate regional earthquakes in Taiwan. The ROS uses a centroid moment tensor solution of seismic events from a Real-time Moment Tensor monitoring system (RMT), which provides all the point source parameters including the event origin time, hypocentral location, moment magnitude and focal mechanism within 2 min after the occurrence of an earthquake. Then, all of the source parameters are automatically forwarded to the ROS to perform an earthquake simulation, which is based on a spectral-element method (SEM). A new island-wide, high resolution SEM mesh model is developed for the whole Taiwan in this study. We have improved SEM mesh quality by introducing a thin high-resolution mesh layer near the surface to accommodate steep and rapidly varying topography. The mesh for the shallow sedimentary basin is adjusted to reflect its complex geometry and sharp lateral velocity contrasts. The grid resolution at the surface is about 545 m, which is sufficient to resolve topography and tomography data for simulations accurate up to 1.0 Hz. The ROS is also an infrastructural service, making online earthquake simulation feasible. Users can conduct their own earthquake simulation by providing a set of source parameters through the ROS webpage. For visualization, a ShakeMovie and ShakeMap are produced during the simulation. The time needed for one event is roughly 3 min for a 70 s ground motion simulation. The ROS is operated online at the Institute of Earth Sciences, Academia Sinica (http://ros.earth.sinica.edu.tw/). Our long-term goal for the ROS system is to contribute to public earth science outreach and to realize seismic ground motion prediction in real-time.

  13. Retrieving both phase and amplitude information of Green's functions by ambient seismic wave field cross-correlation: A case study with a limestone mine induced seismic event

    NASA Astrophysics Data System (ADS)

    Kwak, S.; Song, S. G.; Kim, G.; Shin, J. S.

    2015-12-01

    Recently, many seismologists have paid attention to the ambient seismic field, which is no longer regarded as noise but as Earth's hum, a useful signal for understanding subsurface seismic velocity structure. It has also been demonstrated that empirical Green's functions can be constructed by retrieving both phase and amplitude information from the ambient seismic field (Prieto and Beroza 2008). The constructed empirical Green's functions can be used to predict strong ground motions after focal depth and double-couple mechanism corrections (Denolle et al. 2013). They do not require a detailed subsurface velocity model or intensive computation for ground motion simulation. In this study, we investigate the capability of predicting long-period surface waves from the ambient seismic wave field with a seismic event of Mw 4.0, which occurred with a limestone mine collapse in South Korea on January 31, 2015. This limestone-mine event provides an excellent opportunity to test the efficiency of the ambient seismic wave field in retrieving both phase and amplitude information of Green's functions due to the single force mechanism of the collapse event. In other words, neither focal depth nor double-couple mechanism corrections are required for this event. A broadband seismic station, which is about 5.4 km away from the mine event, is selected as a source station. Then surface waves retrieved from the ambient seismic wave field cross-correlation are compared with those generated by the event. Our preliminary results show some potential of the ambient seismic wave field in retrieving both phase and amplitude of Green's functions from a single-force impulse source at the Earth's surface. More comprehensive analysis by increasing the time length of stacking may improve the results in further studies. We also aim to investigate the efficiency of retrieving the full empirical Green's functions with the 2007 Mw 4.6 Odaesan earthquake, which is one of the strongest earthquakes that occurred in South Korea in the last decade.

  14. On simulating large earthquakes by Green's-function addition of smaller earthquakes

    NASA Astrophysics Data System (ADS)

    Joyner, William B.; Boore, David M.

    Simulation of ground motion from large earthquakes has been attempted by a number of authors using small earthquakes (subevents) as Green's functions and summing them, generally in a random way. We present a simple model for the random summation of subevents to illustrate how seismic scaling relations can be used to constrain methods of summation. In the model, η identical subevents are added together with their start times randomly distributed over the source duration T and their waveforms scaled by a factor κ. The subevents can be considered to be distributed on a fault with later start times at progressively greater distances from the focus, simulating the irregular propagation of a coherent rupture front. For simplicity the distance between source and observer is assumed large compared to the source dimensions of the simulated event. By proper choice of η and κ the spectrum of the simulated event deduced from these assumptions can be made to conform at both low- and high-frequency limits to any arbitrary seismic scaling law. For the ω^-2 model with similarity (that is, with constant M0 f0^3 scaling, where f0 is the corner frequency), the required values are η = (M0/M0e)^(4/3) and κ = (M0/M0e)^(-1/3), where M0 is the moment of the simulated event and M0e is the moment of the subevent. The spectra resulting from other choices of η and κ will not conform at both high and low frequency. If η is determined by the ratio of the rupture area of the simulated event to that of the subevent and κ = 1, the simulated spectrum will conform at high frequency to the ω^-2 model with similarity, but not at low frequency. Because the high-frequency part of the spectrum is generally the important part for engineering applications, however, this choice of values for η and κ may be satisfactory in many cases. If η is determined by the ratio of the moment of the simulated event to that of the subevent and κ = 1, the simulated spectrum will conform at low frequency to the ω^-2 model with similarity, but not at high frequency. Interestingly, the high-frequency scaling implied by this latter choice of η and κ corresponds to an ω^-2 model with constant M0 f0^4 scaling, a law proposed by Nuttli, although questioned recently by Haar and others. Simple scaling with κ equal to unity and η equal to the moment ratio would work if the high-frequency spectral decay were ω^-1.5 instead of ω^-2. Just the required decay is exhibited by the stochastic source model recently proposed by Joyner, if the dislocation-time function is deconvolved out of the spectrum. Simulated motions derived from such source models could be used as subevents rather than recorded motions as is usually done. This strategy is a promising approach to simulation of ground motion from an extended rupture.
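
    A minimal sketch of the random summation scheme described in this record: η scaled copies of a subevent record are superposed with start times drawn uniformly over the source duration T, with η and κ set from the moment ratio as given above. The subevent waveform, moment ratio, and duration below are synthetic placeholders, not recorded data.

```python
import numpy as np

rng = np.random.default_rng(0)

def sum_subevents(subevent, dt, n_sub, kappa, duration_s):
    """Superpose n_sub copies of a subevent record, each scaled by kappa and
    delayed by a start time drawn uniformly from [0, duration_s]."""
    out = np.zeros(int(duration_s / dt) + subevent.size)
    for _ in range(n_sub):
        i0 = int(rng.uniform(0.0, duration_s) / dt)
        out[i0:i0 + subevent.size] += kappa * subevent
    return out

dt = 0.01
t = np.arange(0, 5, dt)
subevent = np.exp(-t / 0.5) * np.sin(2 * np.pi * 2.0 * t)   # synthetic small-event record

moment_ratio = 100.0                               # M0 / M0e, illustrative
eta = int(round(moment_ratio ** (4.0 / 3.0)))      # eta = (M0/M0e)^(4/3)
kappa = moment_ratio ** (-1.0 / 3.0)               # kappa = (M0/M0e)^(-1/3)
big = sum_subevents(subevent, dt, eta, kappa, duration_s=20.0)
print(f"eta={eta}, kappa={kappa:.3f}, peak amplitude={np.abs(big).max():.2f}")
```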

  15. Collective odor source estimation and search in time-variant airflow environments using mobile robots.

    PubMed

    Meng, Qing-Hao; Yang, Wei-Xing; Wang, Yang; Zeng, Ming

    2011-01-01

    This paper addresses the collective odor source localization (OSL) problem in a time-varying airflow environment using mobile robots. A novel OSL methodology which combines odor-source probability estimation and multiple robots' search is proposed. The estimation phase consists of two steps: firstly, the separate probability-distribution map of odor source is estimated via Bayesian rules and fuzzy inference based on a single robot's detection events; secondly, the separate maps estimated by different robots at different times are fused into a combined map by way of distance based superposition. The multi-robot search behaviors are coordinated via a particle swarm optimization algorithm, where the estimated odor-source probability distribution is used to express the fitness functions. In the process of OSL, the estimation phase provides the prior knowledge for the searching while the searching verifies the estimation results, and both phases are implemented iteratively. The results of simulations for large-scale advection-diffusion plume environments and experiments using real robots in an indoor airflow environment validate the feasibility and robustness of the proposed OSL method.
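
    A minimal particle swarm optimization sketch in which the fitness is an assumed (Gaussian) odor-source probability map standing in for the Bayesian/fuzzy estimate; it is not the published multi-robot coordination algorithm, and the source position, arena size, and PSO coefficients are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
SOURCE = np.array([7.0, 3.0])                 # hidden source location (illustrative)

def fitness(p):
    """Stand-in for the estimated odor-source probability at position p."""
    return np.exp(-np.sum((p - SOURCE) ** 2) / 4.0)

n_robots, w, c1, c2 = 6, 0.6, 1.5, 1.5
pos = rng.uniform(0, 10, (n_robots, 2))
vel = np.zeros_like(pos)
pbest, pbest_f = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_f.argmax()].copy()

for _ in range(100):
    r1, r2 = rng.random((2, n_robots, 1))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0, 10)           # keep robots inside the arena
    f = np.array([fitness(p) for p in pos])
    better = f > pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
    gbest = pbest[pbest_f.argmax()].copy()

print("estimated source position:", gbest.round(2))   # should approach SOURCE
```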

  16. Collective Odor Source Estimation and Search in Time-Variant Airflow Environments Using Mobile Robots

    PubMed Central

    Meng, Qing-Hao; Yang, Wei-Xing; Wang, Yang; Zeng, Ming

    2011-01-01

    This paper addresses the collective odor source localization (OSL) problem in a time-varying airflow environment using mobile robots. A novel OSL methodology which combines odor-source probability estimation and multiple robots’ search is proposed. The estimation phase consists of two steps: firstly, the separate probability-distribution map of odor source is estimated via Bayesian rules and fuzzy inference based on a single robot’s detection events; secondly, the separate maps estimated by different robots at different times are fused into a combined map by way of distance based superposition. The multi-robot search behaviors are coordinated via a particle swarm optimization algorithm, where the estimated odor-source probability distribution is used to express the fitness functions. In the process of OSL, the estimation phase provides the prior knowledge for the searching while the searching verifies the estimation results, and both phases are implemented iteratively. The results of simulations for large-scale advection–diffusion plume environments and experiments using real robots in an indoor airflow environment validate the feasibility and robustness of the proposed OSL method. PMID:22346650

  17. Application of RADSAFE to Model Single Event Upset Response of a 0.25 micron CMOS SRAM

    NASA Technical Reports Server (NTRS)

    Warren, Kevin M.; Weller, Robert A.; Sierawski, Brian; Reed, Robert A.; Mendenhall, Marcus H.; Schrimpf, Ronald D.; Massengill, Lloyd; Porter, Mark; Wilkerson, Jeff; LaBel, Kenneth A.

    2006-01-01

    The RADSAFE simulation framework is described and applied to model Single Event Upsets (SEU) in a 0.25 micron CMOS 4Mbit Static Random Access Memory (SRAM). For this circuit, the RADSAFE approach produces trends similar to those expected from classical models, but more closely represents the physical mechanisms responsible for SEU in the SRAM circuit.

  18. Dual Interlocked Logic for Single-Event Transient Mitigation

    DTIC Science & Technology

    2017-03-01

    SPICE simulation and fault-injection analysis. Exemplar SPICE simulations have been performed in a 32 nm partially-depleted silicon-on-insulator...in this work. The model has been validated at the 32 nm SOI technology node with extensive heavy-ion data [7]. For the SPICE simulations, three

  19. 137Cs activities and 135Cs/137Cs isotopic ratios from soils at Idaho National Laboratory: a case study for contaminant source attribution in the vicinity of nuclear facilities.

    PubMed

    Snow, Mathew S; Snyder, Darin C; Clark, Sue B; Kelley, Morgan; Delmore, James E

    2015-03-03

    Radiometric and mass spectrometric analyses of Cs contamination in the environment can reveal the location of Cs emission sources, release mechanisms, modes of transport, prediction of future contamination migration, and attribution of contamination to specific generator(s) and/or process(es). The Subsurface Disposal Area (SDA) at Idaho National Laboratory (INL) represents a complicated case study for demonstrating the current capabilities and limitations to environmental Cs analyses. (137)Cs distribution patterns, (135)Cs/(137)Cs isotope ratios, known Cs chemistry at this site, and historical records enable narrowing the list of possible emission sources and release events to a single source and event, with the SDA identified as the emission source and flood transport of material from within Pit 9 and Trench 48 as the primary release event. These data combined allow refining the possible number of waste generators from dozens to a single generator, with INL on-site research and reactor programs identified as the most likely waste generator. A discussion on the ultimate limitations to the information that (135)Cs/(137)Cs ratios alone can provide is presented and includes (1) uncertainties in the exact date of the fission event and (2) possibility of mixing between different Cs source terms (including nuclear weapons fallout and a source of interest).

  20. 137 Cs Activities and 135 Cs/ 137 Cs Isotopic Ratios from Soils at Idaho National Laboratory: A Case Study for Contaminant Source Attribution in the Vicinity of Nuclear Facilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Snow, Mathew S.; Snyder, Darin C.; Clark, Sue B.

    2015-03-03

    Radiometric and mass spectrometric analyses of Cs contamination in the environment can reveal the location of Cs emission sources, release mechanisms, modes of transport, prediction of future contamination migration, and attribution of contamination to specific generator(s) and/or process(es). The Subsurface Disposal Area (SDA) at Idaho National Laboratory (INL) represents a complicated case study for demonstrating the current capabilities and limitations to environmental Cs analyses. 137Cs distribution patterns, 135Cs/137Cs isotope ratios, known Cs chemistry at this site, and historical records enable narrowing the list of possible emission sources and release events to a single source and event, with the SDA identified as the emission source and flood transport of material from within Pit 9 and Trench 48 as the primary release event. These data combined allow refining the possible number of waste generators from dozens to a single generator, with INL on-site research and reactor programs identified as the most likely waste generator. A discussion on the ultimate limitations to the information that 135Cs/137Cs ratios alone can provide is presented and includes (1) uncertainties in the exact date of the fission event and (2) possibility of mixing between different Cs source terms (including nuclear weapons fallout and a source of interest).

  1. The scientific challenges to forecasting the propagation of space weather through the heliosphere (Invited)

    NASA Astrophysics Data System (ADS)

    van der Holst, B.; Manchester, W.; Sokolov, I.; Toth, G.; Gombosi, T. I.

    2013-12-01

    Coronal mass ejections (CMEs) are a major source of potentially destructive space weather conditions. Understanding and forecasting these events are of utmost importance. In this presentation we discuss the progress towards a physics-based predictive capability within the Space Weather Modeling Framework (SWMF). We demonstrate our latest development in the AWSoM (Alfven Wave Solar Model) global model of the solar corona and inner heliosphere. This model accounts for the coupled thermodynamics of the electrons and protons via single fluid magnetohydrodynamics. The coronal heating and solar wind acceleration are addressed with Alfvén wave turbulence. The realistic 3D magnetic field is simulated using data from the photospheric magnetic field measurements. The AWSoM model serves as a workhorse for modeling CMEs from initial eruption to prediction at 1AU. With selected events we will demonstrate the complexity and challenges associated with CME propagation.

  2. The elementary events of Ca2+ release elicited by membrane depolarization in mammalian muscle.

    PubMed

    Csernoch, L; Zhou, J; Stern, M D; Brum, G; Ríos, E

    2004-05-15

    Cytosolic [Ca(2+)] transients elicited by voltage clamp depolarization were examined by confocal line scanning of rat skeletal muscle fibres. Ca(2+) sparks were observed in the fibres' membrane-permeabilized ends, but not in responses to voltage in the membrane-intact area. Elementary events of the depolarization-evoked response could be separated either at low voltages (near -50 mV) or at -20 mV in partially inactivated cells. These were of lower amplitude, narrower and of much longer duration than sparks, similar to 'lone embers' observed in the permeabilized segments. Their average amplitude was 0.19 and spatial half-width 1.3 microm. Other parameters depended on voltage. At -50 mV average duration was 111 ms and latency 185 ms. At -20 mV duration was 203 ms and latency 24 ms. Ca(2+) release current, calculated on an average of events, was nearly steady at 0.5-0.6 pA. Accordingly, simulations of the fluorescence event elicited by a subresolution source of 0.5 pA open for 100 ms had morphology similar to the experimental average. Because 0.5 pA is approximately the current measured for single RyR channels in physiological conditions, the elementary fluorescence events in rat muscle probably reflect opening of a single RyR channel. A reconstruction of cell-averaged release flux at -20 mV based on the observed distribution of latencies and calculated elementary release had qualitatively correct but slower kinetics than the release flux in prior whole-cell measurements. The qualitative agreement indicates that global Ca(2+) release flux results from summation of these discrete events. The quantitative discrepancies suggest that the partial inactivation strategy may lead to events of greater duration than those occurring physiologically in fully polarized cells.

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Acciarri, R.; Adams, C.; An, R.

    Here, we present several studies of convolutional neural networks applied to data coming from the MicroBooNE detector, a liquid argon time projection chamber (LArTPC). The algorithms studied include the classification of single particle images, the localization of single particle and neutrino interactions in an image, and the detection of a simulated neutrino event overlaid with cosmic ray backgrounds taken from real detector data. These studies demonstrate the potential of convolutional neural networks for particle identification or event detection on simulated neutrino interactions. Lastly, we also address technical issues that arise when applying this technique to data from a large LArTPC at or near ground level.

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Acciarri, R.; Adams, C.; An, R.

    We present several studies of convolutional neural networks applied to data coming from the MicroBooNE detector, a liquid argon time projection chamber (LArTPC). The algorithms studied include the classification of single particle images, the localization of single particle and neutrino interactions in an image, and the detection of a simulated neutrino event overlaid with cosmic ray backgrounds taken from real detector data. These studies demonstrate the potential of convolutional neural networks for particle identification or event detection on simulated neutrino interactions. We also address technical issues that arise when applying this technique to data from a large LArTPC at or near ground level.

  5. Single-Event Upset (SEU) model verification and threshold determination using heavy ions in a bipolar static RAM

    NASA Technical Reports Server (NTRS)

    Zoutendyk, J. A.; Smith, L. S.; Soli, G. A.; Thieberger, P.; Wegner, H. E.

    1985-01-01

    Single-Event Upset (SEU) response of a bipolar low-power Schottky-diode-clamped TTL static RAM has been observed using Br ions in the 100-240 MeV energy range and O ions in the 20-100 MeV range. These data complete the experimental verification of circuit-simulation SEU modeling for this device. The threshold for onset of SEU has been observed by the variation of energy, ion species and angle of incidence. The results obtained from the computer circuit-simulation modeling and experimental model verification demonstrate a viable methodology for modeling SEU in bipolar integrated circuits.

  6. Event-related potential variations in the encoding and retrieval of different amounts of contextual information.

    PubMed

    Estrada-Manilla, Cinthya; Cansino, Selene

    2012-06-15

    Episodic memory events occur within multidimensional contexts; however, the electrophysiological manifestations associated with processing more than one context have rarely been investigated. The effect of the amount of context on the ERPs was studied using two single-source and one double-source memory tasks and by comparing full and partial context retrieval within the double-source task. The single-source tasks elicited waveforms with a larger amplitude during successful encoding and retrieval than the double-source task. Compared with the waveforms elicited with a full source response, a partial source response elicited waveforms with a smaller amplitude, probably because the retrieval success for one context was combined with the retrieval attempt processes for the missing source. Comparing the tasks revealed that the larger the amount of contextual information processed, the smaller the amplitude of the ERPs, indicating that greater effort or further control processes were required during double-source retrieval. Copyright © 2012 Elsevier B.V. All rights reserved.

  7. Thermal Conductivity of Single-Walled Carbon Nanotube with Internal Heat Source Studied by Molecular Dynamics Simulation

    NASA Astrophysics Data System (ADS)

    Li, Yuan-Wei; Cao, Bing-Yang

    2013-12-01

    The thermal conductivity of (5, 5) single-walled carbon nanotubes (SWNTs) with an internal heat source is investigated by using nonequilibrium molecular dynamics (NEMD) simulation incorporating uniform heat source and heat source-and-sink schemes. Compared with SWNTs without an internal heat source, i.e., by a fixed-temperature difference scheme, the thermal conductivity of SWNTs with an internal heat source is much lower, by as much as half in some cases, though it still increases with an increase of the tube length. Based on the theory of phonon dynamics, a function called the phonon free path distribution is defined to develop a simple one-dimensional heat conduction model considering an internal heat source, which can explain diffusive-ballistic heat transport in carbon nanotubes well.
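
    The continuum analogue of the uniform-heat-source scheme gives a feel for how an effective conductivity is extracted: for a 1D conductor with both ends held at the bath temperature and a uniform volumetric source q, the steady profile is T(x) = T0 + q*x*(L - x)/(2k), so k follows from the peak temperature rise. The sketch below uses assumed, illustrative numbers rather than values from the NEMD runs.

```python
# Continuum (Fourier-law) analogue of the uniform-heat-source scheme, not the
# NEMD procedure itself: for a 1D conductor of length L with both ends at the
# bath temperature and a uniform volumetric source q, the peak temperature
# rise is dT_max = q * L**2 / (8 * k), so an effective conductivity can be
# backed out of the simulated temperature profile.  Numbers are assumptions.
def conductivity_from_peak_rise(q_vol, length, dT_max):
    """Effective thermal conductivity (W/m/K) from the mid-point temperature rise."""
    return q_vol * length**2 / (8.0 * dT_max)

q_vol  = 2.0e18     # W/m^3, volumetric heat-source density (assumed)
length = 200e-9     # m, tube length (assumed)
dT_max = 30.0       # K, peak temperature rise at the tube centre (assumed)

print(f"k_eff ~ {conductivity_from_peak_rise(q_vol, length, dT_max):.0f} W/m/K")
```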

  8. Characterizing SRAM Single Event Upset in Terms of Single and Double Node Charge Collection

    NASA Technical Reports Server (NTRS)

    Black, J. D.; Ball, D. R., II; Robinson, W. H.; Fleetwood, D. M.; Schrimpf, R. D.; Reed, R. A.; Black, D. A.; Warren, K. M.; Tipton, A. D.; Dodd, P. E.; hide

    2008-01-01

    A well-collapse source-injection mode for SRAM SEU is demonstrated through TCAD modeling. The recovery of the SRAM's state is shown to be based upon the resistive path from the p+-sources in the SRAM to the well. Multiple cell upset patterns for direct charge collection and the well-collapse source-injection mechanisms are then predicted and compared to recent SRAM test data.

  9. Long period seismic signals observed before the Caldera formation during the 2000 Miyake- jima volcanic activity

    NASA Astrophysics Data System (ADS)

    Ohminato, T.; Kobayashi, T.; Ida, Y.; Fujita, E.

    2006-12-01

    During the 2000 Miyake-jima volcanic activity, which started on 26 June 2000, an intense earthquake swarm initially occurred beneath the southwest flank near the summit and gradually migrated west of the island. Volcanic earthquake activity on the island was then reactivated beneath the summit, leading to a summit eruption with significant summit subsidence on 8 July. We detected numerous small long-period (LP) seismic signals during these activities. Most of them include both 0.2 and 0.4 Hz components, suggesting the existence of a harmonic oscillator. Some have a dominant frequency peak at 0.2 Hz (LP1), while others have one at 0.4 Hz (LP2). At the beginning of each LP1 and LP2 waveform, an impulsive signal with a pulse width of about 2 s is clearly identified. The major axis of the particle motion for the initial impulsive signal is almost horizontal, suggesting a shallow source beneath the summit, while the inclined particle motion of the latter phase suggests a deeper source beneath the island. For both LP1 and LP2, we identify a clear positive correlation between the amplitude of the initial pulse and that of the latter phase. We conducted waveform inversions for the LP events assuming a point source and determined the locations and mechanisms simultaneously. We assumed three types of source mechanisms: three single forces, six moment tensor components, and a combination of moment tensor and single forces, and used the Akaike information criterion (AIC) to select the optimal solutions. First, we applied the method to the entire waveform, including both the initial pulse and the latter phase. The source type combining moment tensor and single force components yields the minimum AIC values for both LP events. However, the spatial distribution of the residual errors tends to have two local minima. Considering the error distribution and the characteristic particle motions, it is likely that the source of each LP event consists of two different parts. We therefore divided the LP events into two parts, the initial and latter phases, and applied the same waveform inversion procedure separately to each part. The inversion results show that the initial impulsive phase and the latter oscillatory phase are well explained by a nearly horizontal single force and a moment tensor solution, respectively. The single-force solutions for the initial pulse are located at a depth of about 2 km beneath the summit; the force is initially oriented to the north and then to the south. The sources of the moment solutions, on the other hand, are significantly deeper than the single-force solutions. The hypocenter of the latter phase of LP1 is located at a depth of 5.5 km in the southern region of the island, while that of LP2 is at 5.1 km beneath the summit. Horizontal oscillations are relatively dominant for both the LP1 and LP2 events. Although the two sources are separated from each other by several kilometers, the positive correlation between the amplitudes of the initial pulse and the latter phase strongly suggests that the shallow sources trigger the deeper sources. The source time histories of the six moment tensor components of the latter portion of LP1 and LP2 are not in phase, which makes it difficult to extract information on source geometry from the amplitude ratios among moment tensor components in the traditional manner. This may suggest that the source is composed of two independent sources whose oscillations are out of phase.

  10. Improving Aircraft Refueling Procedures at Naval Air Station Oceana

    DTIC Science & Technology

    2012-06-01

    Station (NAS) Oceana, VA, using aircraft waiting time for fuel as a measure of performance. We develop a computer-assisted discrete-event simulation to...
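
    The core of such a model is a single-server queue advanced event by event; a minimal sketch (with assumed exponential arrival and service rates, not the NAS Oceana data) is shown below.

```python
import random

# Minimal single-server discrete-event sketch (not the study's model):
# aircraft arrive at one refueling point, wait if it is busy, and are fueled
# in FIFO order.  The rates below are illustrative assumptions; the statistic
# of interest is the mean waiting time for fuel.
random.seed(1)

def mean_wait(n_aircraft=100_000, arrival_rate=1 / 20.0, service_rate=1 / 15.0):
    clock, server_free_at, total_wait = 0.0, 0.0, 0.0
    for _ in range(n_aircraft):
        clock += random.expovariate(arrival_rate)    # next arrival (minutes)
        start = max(clock, server_free_at)           # wait if the server is busy
        total_wait += start - clock
        server_free_at = start + random.expovariate(service_rate)
    return total_wait / n_aircraft

print(f"mean wait for fuel ~ {mean_wait():.1f} minutes")
```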

  11. Systematic study of target localization for bioluminescence tomography guided radiation therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu, Jingjing; Zhang, Bin; Reyes, Juvenal

    Purpose: To overcome the limitation of CT/cone-beam CT (CBCT) in guiding radiation for soft tissue targets, the authors developed a spectrally resolved bioluminescence tomography (BLT) system for the small animal radiation research platform. The authors systematically assessed the performance of the BLT system in terms of target localization and the ability to resolve two neighboring sources in simulations, a tissue-mimicking phantom, and in vivo environments. Methods: Multispectral measurements acquired in a single projection were used for the BLT reconstruction. The incomplete variables truncated conjugate gradient algorithm with an iterative permissible-region shrinking strategy was employed as the optimization scheme to reconstruct source distributions. Simulation studies were conducted for single spherical sources with radii from 0.5 to 3 mm at depths of 3-12 mm. The same configuration was also applied for the double-source simulations, with source separations varying from 3 to 9 mm. Experiments were performed in a standalone BLT/CBCT system. Two self-illuminated sources with 3 and 4.7 mm separations placed inside a tissue-mimicking phantom were chosen as the test cases. Live mice implanted with a single source at 6 and 9 mm depth, two sources at 3 and 5 mm separation at a depth of 5 mm, or three sources in the abdomen were also used to illustrate the localization capability of the BLT system for multiple targets in vivo. Results: For the simulation study, approximately 1 mm accuracy can be achieved in localizing the center of mass (CoM) for single sources and the grouped CoM for the double-source cases. For the case of a 1.5 mm radius source, a common tumor size used in preclinical studies, their simulations show that for all the source separations considered, except for the 3 mm separation at 9 and 12 mm depth, the two neighboring sources can be resolved at depths from 3 to 12 mm. Phantom experiments illustrated that 2D bioluminescence imaging failed to distinguish the two sources, but BLT can provide 3D source localization with approximately 1 mm accuracy. The in vivo results are encouraging in that 1 and 1.7 mm accuracy can be attained for the single-source case at 6 and 9 mm depth, respectively. For the two-source in vivo study, both sources can be distinguished at 3 and 5 mm separations, and approximately 1 mm localization accuracy can also be achieved. Conclusions: This study demonstrated that their multispectral BLT/CBCT system could potentially be applied to localize and resolve multiple sources over a wide range of source sizes, depths, and separations. The average accuracy of localizing the CoM for single sources and the grouped CoM for double sources is approximately 1 mm, except for deep-seated targets. The information provided in this study can be instructive for devising treatment margins for BLT-guided irradiation. These results also suggest that the 3D BLT system could guide radiation in situations with multiple targets, such as metastatic tumor models.
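
    To illustrate the inverse problem in outline only (the paper's incomplete variables truncated conjugate gradient algorithm and permissible-region shrinking are not reproduced), BLT reconstruction amounts to recovering a nonnegative voxel source distribution from surface measurements given a forward sensitivity matrix, as in the toy nonnegative least-squares example below; the matrix and source values are synthetic stand-ins.

```python
import numpy as np
from scipy.optimize import nnls

# Toy sketch, not the authors' optimization scheme: given a forward
# (sensitivity) matrix A mapping voxel source strengths to multispectral
# surface measurements, recover a nonnegative source distribution from noisy
# data by nonnegative least squares.
rng = np.random.default_rng(0)

n_meas, n_vox = 120, 60
A = rng.random((n_meas, n_vox))          # stand-in for a diffusion-model forward matrix
x_true = np.zeros(n_vox)
x_true[[20, 21, 40]] = [1.0, 0.8, 0.5]   # a few "bioluminescent" voxels (assumed)
b = A @ x_true + 0.01 * rng.standard_normal(n_meas)

x_rec, residual = nnls(A, b)
print("strongest recovered voxels:", np.argsort(x_rec)[-3:][::-1])
```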

  12. A Bivariate Mixed Distribution with a Heavy-tailed Component and its Application to Single-site Daily Rainfall Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Chao ..; Singh, Vijay P.; Mishra, Ashok K.

    2013-02-06

    This paper presents an improved bivariate mixed distribution, which is capable of modeling the dependence of daily rainfall from two distinct sources (e.g., rainfall from two stations, two consecutive days, or two instruments such as satellite and rain gauge). The distribution couples an existing framework for building a bivariate mixed distribution, the theory of copulas, and a hybrid marginal distribution. Contributions of the improved distribution are twofold. One is the appropriate selection of the bivariate dependence structure from a wider admissible choice (10 candidate copula families). The other is the introduction of a marginal distribution capable of better representing low to moderate values as well as extremes of daily rainfall. Among several applications of the improved distribution, particularly presented here is its utility for single-site daily rainfall simulation. Rather than simulating rainfall occurrences and amounts separately, the developed generator unifies the two processes by treating daily rainfall as a Markov process with autocorrelation described by the improved bivariate mixed distribution. The generator is first tested on a sample station in Texas. Results reveal that the simulated and observed sequences are in good agreement with respect to essential characteristics. Then, extensive simulation experiments are carried out to compare the developed generator with three alternative models: the conventional two-state Markov chain generator, the transition probability matrix model, and the semi-parametric Markov chain model with kernel density estimation for rainfall amounts. The analyses establish that, overall, the developed generator is capable of reproducing the characteristics of historical extreme rainfall events and is apt at extrapolating rare values beyond the upper range of the available observations. Moreover, it automatically captures the persistence of rainfall amounts on consecutive wet days in a relatively natural and easy way. Another interesting observation is that the recognized ‘overdispersion’ problem in daily rainfall simulation owes more to the loss of rainfall extremes than to the under-representation of first-order persistence. The developed generator appears to be a sound option for daily rainfall simulation, especially in hydrologic planning situations where rare rainfall events are of great importance.
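
    A stripped-down sketch of the general construction (a Gaussian copula with a zero-inflated gamma marginal, far simpler than the paper's ten candidate copula families and hybrid marginal; all parameter values below are assumptions) shows how rainfall occurrence and amount can be generated in a single Markov process.

```python
import numpy as np
from scipy import stats

# Simplified sketch of the general idea, not the paper's model: daily rainfall
# on consecutive days forms a first-order Markov process whose day-to-day
# dependence comes from a Gaussian copula and whose marginal is mixed
# ("zero-inflated"): P(dry) = p0, wet amounts ~ Gamma.  Copula family,
# marginal, and all parameter values are illustrative assumptions.
rng = np.random.default_rng(42)

p0, shape, scale, rho = 0.6, 0.7, 8.0, 0.5   # dry prob., gamma params, lag-1 corr.

def marginal_ppf(u):
    """Inverse CDF of the mixed marginal: 0 with prob p0, else Gamma."""
    u = np.asarray(u)
    wet = u > p0
    amounts = np.zeros_like(u)
    amounts[wet] = stats.gamma.ppf((u[wet] - p0) / (1 - p0), a=shape, scale=scale)
    return amounts

# Gaussian-copula Markov chain: z_t is a stationary AR(1) with correlation rho.
n_days = 3650
z = np.empty(n_days)
z[0] = rng.standard_normal()
eps = rng.standard_normal(n_days)
for t in range(1, n_days):
    z[t] = rho * z[t - 1] + np.sqrt(1 - rho**2) * eps[t]

rain = marginal_ppf(stats.norm.cdf(z))
print(f"wet-day fraction {np.mean(rain > 0):.2f}, "
      f"mean wet-day amount {rain[rain > 0].mean():.1f} mm")
```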

  13. Studies Of Single-Event-Upset Models

    NASA Technical Reports Server (NTRS)

    Zoutendyk, J. A.; Smith, L. S.; Soli, G. A.

    1988-01-01

    Report presents latest in series of investigations of "soft" bit errors known as single-event upsets (SEU). In this investigation, SEU response of low-power, Schottky-diode-clamped, transistor/transistor-logic (TTL) static random-access memory (RAM) observed during irradiation by Br and O ions in ranges of 100 to 240 and 20 to 100 MeV, respectively. Experimental data complete verification of computer model used to simulate SEU in this circuit.

  14. Multithreaded Stochastic PDES for Reactions and Diffusions in Neurons.

    PubMed

    Lin, Zhongwei; Tropper, Carl; Mcdougal, Robert A; Patoary, Mohammand Nazrul Ishlam; Lytton, William W; Yao, Yiping; Hines, Michael L

    2017-07-01

    Cells exhibit stochastic behavior when the number of molecules is small. Hence a stochastic reaction-diffusion simulator capable of working at scale can provide a more accurate view of molecular dynamics within the cell. This paper describes a parallel discrete event simulator, Neuron Time Warp-Multi Thread (NTW-MT), developed for the simulation of reaction diffusion models of neurons. To the best of our knowledge, this is the first parallel discrete event simulator oriented towards stochastic simulation of chemical reactions in a neuron. The simulator was developed as part of the NEURON project. NTW-MT is optimistic and thread-based, which attempts to capitalize on multi-core architectures used in high performance machines. It makes use of a multi-level queue for the pending event set and a single roll-back message in place of individual anti-messages to disperse contention and decrease the overhead of processing rollbacks. Global Virtual Time is computed asynchronously both within and among processes to get rid of the overhead for synchronizing threads. Memory usage is managed in order to avoid locking and unlocking when allocating and de-allocating memory and to maximize cache locality. We verified our simulator on a calcium buffer model. We examined its performance on a calcium wave model, comparing it to the performance of a process based optimistic simulator and a threaded simulator which uses a single priority queue for each thread. Our multi-threaded simulator is shown to achieve superior performance to these simulators. Finally, we demonstrated the scalability of our simulator on a larger CICR model and a more detailed CICR model.

  15. Validation of SWEEP for creep, saltation, and suspension in a desert-oasis ecotone

    NASA Astrophysics Data System (ADS)

    Pi, H.; Sharratt, B.; Feng, G.; Lei, J.; Li, X.; Zheng, Z.

    2016-03-01

    Wind erosion in the desert-oasis ecotone can accelerate desertification, but little is known about the susceptibility of the ecotone to wind erosion in the Tarim Basin, despite the basin being a major source of windblown dust in China. The objective of this study was to test the performance of the Single-event Wind Erosion Evaluation Program (SWEEP) in simulating soil loss as creep, saltation, and suspension in a desert-oasis ecotone. Creep, saltation, and suspension were measured and simulated in a desert-oasis ecotone of the Tarim Basin during discrete periods of high winds in spring 2012 and 2013. The model appeared to adequately simulate total soil loss (which ranged from 23 to 2272 g m-2 across sample periods) according to the high index of agreement (d = 0.76). The adequate agreement of SWEEP in simulating total soil loss was due to the good performance of the model (d = 0.71) in simulating creep plus saltation. The SWEEP model, however, inadequately simulated suspension, based upon a low d (⩽0.43). The slope estimates of the regression between simulated and measured suspension, together with the differences in means, suggested that SWEEP underestimated suspension. The adequate simulation of creep plus saltation thus provides reasonable estimates of total soil loss using SWEEP in a desert-oasis environment.
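
    The agreement statistic quoted here is Willmott's index of agreement, d = 1 - sum((P_i - O_i)^2) / sum((|P_i - O_mean| + |O_i - O_mean|)^2); a small sketch with made-up numbers (not SWEEP output) is given below.

```python
import numpy as np

# Sketch of Willmott's index of agreement, d, used in the abstract to compare
# simulated and measured soil loss.  The sample arrays are made-up numbers for
# illustration, not SWEEP results.
def index_of_agreement(sim, obs):
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    obs_mean = obs.mean()
    num = np.sum((sim - obs) ** 2)
    den = np.sum((np.abs(sim - obs_mean) + np.abs(obs - obs_mean)) ** 2)
    return 1.0 - num / den

obs = [23, 150, 480, 900, 2272]   # measured total soil loss, g m^-2 (illustrative)
sim = [30, 120, 520, 700, 2050]   # simulated values (illustrative)
print(f"d = {index_of_agreement(sim, obs):.2f}")
```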

  16. Adaptive Neural Network-Based Event-Triggered Control of Single-Input Single-Output Nonlinear Discrete-Time Systems.

    PubMed

    Sahoo, Avimanyu; Xu, Hao; Jagannathan, Sarangapani

    2016-01-01

    This paper presents a novel adaptive neural network (NN) control of single-input and single-output uncertain nonlinear discrete-time systems under event sampled NN inputs. In this control scheme, the feedback signals are transmitted, and the NN weights are tuned in an aperiodic manner at the event sampled instants. After reviewing the NN approximation property with event sampled inputs, an adaptive state estimator (SE), consisting of linearly parameterized NNs, is utilized to approximate the unknown system dynamics in an event sampled context. The SE is viewed as a model and its approximated dynamics and the state vector, during any two events, are utilized for the event-triggered controller design. An adaptive event-trigger condition is derived by using both the estimated NN weights and a dead-zone operator to determine the event sampling instants. This condition both facilitates the NN approximation and reduces the transmission of feedback signals. The ultimate boundedness of both the NN weight estimation error and the system state vector is demonstrated through the Lyapunov approach. As expected, during an initial online learning phase, events are observed more frequently. Over time with the convergence of the NN weights, the inter-event times increase, thereby lowering the number of triggered events. These claims are illustrated through the simulation results.

  17. Proactive modeling of water quality impacts of extreme precipitation events in a drinking water reservoir.

    PubMed

    Jeznach, Lillian C; Hagemann, Mark; Park, Mi-Hyun; Tobiason, John E

    2017-10-01

    Extreme precipitation events are of concern to managers of drinking water sources because these occurrences can affect both water supply quantity and quality. However, little is known about how these low probability events impact organic matter and nutrient loads to surface water sources and how these loads may impact raw water quality. This study describes a method for evaluating the sensitivity of a water body of interest from watershed input simulations under extreme precipitation events. An example application of the method is illustrated using the Wachusett Reservoir, an oligo-mesotrophic surface water reservoir in central Massachusetts and a major drinking water supply to metropolitan Boston. Extreme precipitation event simulations during the spring and summer resulted in total organic carbon, UV-254 (a surrogate measurement for reactive organic matter), and total algae concentrations at the drinking water intake that exceeded recorded maximums. Nutrient concentrations after storm events were less likely to exceed recorded historical maximums. For this particular reservoir, increasing inter-reservoir transfers of water with lower organic matter content after a large precipitation event has been shown in practice and in model simulations to decrease organic matter levels at the drinking water intake, therefore decreasing treatment associated oxidant demand, energy for UV disinfection, and the potential for formation of disinfection byproducts. Copyright © 2017 Elsevier Ltd. All rights reserved.

  18. SINGLE EVENT EFFECTS TEST FACILITY AT OAK RIDGE NATIONAL LABORATORY

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Riemer, Bernie; Gallmeier, Franz X; Dominik, Laura J

    2015-01-01

    Increasing use of microelectronics of ever diminishing feature size in avionics systems has led to a growing Single Event Effects (SEE) susceptibility arising from the highly ionizing interactions of cosmic rays and solar particles. Single event effects caused by atmospheric radiation have been recognized in recent years as a design issue for avionics equipment and systems. To ensure a system meets all its safety and reliability requirements, SEE induced upsets and potential system failures need to be considered, including testing of the components and systems in a neutron beam. Testing of ICs and systems for use in radiation environments requires the utilization of highly advanced laboratory facilities that can run evaluations on microcircuits for the effects of radiation. This paper provides a background of the atmospheric radiation phenomenon and the resulting single event effects, including single event upset (SEU) and latch up conditions. A study investigating requirements for future single event effect irradiation test facilities and developing options at the Spallation Neutron Source (SNS) is summarized. The relatively new SNS with its 1.0 GeV proton beam, typical operation of 5000 h per year, expertise in spallation neutron sources, user program infrastructure, and decades of useful life ahead is well suited for hosting a world-class SEE test facility in North America. Emphasis was put on testing of large avionics systems while still providing tunable high flux irradiation conditions for component tests. Makers of ground-based systems would also be served well by these facilities. Three options are described; the most capable, flexible, and highest-test-capacity option is a new stand-alone target station using about one kW of proton beam power on a gas-cooled tungsten target, with dual test enclosures. Less expensive options are also described.

  19. Single Event Effects Test Facility Options at the Oak Ridge National Laboratory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Riemer, Bernie; Gallmeier, Franz X; Dominik, Laura J

    2015-01-01

    Increasing use of microelectronics of ever diminishing feature size in avionics systems has led to a growing Single Event Effects (SEE) susceptibility arising from the highly ionizing interactions of cosmic rays and solar particles. Single event effects caused by atmospheric radiation have been recognized in recent years as a design issue for avionics equipment and systems. To ensure a system meets all its safety and reliability requirements, SEE induced upsets and potential system failures need to be considered, including testing of the components and systems in a neutron beam. Testing of integrated circuits (ICs) and systems for use in radiation environments requires the utilization of highly advanced laboratory facilities that can run evaluations on microcircuits for the effects of radiation. This paper provides a background of the atmospheric radiation phenomenon and the resulting single event effects, including single event upset (SEU) and latch up conditions. A study investigating requirements for future single event effect irradiation test facilities and developing options at the Spallation Neutron Source (SNS) is summarized. The relatively new SNS with its 1.0 GeV proton beam, typical operation of 5000 h per year, expertise in spallation neutron sources, user program infrastructure, and decades of useful life ahead is well suited for hosting a world-class SEE test facility in North America. Emphasis was put on testing of large avionics systems while still providing tunable high flux irradiation conditions for component tests. Makers of ground-based systems would also be served well by these facilities. Three options are described; the most capable, flexible, and highest-test-capacity option is a new stand-alone target station using about one kW of proton beam power on a gas-cooled tungsten target, with dual test enclosures. Less expensive options are also described.

  20. Simulations of Cloud-Radiation Interaction Using Large-Scale Forcing Derived from the CINDY/DYNAMO Northern Sounding Array

    NASA Technical Reports Server (NTRS)

    Wang, Shuguang; Sobel, Adam H.; Fridlind, Ann; Feng, Zhe; Comstock, Jennifer M.; Minnis, Patrick; Nordeen, Michele L.

    2015-01-01

    The recently completed CINDY/DYNAMO field campaign observed two Madden-Julian oscillation (MJO) events in the equatorial Indian Ocean from October to December 2011. Prior work has indicated that the moist static energy anomalies in these events grew and were sustained to a significant extent by radiative feedbacks. We present here a study of radiative fluxes and clouds in a set of cloud-resolving simulations of these MJO events. The simulations are driven by the large-scale forcing data set derived from the DYNAMO northern sounding array observations, and carried out in a doubly periodic domain using the Weather Research and Forecasting (WRF) model. Simulated cloud properties and radiative fluxes are compared to those derived from the S-PolKa radar and satellite observations. To accommodate the uncertainty in simulated cloud microphysics, a number of single-moment (1M) and double-moment (2M) microphysical schemes in the WRF model are tested. The 1M schemes tend to underestimate radiative flux anomalies in the active phases of the MJO events, while the 2M schemes perform better, but can overestimate radiative flux anomalies. All the tested microphysics schemes exhibit biases in the shapes of the histograms of radiative fluxes and radar reflectivity. Histograms of radiative fluxes and brightness temperature indicate that radiative biases are not evenly distributed; the most significant bias occurs in rainy areas with OLR less than 150 W m-2 in the 2M schemes. Analysis of simulated radar reflectivities indicates that this radiative flux uncertainty is closely related to the simulated stratiform cloud coverage. Single-moment schemes underestimate stratiform cloudiness by a factor of 2, whereas 2M schemes simulate much more stratiform cloud.

  1. Numerical simulations of Asian dust storms using a coupled climate-aerosol microphysical model

    NASA Astrophysics Data System (ADS)

    Su, Lin; Toon, Owen B.

    2009-07-01

    We have developed a three-dimensional coupled microphysical/climate model based on the National Center for Atmospheric Research Community Atmospheres Model and the University of Colorado/NASA Community Aerosol and Radiation Model for Atmospheres. We have used the model to investigate the sources, removal processes, transport, and optical properties of Asian dust aerosol and its impact on downwind regions. The model simulations are conducted primarily during the time frame of the Aerosol Characterization Experiment-Asia field experiment (March-May 2001) since considerable in situ data are available at that time. Our dust source function follows Ginoux et al. (2001). We modified the dust source function by using the friction velocity instead of the 10-m wind based on wind erosion theory, by adding a size-dependent threshold friction velocity following Marticorena and Bergametti (1995) and by adding a soil moisture correction. A Weibull distribution is implemented to estimate the subgrid-scale wind speed variability. We use eight size bins for mineral dust ranging from 0.1 to 10 μm radius. Generally, the model reproduced the aerosol optical depth retrieved by the ground-based Aerosol Robotic Network (AERONET) Sun photometers at six study sites ranging in location from near the Asian dust sources to the Eastern Pacific region. By constraining the dust complex refractive index from AERONET retrievals near the dust source, we also find the single-scattering albedo to be consistent with AERONET retrievals. However, large regional variations are observed due to local pollution. The timing of dust events is comparable to the National Institute for Environmental Studies (NIES) lidar data in Beijing and Nagasaki. However, the simulated dust aerosols are at higher altitudes than those observed by the NIES lidar.
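
    The role of the Weibull subgrid wind distribution can be illustrated with a toy calculation (assumed shape parameter, mean wind, and threshold; this is not the model's actual dust source function): even when the grid-mean wind is below the emission threshold, part of the subgrid distribution still exceeds it.

```python
import numpy as np
from math import gamma

# Toy illustration of the subgrid-wind idea (not the model's dust source
# function): even when the grid-box mean wind is below the emission threshold,
# a Weibull distribution of subgrid wind speeds places some probability above
# it.  Shape parameter, mean wind, and threshold are assumed values.
rng = np.random.default_rng(7)

u_mean   = 6.0    # m/s, grid-box mean 10-m wind (assumed)
k_shape  = 2.4    # Weibull shape parameter (assumed)
u_thresh = 8.0    # m/s, threshold wind for dust emission (assumed)

# Choose the Weibull scale so that the distribution mean equals u_mean.
scale = u_mean / gamma(1.0 + 1.0 / k_shape)

winds = scale * rng.weibull(k_shape, size=100_000)
print(f"fraction of subgrid winds above threshold: {np.mean(winds > u_thresh):.2%}")
print(f"grid-mean wind alone exceeds threshold:    {u_mean > u_thresh}")
```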

  2. NPE 2010 results - Independent performance assessment by simulated CTBT violation scenarios

    NASA Astrophysics Data System (ADS)

    Ross, O.; Bönnemann, C.; Ceranna, L.; Gestermann, N.; Hartmann, G.; Plenefisch, T.

    2012-04-01

    For verification of compliance with the Comprehensive Nuclear-Test-Ban Treaty (CTBT), the global International Monitoring System (IMS) is currently being built up. The IMS is designed to detect nuclear explosions through their seismic, hydroacoustic, infrasound, and radionuclide signatures. The IMS data are collected, processed into analysis products, and distributed to the state signatories by the International Data Centre (IDC) in Vienna. The state signatories themselves may operate National Data Centers (NDCs), which give technical advice concerning CTBT verification to their governments. NDC Preparedness Exercises (NPEs) are performed regularly to practice the verification procedures for the detection of nuclear explosions in the framework of CTBT monitoring. The initial focus of NPE 2010 was on radionuclide detections and the application of Atmospheric Transport Modeling (ATM) for defining the source region of a radionuclide event. The exercise was triggered by fictitious radioactive noble gas detections, which had been calculated secretly beforehand by forward ATM for a hypothetical xenon release scenario starting at the location and time of a real seismic event. The task for the exercise participants was to find potential source events by atmospheric backtracking and then to analyze the waveform signals of promising candidate events. This study shows one possible solution path for NPE 2010 as it was carried out at the German NDC by a team without prior knowledge of the selected event and release scenario. The ATM Source Receptor Sensitivity (SRS) fields provided by the IDC were evaluated in a logical approach to define probable source regions for several days before the first reported fictitious radioactive xenon finding. Additional information on likely event times was derived from xenon isotopic ratios where applicable. Of the seismic events considered in the potential source region, all except one could be identified as earthquakes by seismological analysis. The remaining event, at Black Thunder Mine, Wyoming, on 23 October at 21:15 UTC, showed clear explosion characteristics. It also caused infrasound detections at one station in Canada. A single-station infrasound localization algorithm led to event localization results comparable in precision to the teleseismic localization. However, the analysis of regional seismological stations gave the most accurate result, with an error ellipse of about 60 square kilometers. Finally, a forward ATM simulation was performed with the candidate event as the source in order to reproduce the original detection scenario. The ATM results showed a simulated station fingerprint in the IMS very similar to the fictitious detections given in the NPE 2010 scenario, an additional confirmation that the event was correctly identified. The event analysis shown for NPE 2010 serves as a successful example of data fusion between radionuclide detection supported by ATM, seismological methodology, and infrasound signal processing.

  3. Computational Modeling Approaches to Multiscale Design of Icephobic Surfaces

    NASA Technical Reports Server (NTRS)

    Tallman, Aaron; Wang, Yan; Vargas, Mario

    2017-01-01

    To aid in the design of surfaces that prevent icing, a model and computational simulation of impact ice formation at the single-droplet scale was implemented. The nucleation of a single supercooled droplet impacting on a substrate, in rime ice conditions, was simulated. Open source computational fluid dynamics (CFD) software was used for the simulation. No existing model simulates the simultaneous impact and freezing of a single supercooled water droplet; for the 10-week project, a low-fidelity feasibility study was the goal.

  4. Characterizing Lenses and Lensed Stars of High-magnification Single-lens Gravitational Microlensing Events with Lenses Passing over Source Stars

    NASA Astrophysics Data System (ADS)

    Choi, J.-Y.; Shin, I.-G.; Park, S.-Y.; Han, C.; Gould, A.; Sumi, T.; Udalski, A.; Beaulieu, J.-P.; Street, R.; Dominik, M.; Allen, W.; Almeida, L. A.; Bos, M.; Christie, G. W.; Depoy, D. L.; Dong, S.; Drummond, J.; Gal-Yam, A.; Gaudi, B. S.; Henderson, C. B.; Hung, L.-W.; Jablonski, F.; Janczak, J.; Lee, C.-U.; Mallia, F.; Maury, A.; McCormick, J.; McGregor, D.; Monard, L. A. G.; Moorhouse, D.; Muñoz, J. A.; Natusch, T.; Nelson, C.; Park, B.-G.; Pogge, R. W.; "TG" Tan, T.-G.; Thornley, G.; Yee, J. C.; μFUN Collaboration; Abe, F.; Barnard, E.; Baudry, J.; Bennett, D. P.; Bond, I. A.; Botzler, C. S.; Freeman, M.; Fukui, A.; Furusawa, K.; Hayashi, F.; Hearnshaw, J. B.; Hosaka, S.; Itow, Y.; Kamiya, K.; Kilmartin, P. M.; Kobara, S.; Korpela, A.; Lin, W.; Ling, C. H.; Makita, S.; Masuda, K.; Matsubara, Y.; Miyake, N.; Muraki, Y.; Nagaya, M.; Nishimoto, K.; Ohnishi, K.; Okumura, T.; Omori, K.; Perrott, Y. C.; Rattenbury, N.; Saito, To.; Skuljan, L.; Sullivan, D. J.; Suzuki, D.; Suzuki, K.; Sweatman, W. L.; Takino, S.; Tristram, P. J.; Wada, K.; Yock, P. C. M.; MOA Collaboration; Szymański, M. K.; Kubiak, M.; Pietrzyński, G.; Soszyński, I.; Poleski, R.; Ulaczyk, K.; Wyrzykowski, Ł.; Kozłowski, S.; Pietrukowicz, P.; OGLE Collaboration; Albrow, M. D.; Bachelet, E.; Batista, V.; Bennett, C. S.; Bowens-Rubin, R.; Brillant, S.; Cassan, A.; Cole, A.; Corrales, E.; Coutures, Ch.; Dieters, S.; Dominis Prester, D.; Donatowicz, J.; Fouqué, P.; Greenhill, J.; Kane, S. R.; Menzies, J.; Sahu, K. C.; Wambsganss, J.; Williams, A.; Zub, M.; PLANET Collaboration; Allan, A.; Bramich, D. M.; Browne, P.; Clay, N.; Fraser, S.; Horne, K.; Kains, N.; Mottram, C.; Snodgrass, C.; Steele, I.; Tsapras, Y.; RoboNet Collaboration; Alsubai, K. A.; Bozza, V.; Burgdorf, M. J.; Calchi Novati, S.; Dodds, P.; Dreizler, S.; Finet, F.; Gerner, T.; Glitrup, M.; Grundahl, F.; Hardis, S.; Harpsøe, K.; Hinse, T. C.; Hundertmark, M.; Jørgensen, U. G.; Kerins, E.; Liebig, C.; Maier, G.; Mancini, L.; Mathiasen, M.; Penny, M. T.; Proft, S.; Rahvar, S.; Ricci, D.; Scarpetta, G.; Schäfer, S.; Schönebeck, F.; Skottfelt, J.; Surdej, J.; Southworth, J.; Zimmer, F.; MiNDSTEp Consortium

    2012-05-01

    We present the analysis of the light curves of nine high-magnification single-lens gravitational microlensing events with lenses passing over source stars, including OGLE-2004-BLG-254, MOA-2007-BLG-176, MOA-2007-BLG-233/OGLE-2007-BLG-302, MOA-2009-BLG-174, MOA-2010-BLG-436, MOA-2011-BLG-093, MOA-2011-BLG-274, OGLE-2011-BLG-0990/MOA-2011-BLG-300, and OGLE-2011-BLG-1101/MOA-2011-BLG-325. For all of the events, we measure the linear limb-darkening coefficients of the surface brightness profile of source stars by measuring the deviation of the light curves near the peak affected by the finite-source effect. For seven events, we measure the Einstein radii and the lens-source relative proper motions. Among them, five events are found to have Einstein radii of less than 0.2 mas, making the lenses very-low-mass star or brown dwarf candidates. For MOA-2011-BLG-274, especially, the small Einstein radius of θE ~ 0.08 mas combined with the short timescale of tE ~ 2.7 days suggests the possibility that the lens is a free-floating planet. For MOA-2009-BLG-174, we measure the lens parallax and thus uniquely determine the physical parameters of the lens. We also find that the measured lens mass of ~0.84 M⊙ is consistent with that of a star blended with the source, suggesting that the blend is likely to be the lens. Although we did not find planetary signals for any of the events, we provide exclusion diagrams showing the confidence levels excluding the existence of a planet as a function of the separation and mass ratio.
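
    The quantities reported here follow from standard point-lens relations: the lens-source relative proper motion is mu_rel = theta_E / t_E, and the lens mass is M = theta_E^2 / (kappa * pi_rel) with kappa ~ 8.14 mas per solar mass. The sketch below uses the theta_E and t_E values quoted for MOA-2011-BLG-274 and an assumed relative parallax purely for illustration.

```python
# Standard point-lens relations (a sketch, not the paper's modeling code):
# theta_E = sqrt(kappa * M * pi_rel) with kappa ~ 8.14 mas/M_sun, and the
# lens-source relative proper motion is mu_rel = theta_E / t_E.  theta_E and
# t_E are the values quoted for MOA-2011-BLG-274; pi_rel is an assumed
# relative parallax used only for illustration.
KAPPA = 8.14                     # mas per solar mass

def lens_mass_msun(theta_e_mas, pi_rel_mas):
    """Lens mass (solar masses) from the Einstein radius and relative parallax."""
    return theta_e_mas**2 / (KAPPA * pi_rel_mas)

theta_e  = 0.08                  # mas, from the abstract
t_e_days = 2.7                   # days, from the abstract

mu_rel = theta_e / (t_e_days / 365.25)              # mas/yr
m_lens = lens_mass_msun(theta_e, pi_rel_mas=0.3)    # assumed pi_rel
print(f"relative proper motion ~ {mu_rel:.1f} mas/yr")
print(f"lens mass for pi_rel = 0.3 mas ~ {m_lens * 1047:.1f} Jupiter masses")
```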

  5. Rapid Monte Carlo Simulation of Gravitational Wave Galaxies

    NASA Astrophysics Data System (ADS)

    Breivik, Katelyn; Larson, Shane L.

    2015-01-01

    With the detection of gravitational waves on the horizon, astrophysical catalogs produced by gravitational wave observatories can be used to characterize the populations of sources and validate different galactic population models. Efforts to simulate gravitational wave catalogs and source populations generally focus on population synthesis models that require extensive time and computational power to produce a single simulated galaxy. Monte Carlo simulations of gravitational wave source populations can also be used to generate observation catalogs from the gravitational wave source population. Monte Carlo simulations have the advantages of flexibility and speed, enabling rapid galactic realizations as a function of galactic binary parameters with less time and computational resources required. We present a Monte Carlo method for rapid galactic simulations of gravitational wave binary populations.

  6. Central magnetic anomalies of Nectarian-aged lunar impact basins: Probable evidence for an early core dynamo

    NASA Astrophysics Data System (ADS)

    Hood, Lon L.

    2011-02-01

    A re-examination of all available low-altitude LP magnetometer data confirms that magnetic anomalies are present in at least four Nectarian-aged lunar basins: Moscoviense, Mendel-Rydberg, Humboldtianum, and Crisium. In three of the four cases, a single main anomaly is present near the basin center while, in the case of Crisium, anomalies are distributed in a semi-circular arc about the basin center. These distributions, together with a lack of other anomalies near the basins, indicate that the sources of the anomalies are genetically associated with the respective basin-forming events. These central basin anomalies are difficult to attribute to shock remanent magnetization of a shocked central uplift and most probably imply thermoremanent magnetization of impact melt rocks in a steady magnetizing field. Iterative forward modeling of the single strongest and most isolated anomaly, the northern Crisium anomaly, yields a paleomagnetic pole position at 81° ± 19°N, 143° ± 31°E, not far from the present rotational pole. Assuming no significant true polar wander since the Crisium impact, this position is consistent with that expected for a core dynamo magnetizing field. Further iterative forward modeling demonstrates that the remaining Crisium anomalies can be approximately simulated assuming a multiple source model with a single magnetization direction equal to that inferred for the northernmost anomaly. This result is most consistent with a steady, large-scale magnetizing field. The inferred mean magnetization intensity within the strongest basin sources is ˜1 A/m assuming a 1-km thickness for the source layer. Future low-altitude orbital and surface magnetometer measurements will more strongly constrain the depth and/or thicknesses of the sources.

  7. WE-FG-BRA-06: Systematic Study of Target Localization for Bioluminescence Tomography Guided Radiation Therapy for Preclinical Research

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, B; Reyes, J; Wong, J

    Purpose: To overcome the limitation of CT/CBCT in guiding radiation for soft tissue targets, we developed a bioluminescence tomography (BLT) system for preclinical radiation research. We systematically assessed the system performance in target localization and the ability to resolve two sources in simulations, a phantom, and in vivo environments. Methods: Multispectral images acquired in a single projection were used for the BLT reconstruction. Simulation studies were conducted for single spherical sources with radii from 0.5 to 3 mm at depths of 3 to 12 mm. The same configuration was also applied for the double-source simulation, with source separations varying from 3 to 9 mm. Experiments were performed in a standalone BLT/CBCT system. Two sources with 3 and 4.7 mm separations placed inside a tissue-mimicking phantom were chosen as the test cases. Live mice implanted with a single source at 6 and 9 mm depth, two sources with 3 and 5 mm separation at a depth of 5 mm, or three sources in the abdomen were also used to illustrate the in vivo localization capability of the BLT system. Results: Simulation and phantom results illustrate that our BLT can provide 3D source localization with approximately 1 mm accuracy. The in vivo results are encouraging in that 1 and 1.7 mm accuracy can be attained for the single-source case at 6 and 9 mm depth, respectively. For the two-source study, both sources can be distinguished at 3 and 5 mm separations with approximately 1 mm accuracy using 3D BLT, but not with 2D bioluminescence imaging. Conclusion: Our BLT/CBCT system can potentially be applied to localize and resolve targets over a wide range of target sizes, depths, and separations. The information provided in this study can be instructive for devising margins for BLT-guided irradiation and suggests that the BLT could guide radiation for multiple targets, such as metastasis. Drs. John W. Wong and Iulian I. Iordachita receive royalty payment from a licensing agreement between Xstrahl Ltd and Johns Hopkins University.

  8. Constraints on Cumulus Parameterization from Simulations of Observed MJO Events

    NASA Technical Reports Server (NTRS)

    Del Genio, Anthony; Wu, Jingbo; Wolf, Audrey B.; Chen, Yonghua; Yao, Mao-Sung; Kim, Daehyun

    2015-01-01

    Two recent activities offer an opportunity to test general circulation model (GCM) convection and its interaction with large-scale dynamics for observed Madden-Julian oscillation (MJO) events. This study evaluates the sensitivity of the Goddard Institute for Space Studies (GISS) GCM to entrainment, rain evaporation, downdrafts, and cold pools. Single Column Model versions that restrict weakly entraining convection produce the most realistic dependence of convection depth on column water vapor (CWV) during the Atmospheric Radiation Measurement MJO Investigation Experiment at Gan Island. Differences among models are primarily at intermediate CWV where the transition from shallow to deeper convection occurs. GCM 20-day hindcasts during the Year of Tropical Convection that best capture the shallow–deep transition also produce strong MJOs, with significant predictability compared to Tropical Rainfall Measuring Mission data. The dry anomaly east of the disturbance on hindcast day 1 is a good predictor of MJO onset and evolution. Initial CWV there is near the shallow–deep transition point, implicating premature onset of deep convection as a predictor of a poor MJO simulation. Convection weakly moistens the dry region in good MJO simulations in the first week; weakening of large-scale subsidence over this time may also affect MJO onset. Longwave radiation anomalies are weakest in the worst model version, consistent with previous analyses of cloud/moisture greenhouse enhancement as the primary MJO energy source. The authors’ results suggest that both cloud-/moisture-radiative interactions and convection–moisture sensitivity are required to produce a successful MJO simulation.

  9. Point-source inversion techniques

    NASA Astrophysics Data System (ADS)

    Langston, Charles A.; Barker, Jeffrey S.; Pavlin, Gregory B.

    1982-11-01

    A variety of approaches for obtaining source parameters from waveform data using moment-tensor or dislocation point source models have been investigated and applied to long-period body and surface waves from several earthquakes. Generalized inversion techniques have been applied to data for long-period teleseismic body waves to obtain the orientation, time function and depth of the 1978 Thessaloniki, Greece, event, of the 1971 San Fernando event, and of several events associated with the 1963 induced seismicity sequence at Kariba, Africa. The generalized inversion technique and a systematic grid testing technique have also been used to place meaningful constraints on mechanisms determined from very sparse data sets; a single station with high-quality three-component waveform data is often sufficient to discriminate faulting type (e.g., strike-slip, etc.). Sparse data sets for several recent California earthquakes, for a small regional event associated with the Koyna, India, reservoir, and for several events at the Kariba reservoir have been investigated in this way. Although linearized inversion techniques using the moment-tensor model are often robust, even for sparse data sets, there are instances where the simplifying assumption of a single point source is inadequate to model the data successfully. Numerical experiments utilizing synthetic data and actual data for the 1971 San Fernando earthquake graphically demonstrate that severe problems may be encountered if source finiteness effects are ignored. These techniques are generally applicable to on-line processing of high-quality digital data, but source complexity and inadequacy of the assumed Green's functions are major problems which are yet to be fully addressed.
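
    In its linear form, the moment-tensor point-source problem reduces to least squares: the data vector is modeled as d = G m, with the columns of G being Green's-function responses to the six independent moment-tensor components. The sketch below uses random stand-in Green's functions simply to show the algebra; it does not reproduce any of the earthquakes analyzed here.

```python
import numpy as np

# Toy linear inversion sketch (not the paper's generalized inversion): for a
# point source, stacked waveform samples are modeled as d = G m, where the
# columns of G are Green's-function responses to the six independent
# moment-tensor components.  G here is random stand-in data.
rng = np.random.default_rng(3)

n_samples, n_mt = 600, 6
G = rng.standard_normal((n_samples, n_mt))
m_true = np.array([1.0, -0.5, -0.5, 0.2, 0.0, 0.3])   # assumed "true" moment tensor
d = G @ m_true + 0.05 * rng.standard_normal(n_samples)

m_est, *_ = np.linalg.lstsq(G, d, rcond=None)
print("recovered moment-tensor components:", np.round(m_est, 2))
```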

  10. Impact of the vaginal applicator and dummy pellets on the dosimetry parameters of Cs-137 brachytherapy source.

    PubMed

    Sina, Sedigheh; Faghihi, Reza; Meigooni, Ali S; Mehdizadeh, Simin; Mosleh Shirazi, M Amin; Zehtabian, Mehdi

    2011-05-19

    In this study, dose rate distribution around a spherical 137Cs pellet source, from a low-dose-rate (LDR) Selectron remote afterloading system used in gynecological brachytherapy, has been determined using experimental and Monte Carlo simulation techniques. Monte Carlo simulations were performed using MCNP4C code, for a single pellet source in water medium and Plexiglas, and measurements were performed in Plexiglas phantom material using LiF TLD chips. Absolute dose rate distribution and the dosimetric parameters, such as dose rate constant, radial dose functions, and anisotropy functions, were obtained for a single pellet source. In order to investigate the effect of the applicator and surrounding pellets on dosimetric parameters of the source, the simulations were repeated for six different arrangements with a single active source and five non-active pellets inside central metallic tubing of a vaginal cylindrical applicator. In commercial treatment planning systems (TPS), the attenuation effects of the applicator and inactive spacers on total dose are neglected. The results indicate that this effect could lead to overestimation of the calculated F(r,θ), by up to 7% along the longitudinal axis of the applicator, especially beyond the applicator tip. According to the results obtained in this study, in a real situation in treatment of patients using cylindrical vaginal applicator and using several active pellets, there will be a large discrepancy between the result of superposition and Monte Carlo simulations.
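
    The dosimetric parameters listed (dose rate constant, radial dose function, anisotropy function) are the ingredients of the TG-43 formalism; in the point-source approximation the dose rate is D(r) = S_K * Lambda * (r0/r)^2 * g(r) * phi_an(r). The sketch below evaluates that expression with placeholder numbers, not the values measured in this study.

```python
# Point-source TG-43 sketch: D(r) = S_k * Lambda * (r0/r)**2 * g(r) * phi_an(r),
# with reference distance r0 = 1 cm.  S_k, Lambda, g(r), and phi_an below are
# placeholder assumptions, not the values measured for the 137Cs pellet.
def dose_rate(r_cm, s_k=10.0, dose_rate_const=1.1, phi_an=1.0, g=None):
    """Dose rate at radial distance r_cm (cm) from a point source."""
    if g is None:
        g = lambda r: 1.0 - 0.05 * (r - 1.0)   # crude linear radial dose function
    return s_k * dose_rate_const * (1.0 / r_cm) ** 2 * g(r_cm) * phi_an

for r in (0.5, 1.0, 2.0, 5.0):
    print(f"r = {r:3.1f} cm : {dose_rate(r):7.3f} (relative units)")
```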

  11. Beam current enhancement of microwave plasma ion source utilizing double-port rectangular cavity resonator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Yuna; Park, Yeong-Shin; Jo, Jong-Gab

    2012-02-15

    Microwave plasma ion source with rectangular cavity resonator has been examined to improve ion beam current by changing wave launcher type from single-port to double-port. The cavity resonators with double-port and single-port wave launchers are designed to get resonance effect at TE-103 mode and TE-102 mode, respectively. In order to confirm that the cavities are acting as resonator, the microwave power for breakdown is measured and compared with the E-field strength estimated from the HFSS (High Frequency Structure Simulator) simulation. Langmuir probe measurements show that double-port cavity enhances central density of plasma ion source by modifying non-uniform plasma density profile of the single-port cavity. Correspondingly, beam current from the plasma ion source utilizing the double-port resonator is measured to be higher than that utilizing single-port resonator. Moreover, the enhancement in plasma density and ion beam current utilizing the double-port resonator is more pronounced as higher microwave power applied to the plasma ion source. Therefore, the rectangular cavity resonator utilizing the double-port is expected to enhance the performance of plasma ion source in terms of ion beam extraction.

  12. Beam current enhancement of microwave plasma ion source utilizing double-port rectangular cavity resonator.

    PubMed

    Lee, Yuna; Park, Yeong-Shin; Jo, Jong-Gab; Yang, J J; Hwang, Y S

    2012-02-01

    Microwave plasma ion source with rectangular cavity resonator has been examined to improve ion beam current by changing wave launcher type from single-port to double-port. The cavity resonators with double-port and single-port wave launchers are designed to get resonance effect at TE-103 mode and TE-102 mode, respectively. In order to confirm that the cavities are acting as resonator, the microwave power for breakdown is measured and compared with the E-field strength estimated from the HFSS (High Frequency Structure Simulator) simulation. Langmuir probe measurements show that double-port cavity enhances central density of plasma ion source by modifying non-uniform plasma density profile of the single-port cavity. Correspondingly, beam current from the plasma ion source utilizing the double-port resonator is measured to be higher than that utilizing single-port resonator. Moreover, the enhancement in plasma density and ion beam current utilizing the double-port resonator is more pronounced as higher microwave power applied to the plasma ion source. Therefore, the rectangular cavity resonator utilizing the double-port is expected to enhance the performance of plasma ion source in terms of ion beam extraction.
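
    The TE-102/TE-103 distinction follows from the textbook resonance condition for an air-filled rectangular cavity, f_mnp = (c/2) * sqrt((m/a)^2 + (n/b)^2 + (p/d)^2): moving from p = 2 to p = 3 at the same operating frequency mainly means a longer cavity. The dimensions in the sketch below are assumed (WR-340-like) values, not those of the source described here.

```python
from math import sqrt

C_LIGHT = 2.998e8    # m/s

# Textbook resonance condition for an air-filled rectangular cavity (sketch;
# the dimensions below are assumed WR-340-like values, not the source's):
# f_mnp = (c/2) * sqrt((m/a)**2 + (n/b)**2 + (p/d)**2).
def te_resonance(m, n, p, a, b, d):
    """Resonant frequency (Hz) of mode TE_mnp for width a, height b, length d (m)."""
    return (C_LIGHT / 2.0) * sqrt((m / a) ** 2 + (n / b) ** 2 + (p / d) ** 2)

a, b = 0.0864, 0.0432                      # cavity cross-section, m (assumed)
for p, d in ((2, 0.173), (3, 0.260)):      # TE-102 vs TE-103 lengths (assumed)
    f = te_resonance(1, 0, p, a, b, d)
    print(f"TE-10{p}, length {d * 100:4.1f} cm : {f / 1e9:.2f} GHz")
```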

  13. Sources of suspended-sediment loads in the lower Nueces River watershed, downstream from Lake Corpus Christi to the Nueces Estuary, south Texas, 1958–2010

    USGS Publications Warehouse

    Ockerman, Darwin J.; Heitmuller, Franklin T.; Wehmeyer, Loren L.

    2013-01-01

    During 2010, additional suspended-sediment data were collected during selected runoff events to provide new data for model testing and to help better understand the sources of suspended-sediment loads. The model was updated and used to estimate and compare sediment yields from each of 64 subwatersheds comprising the lower Nueces River watershed study area for three selected runoff events: November 20-21, 2009, September 7-8, 2010, and September 20-21, 2010. These three runoff events were characterized by heavy rainfall centered near the study area and during which minimal streamflow and suspended-sediment load entered the lower Nueces River upstream from Wesley E. Seale Dam. During all three runoff events, model simulations showed that the greatest sediment yields originated from the subwatersheds, which were largely cropland. In particular, the Bayou Creek subwatersheds were major contributors of suspended-sediment load to the lower Nueces River during the selected runoff events. During the November 2009 runoff event, high suspended-sediment concentrations in the Nueces River water withdrawn for the City of Corpus Christi public-water supply caused problems during the water-treatment process, resulting in failure to meet State water-treatment standards for turbidity in drinking water. Model simulations of the November 2009 runoff event showed that the Bayou Creek subwatersheds were the primary source of suspended-sediment loads during that runoff event.

  14. Simulations and Characteristics of Large Solar Events Propagating Throughout the Heliosphere and Beyond (Invited)

    NASA Astrophysics Data System (ADS)

    Intriligator, D. S.; Sun, W.; Detman, T. R.; Dryer, Ph D., M.; Intriligator, J.; Deehr, C. S.; Webber, W. R.; Gloeckler, G.; Miller, W. D.

    2015-12-01

    Large solar events can have severe adverse global impacts at Earth. These solar events can also propagate throughout the heliosphere and into the interstellar medium. We focus on the July 2012 and Halloween 2003 solar events. We simulate these events starting from the vicinity of the Sun at 2.5 Rs. We compare our three-dimensional (3D) time-dependent simulations to available spacecraft (s/c) observations at 1 AU and beyond. Based on the comparisons of the predictions from our simulations with in situ measurements, we find that the effects of these large solar events can be observed in the outer heliosphere, the heliosheath, and even in the interstellar medium. We use two simulation models. The HAFSS (HAF Source Surface) model is a kinematic model. HHMS-PI (Hybrid Heliospheric Modeling System with Pickup protons) is a numerical magnetohydrodynamic solar wind (SW) simulation model. Both HHMS-PI and HAFSS are ideally suited for these analyses since, starting at 2.5 Rs from the Sun, they model the slowly evolving background SW and the impulsive, time-dependent events associated with solar activity. Our models naturally reproduce dynamic 3D spatially asymmetric effects observed throughout the heliosphere. Pre-existing SW background conditions have a strong influence on the propagation of shock waves from solar events. Time-dependence is a crucial aspect of interpreting s/c data. We show comparisons of our simulation results with STEREO A, ACE, Ulysses, and Voyager s/c observations.

  15. : A Scalable and Transparent System for Simulating MPI Programs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perumalla, Kalyan S

    2010-01-01

    The system described here is a scalable, transparent system for experimenting with the execution of parallel programs on simulated computing platforms. The level of simulated detail can be varied for application behavior as well as for machine characteristics. Unique features of the system are repeatability of execution, scalability to millions of simulated (virtual) MPI ranks, scalability to hundreds of thousands of host (real) MPI ranks, portability of the system to a variety of host supercomputing platforms, and the ability to experiment with scientific applications whose source code is available. The set of source-code interfaces supported by the system is being expanded to support a wider set of applications, and MPI-based scientific computing benchmarks are being ported. In proof-of-concept experiments, the system has been successfully exercised to spawn and sustain very large-scale executions of an MPI test program given in source-code form. Low slowdowns are observed, due to its use of a purely discrete event style of execution and to the scalability and efficiency of the underlying parallel discrete event simulation engine, sik. In the largest runs, the system has been executed on up to 216,000 cores of a Cray XT5 supercomputer, successfully simulating over 27 million virtual MPI ranks, each virtual rank containing its own thread context, and all ranks fully synchronized by virtual time.

  16. Single sources in the low-frequency gravitational wave sky: properties and time to detection by pulsar timing arrays

    NASA Astrophysics Data System (ADS)

    Kelley, Luke Zoltan; Blecha, Laura; Hernquist, Lars; Sesana, Alberto; Taylor, Stephen R.

    2018-06-01

    We calculate the properties, occurrence rates and detection prospects of individually resolvable `single sources' in the low-frequency gravitational wave (GW) spectrum. Our simulations use the population of galaxies and massive black hole binaries from the Illustris cosmological hydrodynamic simulations, coupled to comprehensive semi-analytic models of the binary merger process. Using mock pulsar timing arrays (PTA) with, for the first time, varying red-noise models, we calculate plausible detection prospects for GW single sources and the stochastic GW background (GWB). Contrary to previous results, we find that single sources are at least as detectable as the GW background. Using mock PTA, we find that these `foreground' sources (also `deterministic'/`continuous') are likely to be detected with ˜20 yr total observing baselines. Detection prospects, and indeed the overall properties of single sources, are only moderately sensitive to binary evolution parameters - namely eccentricity and environmental coupling, which can lead to differences of ˜5 yr in times to detection. Red noise has a stronger effect, roughly doubling the time to detection of the foreground between a white-noise only model (˜10-15 yr) and severe red noise (˜20-30 yr). The effect of red noise on the GWB is even stronger, suggesting that single source detections may be more robust. We find that typical signal-to-noise ratios for the foreground peak near f = 0.1 yr-1, and are much less sensitive to the continued addition of new pulsars to PTA.

  17. Reliability Assessment of GaN Power Switches

    DTIC Science & Technology

    2015-04-17

    Possibilities for single event burnout testing were examined as well. Device simulation under the conditions of some of the testing was performed on...reverse-bias (HTRB) and single event effects (SEE) tests. 8. Refine test structures, circuits, and procedures, and, if possible, develop

  18. Tidal Disruption Events Across Cosmic Time

    NASA Astrophysics Data System (ADS)

    Fialkov, Anastasia; Loeb, Abraham

    2017-01-01

    Tidal disruption events (TDEs) of stars by single or binary super-massive black holes illuminate the environment around quiescent black holes in galactic nuclei, allowing us to probe dormant black holes. We predict the TDE rates expected to be detected by next-generation X-ray surveys. We include events sourced by both single and binary super-massive black holes, assuming that 10% of TDEs lead to the formation of relativistic jets and are therefore observable to higher redshifts. Assigning the Eddington luminosity to each event, we show that if the occupation fraction of intermediate-mass black holes is high, more than 90% of the brightest TDEs might be associated with merging black holes, which are potential sources for eLISA. Next-generation telescopes with improved sensitivities should probe dim local TDEs as well as bright events at high redshifts. We show that an instrument which is 50 times more sensitive than the Swift Burst Alert Telescope (BAT) is expected to trigger ~10 times more events than BAT. The majority of these events originate at low redshifts (z<0.5) if the occupation fraction of IMBHs is high, and at high redshift (z>2) if it is low.
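
    The Eddington luminosity assigned to each event is the standard expression (a general formula, not specific to this paper):

      L_{\mathrm{Edd}} = \frac{4\pi G M m_p c}{\sigma_T} \approx 1.26\times10^{38}\left(\frac{M}{M_\odot}\right)\ \mathrm{erg\,s^{-1}},

    so, for example, a 10^6 M_\odot black hole radiating at the Eddington limit emits roughly 1.3\times10^{44} erg s^{-1}.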

  19. Optimization of Single-Sided Charge-Sharing Strip Detectors

    NASA Technical Reports Server (NTRS)

    Hamel, L.A.; Benoit, M.; Donmez, B.; Macri, J. R.; McConnell, M. L.; Ryan, J. M.; Narita, T.

    2006-01-01

    Simulations of the charge-sharing properties of single-sided CZT strip detectors with small anode pads are presented. The effects of initial event size, carrier repulsion, diffusion, drift, trapping and detrapping are considered. These simulations indicate that such a detector with a 150 μm pitch will provide good charge sharing between neighboring pads. This is supported by a comparison of simulations and measurements for a similar detector with a coarser pitch of 225 μm that could not provide sufficient sharing. The performance of such a detector used as a gamma-ray imager is discussed.

  20. A Rotating Scatter Mask for Inexpensive Gamma-Ray Imaging in Orphan Source Search: Simulation Results

    NASA Astrophysics Data System (ADS)

    FitzGerald, Jack G. M.

    2015-02-01

    The Rotating Scatter Mask (RSM) system is an inexpensive retrofit that provides imaging capabilities to scintillating detectors. Unlike traditional collimator systems that primarily absorb photons in order to form an image, this system primarily scatters the photons. Over a single rotation, there is a unique, smooth response curve for each defined source position. Testing was conducted using MCNPX simulations. Image reconstruction was performed using a chi-squared reconstruction technique. A simulated 100 μCi Cs-137 source at 10 meters was detected after a single, 50-second rotation when a uniform terrestrial background was present. A Cs-137 extended source was also tested. The RSM field-of-view is 360 degrees azimuthally as well as 54 degrees above and 54 degrees below the horizontal plane. Since the RSM is built from polyethylene, the overall cost and weight of the system is low. The system was designed to search for lost or stolen radioactive material, also known as the orphan source problem.
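
    A minimal sketch of chi-squared reconstruction against a library of simulated rotation-response curves, assuming (our assumption, not necessarily the paper's) that Poisson counting errors dominate; the best-fit source direction is simply the library entry with the smallest chi-squared:

      import numpy as np

      def chi2_reconstruct(measured, response_library):
          """measured: counts per rotation-angle bin, shape (n_angles,).
          response_library: dict mapping source direction -> expected counts
          per angle bin.  Returns the best direction and the chi-squared map."""
          chi2 = {}
          for direction, expected in response_library.items():
              # Pearson chi-squared with Poisson variance ~ expected counts
              chi2[direction] = np.sum((measured - expected) ** 2 / np.maximum(expected, 1.0))
          best = min(chi2, key=chi2.get)
          return best, chi2

      # toy example with made-up response curves for three candidate directions
      angles = np.linspace(0, 2 * np.pi, 72, endpoint=False)
      library = {deg: 50 + 30 * np.cos(angles - np.radians(deg)) for deg in (0, 120, 240)}
      measured = np.random.default_rng(1).poisson(library[120])
      best, chi2 = chi2_reconstruct(measured, library)
      print("best-fit direction:", best)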

  1. Discrete-Event Simulation Unmasks the Quantum Cheshire Cat

    NASA Astrophysics Data System (ADS)

    Michielsen, Kristel; Lippert, Thomas; Raedt, Hans De

    2017-05-01

    It is shown that discrete-event simulation accurately reproduces the experimental data of a single-neutron interferometry experiment [T. Denkmayr et al., Nat. Commun. 5, 4492 (2014)] and provides a logically consistent, paradox-free, cause-and-effect explanation of the quantum Cheshire cat effect without invoking the notion that the neutron and its magnetic moment separate. Describing the experimental neutron data using weak-measurement theory is shown to be useless for unravelling the quantum Cheshire cat effect.

  2. Hierarchical CAD Tools for Radiation Hardened Mixed Signal Electronic Circuits

    DTIC Science & Technology

    2005-01-28

    Excerpt from the report's list of figures: Figure 3: Schematic of Analog and Digital Components; Figure 4: Dose Rate Syntax; Figure 5: Single Event Effects (SEE) Syntax; Figure 6 ... Harmony-AMS simulation of a Digital Phase Locked Loop; Figure 10: SEE results from DPLL Simulation; Figure 11: Published results used for validation ... analog and digital circuitry. Combining the analog and digital elements onto a single chip has several advantages, but also creates unique challenges

  3. Whole-Body Single-Bed Time-of-Flight RPC-PET: Simulation of Axial and Planar Sensitivities With NEMA and Anthropomorphic Phantoms

    NASA Astrophysics Data System (ADS)

    Crespo, Paulo; Reis, João; Couceiro, Miguel; Blanco, Alberto; Ferreira, Nuno C.; Marques, Rui Ferreira; Martins, Paulo; Fonte, Paulo

    2012-06-01

    A single-bed, whole-body positron emission tomograph based on resistive plate chambers has been proposed (RPC-PET). An RPC-PET system with an axial field-of-view (AFOV) of 2.4 m has been shown in simulation to have higher system sensitivity using the NEMA NU2-1994 protocol than commercial PET scanners. However, that protocol does not correlate directly with lesion detectability. The latter is better correlated with the planar (slice) sensitivity, obtained with a NEMA NU2-2001 line-source phantom. After validation with published data for the GE Advance, Siemens TruePoint and TrueV, we study by simulation their axial sensitivity profiles, comparing results with RPC-PET. Planar sensitivities indicate that RPC-PET is expected to outperform 16-cm (22-cm) AFOV scanners by a factor 5.8 (3.0) for 70-cm-long scans. For 1.5-m scans (head to mid-legs), the sensitivity gain increases to 11.7 (6.7). Yet, PET systems with large AFOV provide larger coverage but also larger attenuation in the object. We studied these competing effects with both spherical- and line-sources immersed in a 27-cm-diameter water cylinder. For 1.5-m-long scans, the planar sensitivity drops one order of magnitude in all scanners, with RPC-PET outperforming 16-cm (22-cm) AFOV scanners by a factor 9.2 (5.3) without considering the TOF benefit. A gain in the effective sensitivity is expected with TOF iterative reconstruction. Finally, object scatter in an anthropomorphic phantom is similar for RPC-PET and modern, scintillator-based scanners, although RPC-PET benefits further if its TOF information is utilized to exclude scatter events occurring outside the anthropomorphic phantom.

  4. Implementation of warm-cloud processes in a source-oriented WRF/Chem model to study the effect of aerosol mixing state on fog formation in the Central Valley of California

    NASA Astrophysics Data System (ADS)

    Lee, H.-H.; Chen, S.-H.; Kleeman, M. J.; Zhang, H.; DeNero, S. P.; Joe, D. K.

    2015-11-01

    The source-oriented Weather Research and Forecasting chemistry model (SOWC) was modified to include warm cloud processes and applied to investigate how aerosol mixing states influence fog formation and optical properties in the atmosphere. SOWC tracks a 6-dimensional chemical variable (X, Z, Y, Size Bins, Source Types, Species) through an explicit simulation of atmospheric chemistry and physics. A source-oriented cloud condensation nuclei module was implemented into the SOWC model to simulate warm clouds using the modified two-moment Purdue Lin microphysics scheme. The Goddard shortwave and longwave radiation schemes were modified to interact with source-oriented aerosols and cloud droplets so that aerosol direct and indirect effects could be studied. The enhanced SOWC model was applied to study a fog event that occurred on 17 January 2011, in the Central Valley of California. Tule fog occurred because an atmospheric river effectively advected high moisture into the Central Valley and nighttime drainage flow brought cold air from mountains into the valley. The SOWC model produced reasonable liquid water path, spatial distribution and duration of fog events. The inclusion of aerosol-radiation interaction only slightly modified simulation results since cloud optical thickness dominated the radiation budget in fog events. The source-oriented mixture representation of particles reduced cloud droplet number relative to the internal mixture approach that artificially coats hydrophobic particles with hygroscopic components. The fraction of aerosols activating into CCN at a supersaturation of 0.5% in the Central Valley decreased from 94% in the internal mixture model to 80% in the source-oriented model. This increased surface energy flux by 3-5 W m⁻² and surface temperature by as much as 0.25 K in the daytime.
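
    As an illustration of the bookkeeping implied by the 6-dimensional chemical variable, a toy array with the same axis ordering (the dimension sizes below are arbitrary, not those used in SOWC):

      import numpy as np

      # axes: (X, Z, Y, size bin, source type, species) -- toy sizes only
      nx, nz, ny, nbins, nsources, nspecies = 10, 8, 10, 8, 5, 12
      chem = np.zeros((nx, nz, ny, nbins, nsources, nspecies))

      # e.g. the 3-D field of species index 3 contributed by source type 2,
      # summed over all size bins:
      field = chem[:, :, :, :, 2, 3].sum(axis=3)
      print(field.shape)   # (10, 8, 10)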

  5. Numerical modeling of coronal mass ejections based on various pre-event model atmospheres

    NASA Technical Reports Server (NTRS)

    Suess, S. T.; Wang, A. H.; Wu, S. T.; Poletto, G.

    1994-01-01

    We examine how the initial state (pre-event corona) affects the numerical MHD simulation for a coronal mass ejection (CME). Earlier simulations based on a pre-event corona with a homogeneous density and temperature distribution at the lower boundary (i.e., solar surface) have been used to analyze the role of streamer properties in determining the characteristics of loop-like transients. The present paper extends these studies to show how a broader class of global coronal properties leads not only to different types of CME's, but also modifies the adjacent quiet corona and/or coronal holes. We consider four pre-event coronal cases: (1) Constant boundary conditions and a polytropic gas with gamma = 1.05; (2) Non-constant (latitude dependent) boundary conditions and a polytropic gas with gamma = 1.05; (3) Constant boundary conditions with a volumetric energy source and gamma = 1.67; (4) Non-constant (latitude dependent) boundary conditions with a volumetric energy source and gamma = 1.67. In all models, the pre-event magnetic fields separate the corona into closed field regions (streamers) and open field regions. The CME's initiation is simulated by introducing at the base of the corona, within the streamer region, a standard pressure pulse and velocity change. Boundary values are determined using MHD characteristic theory. The simulations show how different CME's, including loop-like transients, clouds, and bright rays, might occur. There are significant new features in comparison to published results. We conclude that the pre-event corona is a crucial factor in dictating CME's properties.

  6. Numerical Modeling of Coronal Mass Ejections Based on Various Pre-event Model Atmospheres

    NASA Technical Reports Server (NTRS)

    Wang, A. H.; Wu, S. T.; Suess, S. T.; Poletto, G.

    1995-01-01

    We examine how the initial state (pre-event corona) affects the numerical MHD simulation for a coronal mass ejection (CME). Earlier simulations based on a pre-event corona with a homogeneous density and temperature distribution, at the lower boundary (i.e., solar surface) have been used to analyze the role of streamer properties in determining the characteristics of loop-like transients. The present paper extends these studies to show how a broader class of global coronal properties leads not only to different types of CME's, but also modifies the adjacent quiet corona and/or coronal holes. We consider four pre-event coronal cases: (1) constant boundary conditions and a polytropic gas with gamma = 1.05; (2) non-constant (latitude dependent) boundary conditions and a polytropic gas with gamma = 1.05; (3) constant boundary conditions with a volumetric energy source and gamma = 1.67; (4) non-constant (latitude dependent) boundary conditions with a volumetric energy source and gamma = 1.67. In all models, the pre-event magnetic fields separate the corona into closed field regions (streamers) and open field regions. The CME's initiation is simulated by introducing at the base of the corona, within the streamer region, a standard pressure pulse and velocity change. Boundary values are determined using magnetohydrodynamic (MHD) characteristic theory. The simulations show how different CME's, including loop-like transients, clouds and bright rays, might occur. There are significant new features in comparison to published results. We conclude that the pre-event corona is a crucial factor in dictating CME's properties.

  7. STORM WATER MANAGEMENT MODEL USER'S MANUAL VERSION 5.0

    EPA Science Inventory

    The EPA Storm Water Management Model (SWMM) is a dynamic rainfall-runoff simulation model used for single event or long-term (continuous) simulation of runoff quantity and quality from primarily urban areas. SWMM was first developed in 1971 and has undergone several major upgrade...

  8. Storm Water Management Model Reference Manual Volume I, Hydrology

    EPA Science Inventory

    SWMM is a dynamic rainfall-runoff simulation model used for single event or long-term (continuous) simulation of runoff quantity and quality from primarily urban areas. The runoff component of SWMM operates on a collection of subcatchment areas that receive precipitation and gene...

  9. Storm Water Management Model Reference Manual Volume II – Hydraulics

    EPA Science Inventory

    SWMM is a dynamic rainfall-runoff simulation model used for single event or long-term (continuous) simulation of runoff quantity and quality from primarily urban areas. The runoff component of SWMM operates on a collection of subcatchment areas that receive precipitation and gene...

  10. Experimental and simulation studies of neutron-induced single-event burnout in SiC power diodes

    NASA Astrophysics Data System (ADS)

    Shoji, Tomoyuki; Nishida, Shuichi; Hamada, Kimimori; Tadano, Hiroshi

    2014-01-01

    Neutron-induced single-event burnouts (SEBs) of silicon carbide (SiC) power diodes have been investigated by white neutron irradiation experiments and transient device simulations. It was confirmed that a rapid increase in lattice temperature leads to formation of crown-shaped aluminum and cracks inside the device owing to expansion stress when the maximum lattice temperature reaches the sublimation temperature. SEB device simulation indicated that the peak lattice temperature is located in the vicinity of the n-/n+ interface and anode contact, and that the positions correspond to a hammock-like electric field distribution caused by the space charge effect. Moreover, the locations of the simulated peak lattice temperature agree closely with the positions of the observed destruction traces. Furthermore, it was theoretically demonstrated that the period of temperature increase of a SiC power device is two orders of magnitude less than that of a Si power device, using a thermal diffusion equation.
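
    The thermal-diffusion argument rests on the standard scaling (a generic estimate, not the authors' exact equation): heat spreads over a length L in a time of order

      t \sim \frac{L^2}{D}, \qquad D = \frac{\kappa}{\rho\, c_p},

    so the characteristic heating (or temperature-rise) time shortens as the critically heated region gets smaller or as the thermal diffusivity D of the material gets larger.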

  11. Experimental and numerical simulation of a rotor/stator interaction event localized on a single blade within an industrial high-pressure compressor

    NASA Astrophysics Data System (ADS)

    Batailly, Alain; Agrapart, Quentin; Millecamps, Antoine; Brunel, Jean-François

    2016-08-01

    This contribution compares an experimental rotor/stator interaction case initiated by structural contacts with numerical predictions made with an in-house numerical strategy. Contrary to previous studies carried out within the low-pressure compressor of an aircraft engine, this interaction is found to be non-divergent: high amplitudes of vibration are experimentally observed and numerically predicted over a short period of time. An in-depth analysis of experimental data first allows for a precise characterization of the interaction as a rubbing event involving the first torsional mode of a single blade. Numerical results are in good agreement with experimental observations: the critical angular speed, the wear patterns on the casing as well as the blade dynamics are accurately predicted. Throughout the article, the in-house numerical strategy is also compared with another numerical strategy that may be found in the literature for the simulation of rubbing events: key differences are underlined with respect to the prediction of non-linear interaction phenomena.

  12. Localizing gravitational wave sources with single-baseline atom interferometers

    NASA Astrophysics Data System (ADS)

    Graham, Peter W.; Jung, Sunghoon

    2018-02-01

    Localizing sources on the sky is crucial for realizing the full potential of gravitational waves for astronomy, astrophysics, and cosmology. We show that the midfrequency band, roughly 0.03 to 10 Hz, has significant potential for angular localization. The angular location is measured through the changing Doppler shift as the detector orbits the Sun. This band maximizes the effect since these are the highest frequencies in which sources live for several months. Atom interferometer detectors can observe in the midfrequency band, and even with just a single baseline they can exploit this effect for sensitive angular localization. The single-baseline orbits around the Earth and the Sun, causing it to reorient and change position significantly during the lifetime of the source, and making it similar to having multiple baselines/detectors. For example, atomic detectors could predict the location of upcoming black hole or neutron star merger events with sufficient accuracy to allow optical and other electromagnetic telescopes to observe these events simultaneously. Thus, midband atomic detectors are complementary to other gravitational wave detectors and will help complete the observation of a broad range of the gravitational spectrum.
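
    The Doppler localization mechanism can be summarized with the standard first-order expression (general kinematics, not a formula quoted from the paper): a detector moving with velocity \vec v(t) receives a source of intrinsic frequency f at

      f_{\mathrm{obs}}(t) \simeq f\left[1 + \frac{\vec v(t)\cdot\hat n}{c}\right],

    so the Earth's orbital speed of roughly 30 km s^{-1} modulates the observed phase at the level of v/c ≈ 10^{-4} over the year, and the dependence of that modulation on the source direction \hat n is what allows sources lasting several months to be located on the sky.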

  13. Low-frequency source parameters of twelve large earthquakes. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Harabaglia, Paolo

    1993-01-01

    A global survey of the low-frequency (1-21 mHz) source characteristics of large events is presented. We are particularly interested in events unusually enriched in low frequencies and in events with a short-term precursor. We model the source time function of 12 large earthquakes using teleseismic data at low frequency. For each event we retrieve the source amplitude spectrum in the frequency range between 1 and 21 mHz with the Silver and Jordan method and the phase-shift spectrum in the frequency range between 1 and 11 mHz with the Riedesel and Jordan method. We then model the source time function by fitting the two spectra. Two of these events, the 1980 Irpinia, Italy, and the 1983 Akita-Oki, Japan, earthquakes, are shallow-depth complex events that took place on multiple faults. In both cases the source time function has a length of about 100 seconds. By comparison, Westaway and Jackson find 45 seconds for the Irpinia event and Houston and Kanamori about 50 seconds for the Akita-Oki earthquake. The three deep events and four of the seven intermediate-depth events are fast-rupturing earthquakes. A single pulse is sufficient to model the source spectra in the frequency range of our interest. Two other intermediate-depth events have slower rupturing processes, characterized by a continuous energy release lasting for about 40 seconds. The last event is the intermediate-depth 1983 Peru-Ecuador earthquake. It was first recognized as a precursive event by Jordan. We model it with a smooth rupturing process starting about 2 minutes before the high-frequency origin time, superimposed on an impulsive source.

  14. How to resolve microsecond current fluctuations in single ion channels: The power of beta distributions

    PubMed Central

    Schroeder, Indra

    2015-01-01

    A main ingredient for the understanding of structure/function correlates of ion channels is the quantitative description of single-channel gating and conductance. However, a wealth of information provided by fast current fluctuations beyond the temporal resolution of the recording system is often ignored, even though it is close to the time window accessible to molecular dynamics simulations. This kind of current fluctuation provides a special technical challenge, because individual opening/closing or blocking/unblocking events cannot be resolved, and the resulting averaging over undetected events decreases the apparent single-channel current. Here, I briefly summarize the history of fast-current fluctuation analysis and focus on the so-called “beta distributions.” This tool exploits characteristics of current fluctuation-induced excess noise on the current amplitude histograms to reconstruct the true single-channel current and kinetic parameters. A guideline for the analysis and recent applications demonstrate that a construction of theoretical beta distributions by Markov model simulations offers maximum flexibility as compared to analytical solutions. PMID:26368656
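
    A minimal sketch of the underlying idea that fast two-state gating, low-pass filtered by the recording system, produces a broadened ("excess noise") amplitude histogram whose mean underestimates the true single-channel current; the rate constants, filter time constant, and current level below are arbitrary illustrative values, and a real analysis would fit beta distributions (or Markov-model simulations of them) to such histograms:

      import numpy as np

      rng = np.random.default_rng(0)
      dt = 1e-6                      # sampling interval: 1 microsecond
      n = 200_000
      k_open, k_close = 5e4, 5e4     # gating rates (1/s); dwell times ~20 microseconds

      # fast two-state gating (1 = open, 0 = closed) as a discrete-time Markov
      # chain with per-step transition probabilities k*dt
      state = np.empty(n, dtype=np.int8)
      s = 1
      for i in range(n):
          state[i] = s
          if s == 1 and rng.random() < k_close * dt:
              s = 0
          elif s == 0 and rng.random() < k_open * dt:
              s = 1

      i_true = 1.0                   # true single-channel current (arbitrary units)
      current = state * i_true

      # first-order low-pass filter standing in for the limited recording bandwidth
      tau_filter = 100e-6
      alpha = dt / tau_filter
      filtered = np.empty(n)
      acc = current[0]
      for i, x in enumerate(current):
          acc += alpha * (x - acc)
          filtered[i] = acc

      # the filtered amplitude histogram is broadened, and its mean sits near
      # p_open * i_true rather than at the true open-channel level
      p_open = k_open / (k_open + k_close)
      hist, edges = np.histogram(filtered, bins=50, range=(0.0, 1.0), density=True)
      print("apparent mean current:", filtered.mean(), " p_open * i_true:", p_open * i_true)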

  15. [Moisture sources of Guangzhou during the freezing disaster period in 2008 indicated by the stable isotopes of precipitation].

    PubMed

    Liao, Cong-Yun; Zhong, Wei; Ma, Qiao-Hong; Xue, Ji-Bin; Yin, Huan-Ling; Long, Kun

    2012-04-01

    From April 2007 to June 2008, stable isotope samples of all single precipitation events were collected at intervals of 5-30 min. We chose five single precipitation events in Guangzhou city that occurred during the freezing disaster event (from Jan. 10 to Feb. 2, 2008) in South China, aiming to investigate the variation of stable isotopes under these extreme climatic conditions and its controlling factors. The results show that the values of δD and δ¹⁸O in precipitation dropped significantly during this freezing disaster. The analyses of the d-excess and LMWL indicate abnormal oceanic moisture sources. Air mass trajectory tracking shows that the moisture sources were characterized by a mixture of inland and marine water vapor during the freezing disaster peak period, while the long-distance oceanic moisture source was the dominant one. Changes of stable isotopes in single rain events during the freezing disaster show three different trends, i.e., an upward trend, a V-shaped trend and a W-shaped trend, which may result from re-evaporation, re-condensation and the related precipitation types in association with the different vapor sources and precipitation conditions.

  16. The Fogo's Collapse-triggered Megatsunami: Evidence-calibrated Numerical Simulations of Tsunamigenic Potential and Coastal Impact

    NASA Astrophysics Data System (ADS)

    Omira, Rachid; Ramalho, Ricardo S.; Quartau, Rui; Ramalho, Inês; Madeira, José; Baptista, Maria Ana

    2017-04-01

    Volcanic ocean islands are very prominent and dynamic features involving several constructive and destructive phases during their life-cycles. Large-scale gravitational flank collapses are one of the most destructive processes and can present a major source of hazard, since it has been shown that these events are capable of triggering megatsunamis with significant coastal impact. The Fogo volcanic island, Cape Verde, presents evidence for giant edifice mass-wasting, attested by both onshore and offshore observations. A recent study by Ramalho et al. (2015) revealed the presence of tsunamigenic deposits that attest to the generation of a megatsunami with devastating impact on the nearby Santiago Island, following Fogo's catastrophic collapse. Evidence from northern Santiago implies local minimum run-ups of 270 m, providing a unique physical framework to test collapse-triggered tsunami numerical simulations. In this study, we investigate the tsunamigenic potential associated with Fogo's flank collapse, and its impact on the islands of the Cape Verde archipelago, using field evidence-calibrated numerical simulations. We first reconstruct the pre-event island morphology, and then employ a multilayer numerical model to simulate the flank failure flow towards and under the sea, the ensuing tsunami generation, propagation and coastal impact. We use a digital elevation model that considers the coastline configuration and the sea level at the time of the event. Preliminary numerical modeling results suggest that collapsed volumes of 90-150 km³, in one single event, generate numerical solutions that are compatible with field evidence. Our simulations suggest that Fogo's collapse triggered a megatsunami that reached the coast of Santiago in 8 min with wave heights in excess of 250 m. The tsunami waves propagated with lower amplitudes towards the Cape Verde Islands located northward of Fogo. This study will contribute to a more realistic assessment of the scale of risks associated with these extremely rare but very high impact natural disasters. This work is supported by the EU project ASTARTE -Grant 603839, 7th FP (ENV.2013, 6.4-3), the EU project TSUMAPS-NEAM -Agreement Number: ECHO/SUB/2015/718568/PREV26, and the IF/01641/2015 MEGAWAVE - FCT project.

  17. A design of calibration single star simulator with adjustable magnitude and optical spectrum output system

    NASA Astrophysics Data System (ADS)

    Hu, Guansheng; Zhang, Tao; Zhang, Xuan; Shi, Gentai; Bai, Haojie

    2018-03-01

    In order to achieve multi-color-temperature and multi-magnitude output, with both magnitude and color temperature adjustable in real time, a new type of calibration single star simulator with adjustable magnitude and optical spectrum output was designed in this article. A xenon lamp and a halogen tungsten lamp were used as light sources. The spectral band and color temperature of the simulated star were controlled by combining multiple narrow-band beams of varying intensity. When light with different spectral characteristics and color temperatures enters the magnitude regulator, the light energy attenuation is controlled by adjusting the luminosity. This method satisfies the requirements of a calibration single star simulator with adjustable magnitude and optical spectrum output, achieving the goal of adjustable magnitude and spectrum.

  18. Kernel PLS Estimation of Single-trial Event-related Potentials

    NASA Technical Reports Server (NTRS)

    Rosipal, Roman; Trejo, Leonard J.

    2004-01-01

    Nonlinear kernel partial least squares (KPLS) regression is a novel smoothing approach to nonparametric regression curve fitting. We have developed a KPLS approach to the estimation of single-trial event-related potentials (ERPs). For improved accuracy of estimation, we also developed a local KPLS method for situations in which there exists prior knowledge about the approximate latency of individual ERP components. To assess the utility of the KPLS approach, we compared non-local KPLS and local KPLS smoothing with other nonparametric signal processing and smoothing methods. In particular, we examined wavelet denoising, smoothing splines, and localized smoothing splines. We applied these methods to the estimation of simulated mixtures of human ERPs and ongoing electroencephalogram (EEG) activity using a dipole simulator (BESA). In this scenario we considered ongoing EEG to represent spatially and temporally correlated noise added to the ERPs. This simulation provided a reasonable but simplified model of real-world ERP measurements. For estimation of the simulated single-trial ERPs, local KPLS provided a level of accuracy that was comparable with or better than the other methods. We also applied the local KPLS method to the estimation of human ERPs recorded in an experiment on cognitive fatigue. For these data, the local KPLS method provided a clear improvement in visualization of single-trial ERPs as well as their averages. The local KPLS method may serve as a new alternative to the estimation of single-trial ERPs and improvement of ERP averages.

  19. Patterns of ancestry and genetic diversity in reintroduced populations of the slimy sculpin: Implications for conservation

    USGS Publications Warehouse

    Huff, David D.; Miller, Loren M.; Vondracek, Bruce C.

    2010-01-01

    Reintroductions are a common approach for preserving intraspecific biodiversity in fragmented landscapes. However, they may exacerbate the reduction in genetic diversity initially caused by population fragmentation because the effective population size of reintroduced populations is often smaller and reintroduced populations also tend to be more geographically isolated than native populations. Mixing genetically divergent sources for reintroduction purposes is a practice intended to increase genetic diversity. We documented the outcome of reintroductions from three mixed sources on the ancestral composition and genetic variation of a North American fish, the slimy sculpin (Cottus cognatus). We used microsatellite markers to evaluate allelic richness and heterozygosity in the reintroduced populations relative to computer simulated expectations. Sculpins in reintroduced populations exhibited higher levels of heterozygosity and allelic richness than any single source, but only slightly higher than the single most genetically diverse source population. Simulations intended to mimic an ideal scenario for maximizing genetic variation in the reintroduced populations also predicted increases, but they were only moderately greater than the most variable source population. We found that a single source contributed more than the other two sources at most reintroduction sites. We urge caution when choosing whether to mix source populations in reintroduction programs. Genetic characteristics of candidate source populations should be evaluated prior to reintroduction if feasible. When combined with knowledge of the degree of genetic distinction among sources, simulations may allow the genetic diversity benefits of mixing populations to be weighed against the risks of outbreeding depression in reintroduced and nearby populations.

  20. Patterns of ancestry and genetic diversity in reintroduced populations of the slimy sculpin: Implications for conservation

    USGS Publications Warehouse

    Huff, D.D.; Miller, L.M.; Vondracek, B.

    2010-01-01

    Reintroductions are a common approach for preserving intraspecific biodiversity in fragmented landscapes. However, they may exacerbate the reduction in genetic diversity initially caused by population fragmentation because the effective population size of reintroduced populations is often smaller and reintroduced populations also tend to be more geographically isolated than native populations. Mixing genetically divergent sources for reintroduction purposes is a practice intended to increase genetic diversity. We documented the outcome of reintroductions from three mixed sources on the ancestral composition and genetic variation of a North American fish, the slimy sculpin (Cottus cognatus). We used microsatellite markers to evaluate allelic richness and heterozygosity in the reintroduced populations relative to computer simulated expectations. Sculpins in reintroduced populations exhibited higher levels of heterozygosity and allelic richness than any single source, but only slightly higher than the single most genetically diverse source population. Simulations intended to mimic an ideal scenario for maximizing genetic variation in the reintroduced populations also predicted increases, but they were only moderately greater than the most variable source population. We found that a single source contributed more than the other two sources at most reintroduction sites. We urge caution when choosing whether to mix source populations in reintroduction programs. Genetic characteristics of candidate source populations should be evaluated prior to reintroduction if feasible. When combined with knowledge of the degree of genetic distinction among sources, simulations may allow the genetic diversity benefits of mixing populations to be weighed against the risks of outbreeding depression in reintroduced and nearby populations. © 2010 US Government.

  1. Assessment of Critical Events Corridors through Multivariate Cascading Outages Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Makarov, Yuri V.; Samaan, Nader A.; Diao, Ruisheng

    2011-10-17

    Massive blackouts of electrical power systems in North America over the past decade have focused increasing attention upon ways to identify and simulate network events that may potentially lead to widespread network collapse. This paper summarizes a method to simulate power-system vulnerability to cascading failures from a supplied set of initiating events, synonymously termed Extreme Events. The implemented simulation method is currently confined to simulating steady-state power-system response to a set of extreme events. The outlined method of simulation is meant to augment and provide new insight into bulk power transmission network planning that at present remains mainly confined to maintaining power system security for single and double component outages under a number of projected future network operating conditions. Although one of the aims of this paper is to demonstrate the feasibility of simulating network vulnerability to cascading outages, a more important goal has been to determine vulnerable parts of the network that may potentially be strengthened in practice so as to mitigate system susceptibility to cascading failures. This paper proposes to demonstrate a systematic approach to analyze extreme events and identify vulnerable system elements that may be contributing to cascading outages. The hypothesis of critical events corridors is proposed to represent repeating sequential outages that can occur in the system for multiple initiating events. The new concept helps to identify system reinforcements that planners could engineer in order to 'break' the critical events sequences and therefore lessen the likelihood of cascading outages. This hypothesis has been successfully validated with a California power system model.

  2. Topological events in single molecules of E. coli DNA confined in nanochannels

    PubMed Central

    Reifenberger, Jeffrey G.; Dorfman, Kevin D.; Cao, Han

    2015-01-01

    We present experimental data concerning potential topological events such as folds, internal backfolds, and/or knots within long molecules of double-stranded DNA when they are stretched by confinement in a nanochannel. Genomic DNA from E. coli was labeled near the ‘GCTCTTC’ sequence with a fluorescently labeled dUTP analog and stained with the DNA intercalator YOYO. Individual long molecules of DNA were then linearized and imaged using methods based on the NanoChannel Array technology (Irys® System) available from BioNano Genomics. Data were collected on 189,153 molecules of length greater than 50 kilobases. A custom code was developed to search for abnormal intensity spikes in the YOYO backbone profile along the length of individual molecules. By correlating the YOYO intensity spikes with the aligned barcode pattern to the reference, we were able to correlate the bright intensity regions of YOYO with abnormal stretching in the molecule, which suggests these events were either a knot or a region of internal backfolding within the DNA. We interpret the results of our experiments involving molecules exceeding 50 kilobases in the context of existing simulation data for relatively short DNA, typically several kilobases. The frequency of these events is lower than the predictions from simulations, while the size of the events is larger than simulation predictions and often exceeds the molecular weight of the simulated molecules. We also identified DNA molecules that exhibit large, single folds as they enter the nanochannels. Overall, topological events occur at a low frequency (~7% of all molecules) and pose an easily surmountable obstacle for the practice of genome mapping in nanochannels. PMID:25991508

  3. THE STORM WATER MANAGEMENT MODEL (SWMM) AND RELATED WATERSHED TOOLS DEVELOPMENT

    EPA Science Inventory

    The Storm Water Management Model (SWMM) is a dynamic rainfall-runoff simulation model used for single event or long-term (continuous) simulation of runoff quantity and quality from primarily urban areas. It is the only publicly available model capable of performing a comprehensiv...

  4. Storm Water Management Model Reference Manual Volume III – Water Quality

    EPA Science Inventory

    SWMM is a dynamic rainfall-runoff simulation model used for single event or long-term (continuous) simulation of runoff quantity and quality from primarily urban areas. The runoff component of SWMM operates on a collection of subcatchment areas that receive precipitation and gene...

  5. Episodic inflation events at Akutan Volcano, Alaska, during 2005-2017

    NASA Astrophysics Data System (ADS)

    Ji, Kang Hyeun; Yun, Sang-Ho; Rim, Hyoungrea

    2017-08-01

    Detection of weak volcano deformation helps constrain characteristics of eruption cycles. We have developed a signal detection technique, called the Targeted Projection Operator (TPO), to monitor surface deformation with Global Positioning System (GPS) data. We have applied the TPO to GPS data collected at Akutan Volcano from June 2005 to March 2017 and detected four inflation events that occurred in 2008, 2011, 2014, and 2016 with inflation rates of about 8-22 mm/yr above the background trend at a near-source site AV13. Numerical modeling suggests that the events should be driven by closely located sources or a single source in a shallow magma chamber at a depth of about 4 km. The inflation events suggest that magma has episodically accumulated in a shallow magma chamber.
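
    For context, surface uplift above a small spherical pressure source at depth is commonly interpreted with the Mogi point-source relation (stated here as a standard reference expression; the study's own source model may differ):

      u_z(r) = \frac{(1-\nu)\,\Delta V}{\pi}\,\frac{d}{\left(r^2+d^2\right)^{3/2}},

    where \Delta V is the source volume change, d the source depth (about 4 km here), r the horizontal distance from the source axis, and \nu Poisson's ratio; the uplift peaks directly above the source and decays over a horizontal scale comparable to d.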

  6. Numerical Relativity Simulations for Black Hole Merger Astrophysics

    NASA Technical Reports Server (NTRS)

    Baker, John G.

    2010-01-01

    Massive black hole mergers are perhaps the most energetic astronomical events, establishing their importance as gravitational wave sources for LISA, and also possibly leading to observable influences on their local environments. Advances in numerical relativity over the last five years have fueled the development of a rich physical understanding of general relativity's predictions for these events. I will overview the understanding of these events emerging from numerical simulation studies. These simulations elucidate the pre-merger dynamics of the black hole binaries, the consequent gravitational waveform signatures, and the resulting state, including its kick velocity, for the final black hole produced by the merger. Scenarios are now being considered for observing each of these aspects of the merger, involving both gravitational-wave and electromagnetic astronomy.

  7. Single ICMEs and Complex Transient Structures in the Solar Wind in 2010 - 2011

    NASA Astrophysics Data System (ADS)

    Rodkin, D.; Slemzin, V.; Zhukov, A. N.; Goryaev, F.; Shugay, Y.; Veselovsky, I.

    2018-05-01

    We analyze the statistics, solar sources, and properties of interplanetary coronal mass ejections (ICMEs) in the solar wind. The total number of coronal mass ejections (CMEs) registered in the Coordinated Data Analysis Workshops catalog (CDAW) during the first eight years of Cycle 24 was 61% larger than in the same period of Cycle 23, but the number of X-ray flares registered by the Geostationary Operational Environmental Satellite (GOES) was 20% smaller because the solar activity was lower. The total number of ICMEs in the given period of Cycle 24 in the Richardson and Cane list was 29% smaller than in Cycle 23, which may be explained by a noticeable number of non-classified ICME-like events in the beginning of Cycle 24. For the period January 2010 - August 2011, we identify solar sources of the ICMEs that are included in the Richardson and Cane list. The solar sources of the ICMEs were determined from coronagraph observations of the Earth-directed CMEs, supplemented by modeling of their propagation in the heliosphere using kinematic models (a ballistic and drag-based model). A detailed analysis of the ICME solar sources in the period under study showed that in 11 cases out of 23 (48%), the observed ICME could be associated with two or more sources. For multiple-source events, the resulting solar wind disturbances can be described as complex (merged) structures that are caused by stream interactions, with properties depending on the type of the participating streams. As a reliable marker to identify interacting streams and their sources, we used the plasma ion composition because it freezes in the low corona and remains unchanged in the heliosphere. According to the ion composition signatures, we classify these cases into three types: complex ejecta originating from weak and strong CME-CME interactions, as well as merged interaction regions (MIRs) originating from CME-high-speed stream (HSS) interactions. We describe temporal profiles of the ion composition for the single-source and multi-source solar wind structures and compare them with the ICME signatures determined from the kinematic and magnetic field parameters of the solar wind. In single-source events, the ion charge state, as a rule, has a one-peak enhancement with an average duration of about one day, which is similar to the mean ICME duration of 1.12 days derived from the Richardson and Cane list. In the multi-source events, the total profile of the ion charge state consists of a sequence of enhancements that is associated with the interaction between the participating streams. On average, the total duration of the complex structures that appear as a result of the CME-CME and CME-HSS interactions as determined from their ion composition is 2.4 days, which is more than twice as long as that of the single-source events.

  8. egs_brachy: a versatile and fast Monte Carlo code for brachytherapy

    NASA Astrophysics Data System (ADS)

    Chamberland, Marc J. P.; Taylor, Randle E. P.; Rogers, D. W. O.; Thomson, Rowan M.

    2016-12-01

    egs_brachy is a versatile and fast Monte Carlo (MC) code for brachytherapy applications. It is based on the EGSnrc code system, enabling simulation of photons and electrons. Complex geometries are modelled using the EGSnrc C++ class library and egs_brachy includes a library of geometry models for many brachytherapy sources, in addition to eye plaques and applicators. Several simulation efficiency enhancing features are implemented in the code. egs_brachy is benchmarked by comparing TG-43 source parameters of three source models to previously published values. 3D dose distributions calculated with egs_brachy are also compared to ones obtained with the BrachyDose code. Well-defined simulations are used to characterize the effectiveness of many efficiency improving techniques, both as an indication of the usefulness of each technique and to find optimal strategies. Efficiencies and calculation times are characterized through single source simulations and simulations of idealized and typical treatments using various efficiency improving techniques. In general, egs_brachy shows agreement within uncertainties with previously published TG-43 source parameter values. 3D dose distributions from egs_brachy and BrachyDose agree at the sub-percent level. Efficiencies vary with radionuclide and source type, number of sources, phantom media, and voxel size. The combined effects of efficiency-improving techniques in egs_brachy lead to short calculation times: simulations approximating prostate and breast permanent implant (both with (2 mm)³ voxels) and eye plaque (with (1 mm)³ voxels) treatments take between 13 and 39 s, on a single 2.5 GHz Intel Xeon E5-2680 v3 processor core, to achieve 2% average statistical uncertainty on doses within the PTV. egs_brachy will be released as free and open source software to the research community.
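
    The TG-43 source parameters used for benchmarking enter the standard AAPM TG-43 dose-rate formalism, quoted here in its usual line-source form for reference (a standard formula, not taken from the paper itself):

      \dot D(r,\theta) = S_K\,\Lambda\,\frac{G_L(r,\theta)}{G_L(r_0,\theta_0)}\,g_L(r)\,F(r,\theta),

    with air-kerma strength S_K, dose-rate constant \Lambda, geometry function G_L, radial dose function g_L, and 2D anisotropy function F, evaluated relative to the reference point r_0 = 1 cm, \theta_0 = 90°.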

  9. egs_brachy: a versatile and fast Monte Carlo code for brachytherapy.

    PubMed

    Chamberland, Marc J P; Taylor, Randle E P; Rogers, D W O; Thomson, Rowan M

    2016-12-07

    egs_brachy is a versatile and fast Monte Carlo (MC) code for brachytherapy applications. It is based on the EGSnrc code system, enabling simulation of photons and electrons. Complex geometries are modelled using the EGSnrc C++ class library and egs_brachy includes a library of geometry models for many brachytherapy sources, in addition to eye plaques and applicators. Several simulation efficiency enhancing features are implemented in the code. egs_brachy is benchmarked by comparing TG-43 source parameters of three source models to previously published values. 3D dose distributions calculated with egs_brachy are also compared to ones obtained with the BrachyDose code. Well-defined simulations are used to characterize the effectiveness of many efficiency improving techniques, both as an indication of the usefulness of each technique and to find optimal strategies. Efficiencies and calculation times are characterized through single source simulations and simulations of idealized and typical treatments using various efficiency improving techniques. In general, egs_brachy shows agreement within uncertainties with previously published TG-43 source parameter values. 3D dose distributions from egs_brachy and BrachyDose agree at the sub-percent level. Efficiencies vary with radionuclide and source type, number of sources, phantom media, and voxel size. The combined effects of efficiency-improving techniques in egs_brachy lead to short calculation times: simulations approximating prostate and breast permanent implant (both with (2 mm)³ voxels) and eye plaque (with (1 mm)³ voxels) treatments take between 13 and 39 s, on a single 2.5 GHz Intel Xeon E5-2680 v3 processor core, to achieve 2% average statistical uncertainty on doses within the PTV. egs_brachy will be released as free and open source software to the research community.

  10. Estimating winter wheat phenological parameters: Implications for crop modeling

    USDA-ARS?s Scientific Manuscript database

    Crop parameters, such as the timing of developmental events, are critical for accurate simulation results in crop simulation models, yet uncertainty often exists in determining the parameters. Factors contributing to the uncertainty include: a) sources of variation within a plant (i.e., within diffe...

  11. Pulse shape discrimination for background rejection in germanium gamma-ray detectors

    NASA Technical Reports Server (NTRS)

    Feffer, P. T.; Smith, D. M.; Campbell, R. D.; Primbsch, J. H.; Lin, R. P.

    1989-01-01

    A pulse-shape discrimination (PSD) technique is developed to reject the beta-decay background resulting from activation of Ge gamma-ray detectors by cosmic-ray secondaries. These beta decays are a major source of background at 0.2-2 MeV energies in well shielded Ge detector systems. The technique exploits the difference between the detected current pulse shapes of single- and multiple-site energy depositions within the detector: beta decays are primarily single-site events, while photons at these energies typically Compton scatter before being photoelectrically absorbed to produce multiple-site events. Depending upon the amount of background due to sources other than beta decay, PSD can more than double the detector sensitivity.

  12. PSPs and ERPs: applying the dynamics of post-synaptic potentials to individual units in simulation of temporally extended Event-Related Potential reading data.

    PubMed

    Laszlo, Sarah; Armstrong, Blair C

    2014-05-01

    The Parallel Distributed Processing (PDP) framework is built on neural-style computation, and is thus well-suited for simulating the neural implementation of cognition. However, relatively little cognitive modeling work has concerned neural measures, instead focusing on behavior. Here, we extend a PDP model of reading-related components in the Event-Related Potential (ERP) to simulation of the N400 repetition effect. We accomplish this by incorporating the dynamics of cortical post-synaptic potentials--the source of the ERP signal--into the model. Simulations demonstrate that application of these dynamics is critical for model elicitation of repetition effects in the time and frequency domains. We conclude that by advancing a neurocomputational understanding of repetition effects, we are able to posit an interpretation of their source that is both explicitly specified and mechanistically different from the well-accepted cognitive one. Copyright © 2014 Elsevier Inc. All rights reserved.

  13. Monte Carlo simulation of depth-dose distributions in TLD-100 under 90Sr-90Y irradiation.

    PubMed

    Rodríguez-Villafuerte, M; Gamboa-deBuen, I; Brandan, M E

    1997-04-01

    In this work the depth-dose distribution in TLD-100 dosimeters under beta irradiation from a 90Sr-90Y source was investigated using the Monte Carlo method. Comparisons between the simulated data and experimental results showed that the depth-dose distribution is strongly affected by the different components of both the source and dosimeter holders due to the large number of electron scattering events.

  14. Can tokamaks PFC survive a single event of any plasma instabilities?

    NASA Astrophysics Data System (ADS)

    Hassanein, A.; Sizyuk, V.; Miloshevsky, G.; Sizyuk, T.

    2013-07-01

    Plasma instability events such as disruptions, edge-localized modes (ELMs), runaway electrons (REs), and vertical displacement events (VDEs) continue to be serious events and among the most limiting factors for a successful tokamak reactor concept. The plasma-facing components (PFCs), e.g., the wall, divertor, and limiter surfaces of a tokamak, as well as coolant structure materials, are subjected to intense particle and heat loads and must maintain a clean and stable surface environment between themselves and the core/edge plasma. Typical ITER transient event parameters are used for assessing the damage from these four different instability events. HEIGHTS simulations showed that a single event of a disruption, giant ELM, VDE, or RE can cause significant surface erosion (melting and vaporization) damage to PFCs and nearby components, and/or (for VDEs and REs) melting of structural materials and possible burnout of coolant tubes that could result in shutdown of the reactor for an extended repair time.

  15. Knowledge-based simulation for aerospace systems

    NASA Technical Reports Server (NTRS)

    Will, Ralph W.; Sliwa, Nancy E.; Harrison, F. Wallace, Jr.

    1988-01-01

    Knowledge-based techniques, which offer many features that are desirable in the simulation and development of aerospace vehicle operations, exhibit many similarities to traditional simulation packages. The eventual solution of these systems' current symbolic processing/numeric processing interface problem will lead to continuous and discrete-event simulation capabilities in a single language, such as TS-PROLOG. Qualitative, totally-symbolic simulation methods are noted to possess several intrinsic characteristics that are especially revelatory of the system being simulated, and to be capable of ensuring that all possible behaviors are considered.

  16. A Method for Simulating Sedimentation of Fish Eggs to Generate Biological Effects Data for Assessing Dredging Impacts

    DTIC Science & Technology

    2017-03-01

    activities, as well as other causes of sedimentation (e.g., agricultural practices, storm events, tidal flows). BACKGROUND AND PROBLEM: Many naturally...effects originating from many sources (e.g., agriculture , storm event, tidal flows) on multiple aquatic species and life stages. Multiple experimental

  17. Source characterization of urban particles from meat smoking activities in Chongqing, China using single particle aerosol mass spectrometry.

    PubMed

    Chen, Yang; Wenger, John C; Yang, Fumo; Cao, Junji; Huang, Rujin; Shi, Guangming; Zhang, Shumin; Tian, Mi; Wang, Huanbo

    2017-09-01

    A Single Particle Aerosol Mass Spectrometer (SPAMS) was deployed in the urban area of Chongqing to characterize the particles present during a severe particulate pollution event that occurred in winter 2014-2015. The measurements were made at a time when residents engaged in traditional outdoor meat smoking activities to preserve meat before the Chinese Spring Festival. The measurement period was predominantly characterized by stagnant weather conditions, highly elevated levels of PM2.5, and low visibility. Eleven major single particle types were identified, with over 92.5% of the particles attributed to biomass burning emissions. Most of the particle types showed appreciable signs of aging in the stagnant air conditions. To simulate the meat smoking activities, a series of controlled smoldering experiments was conducted using freshly cut pine and cypress branches, both with and without wood logs. SPAMS data obtained from these experiments revealed a number of biomass burning particle types, including an elemental and organic carbon (ECOC) type that proved to be the most suitable marker for meat smoking activities. The traditional activity of making preserved meat in southwestern China is shown here to be a major source of particulate pollution. Improved measures to reduce emissions from the smoking of meat should be introduced to improve air quality in regions where meat smoking activity prevails. Copyright © 2017 Elsevier Ltd. All rights reserved.

  18. Method for distributed agent-based non-expert simulation of manufacturing process behavior

    DOEpatents

    Ivezic, Nenad; Potok, Thomas E.

    2004-11-30

    A method for distributed agent-based non-expert simulation of manufacturing process behavior on a single-processor computer comprises the steps of: object modeling a manufacturing technique having a plurality of processes; associating a distributed agent with each process; and programming each agent to respond to discrete events corresponding to the manufacturing technique, wherein each discrete event triggers a programmed response. The method can further comprise the step of transmitting the discrete events to each agent in a message loop. In addition, the programming step comprises the step of conditioning each agent to respond to a discrete event selected from the group consisting of a clock tick message, a resources received message, and a request for output production message.
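
    A minimal sketch of the message-loop pattern described in the claims, with hypothetical agent and message names (an illustration of the idea, not the patented implementation):

      import heapq

      class ProcessAgent:
          """One agent per manufacturing process; reacts only to discrete events."""
          def __init__(self, name):
              self.name = name
              self.stock = 0
              self.produced = 0

          def handle(self, event, payload):
              if event == "clock_tick":
                  pass                          # could age inventory, log state, etc.
              elif event == "resources_received":
                  self.stock += payload
              elif event == "request_output":
                  made = min(self.stock, payload)
                  self.stock -= made
                  self.produced += made

      def run(agents, events):
          """events: list of (time, agent_name, event, payload); a simple message
          loop delivering discrete events in time order."""
          heapq.heapify(events)
          while events:
              t, name, event, payload = heapq.heappop(events)
              agents[name].handle(event, payload)

      agents = {"milling": ProcessAgent("milling"), "assembly": ProcessAgent("assembly")}
      run(agents, [(1, "milling", "resources_received", 5),
                   (2, "milling", "request_output", 3),
                   (2, "assembly", "resources_received", 3),
                   (3, "assembly", "request_output", 2),
                   (4, "assembly", "clock_tick", 0)])
      print({n: (a.stock, a.produced) for n, a in agents.items()})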

  19. Exact subthreshold integration with continuous spike times in discrete-time neural network simulations.

    PubMed

    Morrison, Abigail; Straube, Sirko; Plesser, Hans Ekkehard; Diesmann, Markus

    2007-01-01

    Very large networks of spiking neurons can be simulated efficiently in parallel under the constraint that spike times are bound to an equidistant time grid. Within this scheme, the subthreshold dynamics of a wide class of integrate-and-fire-type neuron models can be integrated exactly from one grid point to the next. However, the loss in accuracy caused by restricting spike times to the grid can have undesirable consequences, which has led to interest in interpolating spike times between the grid points to retrieve an adequate representation of network dynamics. We demonstrate that the exact integration scheme can be combined naturally with off-grid spike events found by interpolation. We show that by exploiting the existence of a minimal synaptic propagation delay, the need for a central event queue is removed, so that the precision of event-driven simulation on the level of single neurons is combined with the efficiency of time-driven global scheduling. Further, for neuron models with linear subthreshold dynamics, even local event queuing can be avoided, resulting in much greater efficiency on the single-neuron level. These ideas are exemplified by two implementations of a widely used neuron model. We present a measure for the efficiency of network simulations in terms of their integration error and show that for a wide range of input spike rates, the novel techniques we present are both more accurate and faster than standard techniques.
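
    A minimal sketch of the two ingredients described above for a leaky integrate-and-fire neuron with constant drive: exact subthreshold propagation from one grid point to the next, plus an analytically computed off-grid threshold-crossing time (this illustrates the general scheme only, not the specific implementations discussed in the paper):

      import math

      # exact subthreshold solution of tau dV/dt = -(V - V_inf) between grid points
      tau, h = 10.0, 1.0            # membrane time constant and grid step (ms)
      V_th, V_reset, V_inf = -50.0, -70.0, -45.0   # threshold, reset, asymptotic voltage

      def step_exact(V):
          """Propagate the membrane potential exactly over one grid step h."""
          return V_inf + (V - V_inf) * math.exp(-h / tau)

      def crossing_time(V):
          """If the threshold is crossed within the next step, return the exact
          offset 0 <= s < h of the crossing; otherwise return None."""
          if step_exact(V) < V_th:
              return None
          # solve V_inf + (V - V_inf) * exp(-s/tau) = V_th for s
          return tau * math.log((V_inf - V) / (V_inf - V_th))

      V, t = -70.0, 0.0
      spikes = []
      while t < 60.0:
          s = crossing_time(V)
          if s is not None:
              spikes.append(t + s)      # continuous (off-grid) spike time
              V = V_reset               # reset, then decay over the rest of the step
              V = V_inf + (V - V_inf) * math.exp(-(h - s) / tau)
          else:
              V = step_exact(V)
          t += h

      print("spike times (ms):", [round(ts, 3) for ts in spikes])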

  20. The elementary events of Ca2+ release elicited by membrane depolarization in mammalian muscle

    PubMed Central

    Csernoch, L; Zhou, J; Stern, M D; Brum, G; Ríos, E

    2004-01-01

    Cytosolic [Ca2+] transients elicited by voltage clamp depolarization were examined by confocal line scanning of rat skeletal muscle fibres. Ca2+ sparks were observed in the fibres' membrane-permeabilized ends, but not in responses to voltage in the membrane-intact area. Elementary events of the depolarization-evoked response could be separated either at low voltages (near −50 mV) or at −20 mV in partially inactivated cells. These were of lower amplitude, narrower and of much longer duration than sparks, similar to ‘lone embers’ observed in the permeabilized segments. Their average amplitude was 0.19 and spatial half-width 1.3 μm. Other parameters depended on voltage. At −50 mV average duration was 111 ms and latency 185 ms. At −20 mV duration was 203 ms and latency 24 ms. Ca2+ release current, calculated on an average of events, was nearly steady at 0.5–0.6 pA. Accordingly, simulations of the fluorescence event elicited by a subresolution source of 0.5 pA open for 100 ms had morphology similar to the experimental average. Because 0.5 pA is approximately the current measured for single RyR channels in physiological conditions, the elementary fluorescence events in rat muscle probably reflect opening of a single RyR channel. A reconstruction of cell-averaged release flux at −20 mV based on the observed distribution of latencies and calculated elementary release had qualitatively correct but slower kinetics than the release flux in prior whole-cell measurements. The qualitative agreement indicates that global Ca2+ release flux results from summation of these discrete events. The quantitative discrepancies suggest that the partial inactivation strategy may lead to events of greater duration than those occurring physiologically in fully polarized cells. PMID:14990680

  1. Synthetic earthquake catalogs simulating seismic activity in the Corinth Gulf, Greece, fault system

    NASA Astrophysics Data System (ADS)

    Console, Rodolfo; Carluccio, Roberto; Papadimitriou, Eleftheria; Karakostas, Vassilis

    2015-01-01

    The characteristic earthquake hypothesis is the basis of time-dependent modeling of earthquake recurrence on major faults. However, the characteristic earthquake hypothesis is not strongly supported by observational data. Few fault segments have long historical or paleoseismic records of individually dated ruptures, and when data and parameter uncertainties are allowed for, the form of the recurrence distribution is difficult to establish. This is the case, for instance, of the Corinth Gulf Fault System (CGFS), for which documents about strong earthquakes exist for at least 2000 years, although they can be considered complete for M ≥ 6.0 only for the latest 300 years, during which only a few characteristic earthquakes are reported for individual fault segments. The use of a physics-based earthquake simulator has allowed the production of catalogs lasting 100,000 years and containing more than 500,000 events of magnitudes ≥ 4.0. The main features of our simulation algorithm are (1) an average slip rate released by earthquakes for every single segment in the investigated fault system, (2) heuristic procedures for rupture growth and arrest, leading to a self-organized earthquake magnitude distribution, (3) the interaction between earthquake sources, and (4) the effect of minor earthquakes in redistributing stress. The application of our simulation algorithm to the CGFS has shown realistic features in time, space, and magnitude behavior of the seismicity. These features include long-term periodicity of strong earthquakes, short-term clustering of both strong and smaller events, and a realistic earthquake magnitude distribution departing from the Gutenberg-Richter distribution in the higher-magnitude range.
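
    For orientation only, the sketch below draws magnitudes from a doubly truncated Gutenberg-Richter distribution by inverse-CDF sampling, the reference distribution that the synthetic catalogs are reported to depart from at high magnitudes; it is not the authors' physics-based simulator, and the b-value and magnitude bounds are assumed.

```python
import numpy as np

def sample_gutenberg_richter(n, b=1.0, m_min=4.0, m_max=7.0, rng=None):
    """Draw n magnitudes from a doubly truncated Gutenberg-Richter
    distribution by inverse-CDF sampling (beta = b * ln 10)."""
    rng = np.random.default_rng(rng)
    beta = b * np.log(10.0)
    u = rng.random(n)
    c = 1.0 - np.exp(-beta * (m_max - m_min))   # truncation normalisation
    return m_min - np.log(1.0 - u * c) / beta

# 500,000 events with M >= 4.0, matching the catalog size quoted above
mags = sample_gutenberg_richter(500_000, b=1.0, m_min=4.0)
print(mags.min(), mags.max(), np.mean(mags >= 6.0))
```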

  2. Timing performance of the silicon PET insert probe

    PubMed Central

    Studen, A.; Burdette, D.; Chesi, E.; Cindro, V.; Clinthorne, N. H.; Cochran, E.; Grošičar, B.; Kagan, H.; Lacasta, C.; Linhart, V.; Mikuž, M.; Stankova, V.; Weilhammer, P.; Žontar, D.

    2010-01-01

    Simulation indicates that the PET image could be improved by upgrading a conventional ring with a probe placed close to the imaged object. In this paper, timing issues related to a PET probe using high-resistivity silicon as a detector material are addressed. The final probe will consist of several (four to eight) 1-mm thick layers of silicon detectors, segmented into 1 × 1 mm2 pads, each pad equivalent to an independent p + nn+ diode. A proper matching of events in silicon with events of the external ring can be achieved with a good timing resolution. To estimate the timing performance, measurements were performed on a simplified model probe, consisting of a single 1-mm thick detector with 256 square pads (1.4 mm side), coupled with two VATAGP7s, application-specific integrated circuits. The detector material and electronics are the same as those that will be used for the final probe. The model was exposed to 511 keV annihilation photons from a 22Na source, and a scintillator (LYSO)–PMT assembly was used as a timing reference. Results were compared with the simulation, consisting of four parts: (i) GEANT4 implemented realistic tracking of electrons excited by annihilation photon interactions in silicon, (ii) calculation of propagation of secondary ionisation (electron–hole pairs) in the sensor, (iii) estimation of the shape of the current pulse induced on surface electrodes and (iv) simulation of the first electronics stage. Very good agreement between the simulation and the measurements was found. Both indicate reliable performance of the final probe at timing windows down to 20 ns. PMID:20215445

  3. Timing performance of the silicon PET insert probe.

    PubMed

    Studen, A; Burdette, D; Chesi, E; Cindro, V; Clinthorne, N H; Cochran, E; Grosicar, B; Kagan, H; Lacasta, C; Linhart, V; Mikuz, M; Stankova, V; Weilhammer, P; Zontar, D

    2010-01-01

    Simulation indicates that the PET image could be improved by upgrading a conventional ring with a probe placed close to the imaged object. In this paper, timing issues related to a PET probe using high-resistivity silicon as a detector material are addressed. The final probe will consist of several (four to eight) 1-mm thick layers of silicon detectors, segmented into 1 x 1 mm(2) pads, each pad equivalent to an independent p + nn+ diode. A proper matching of events in silicon with events of the external ring can be achieved with a good timing resolution. To estimate the timing performance, measurements were performed on a simplified model probe, consisting of a single 1-mm thick detector with 256 square pads (1.4 mm side), coupled with two VATAGP7s, application-specific integrated circuits. The detector material and electronics are the same as those that will be used for the final probe. The model was exposed to 511 keV annihilation photons from a (22)Na source, and a scintillator (LYSO)-PMT assembly was used as a timing reference. Results were compared with the simulation, consisting of four parts: (i) GEANT4 implemented realistic tracking of electrons excited by annihilation photon interactions in silicon, (ii) calculation of propagation of secondary ionisation (electron-hole pairs) in the sensor, (iii) estimation of the shape of the current pulse induced on surface electrodes and (iv) simulation of the first electronics stage. Very good agreement between the simulation and the measurements was found. Both indicate reliable performance of the final probe at timing windows down to 20 ns.

  4. The Chandra Source Catalog: Source Variability

    NASA Astrophysics Data System (ADS)

    Nowak, Michael; Rots, A. H.; McCollough, M. L.; Primini, F. A.; Glotfelty, K. J.; Bonaventura, N. R.; Chen, J. C.; Davis, J. E.; Doe, S. M.; Evans, J. D.; Fabbiano, G.; Galle, E.; Gibbs, D. G.; Grier, J. D.; Hain, R.; Hall, D. M.; Harbo, P. N.; He, X.; Houck, J. C.; Karovska, M.; Lauer, J.; McDowell, J. C.; Miller, J. B.; Mitschang, A. W.; Morgan, D. L.; Nichols, J. S.; Plummer, D. A.; Refsdal, B. L.; Siemiginowska, A. L.; Sundheim, B. A.; Tibbetts, M. S.; Van Stone, D. W.; Winkelman, S. L.; Zografou, P.

    2009-01-01

    The Chandra Source Catalog (CSC) contains fields of view that have been studied with individual, uninterrupted observations that span integration times ranging from 1 ksec to 160 ksec, a large number of which have received (multiple) repeat observations days to years later. The CSC thus offers an unprecedented look at the variability of the X-ray sky over a broad range of time scales, and across a wide diversity of variable X-ray sources: stars in the local galactic neighborhood, galactic and extragalactic X-ray binaries, Active Galactic Nuclei, etc. Here we describe the methods used to identify and quantify source variability within a single observation, and the methods used to assess the variability of a source when detected in multiple, individual observations. Three tests are used to detect source variability within a single observation: the Kolmogorov-Smirnov test and its variant, the Kuiper test, and a Bayesian approach originally suggested by Gregory and Loredo. The latter test not only provides an indicator of variability, but is also used to create a best estimate of the variable lightcurve shape. We assess the performance of these tests via simulation of statistically stationary, variable processes with arbitrary input power spectral densities (here we concentrate on results of red noise simulations) at a variety of mean count rates and fractional root mean square variabilities relevant to CSC sources. We also assess the false positive rate via simulations of constant sources whose sole source of fluctuation is Poisson noise. We compare these simulations to a preliminary assessment of the variability found in real CSC sources, and estimate the variability sensitivities of the CSC.
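
    The within-observation variability test can be illustrated with a one-sample Kolmogorov-Smirnov test of photon arrival times against a uniform distribution (i.e., a constant count rate); this is a hedged sketch of the idea, not the CSC pipeline, and all rates and exposure times are invented for the example.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
T = 50_000.0  # exposure (s)

# Constant source: photon arrival times uniform over the exposure
t_const = np.sort(rng.uniform(0.0, T, size=200))

# Variable source: rate doubles in the second half of the observation
n1, n2 = 70, 140
t_var = np.sort(np.concatenate([rng.uniform(0.0, T / 2, n1),
                                rng.uniform(T / 2, T, n2)]))

for name, t in [("constant", t_const), ("variable", t_var)]:
    # One-sample K-S test of arrival times against a uniform distribution,
    # i.e. against the null hypothesis of a steady count rate
    d, p = stats.kstest(t, stats.uniform(loc=0.0, scale=T).cdf)
    print(f"{name}: D = {d:.3f}, p = {p:.3g}")
```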

  5. The Chandra Source Catalog: Source Variability

    NASA Astrophysics Data System (ADS)

    Nowak, Michael; Rots, A. H.; McCollough, M. L.; Primini, F. A.; Glotfelty, K. J.; Bonaventura, N. R.; Chen, J. C.; Davis, J. E.; Doe, S. M.; Evans, J. D.; Evans, I.; Fabbiano, G.; Galle, E. C.; Gibbs, D. G., II; Grier, J. D.; Hain, R.; Hall, D. M.; Harbo, P. N.; He, X.; Houck, J. C.; Karovska, M.; Lauer, J.; McDowell, J. C.; Miller, J. B.; Mitschang, A. W.; Morgan, D. L.; Nichols, J. S.; Plummer, D. A.; Refsdal, B. L.; Siemiginowska, A. L.; Sundheim, B. A.; Tibbetts, M. S.; van Stone, D. W.; Winkelman, S. L.; Zografou, P.

    2009-09-01

    The Chandra Source Catalog (CSC) contains fields of view that have been studied with individual, uninterrupted observations that span integration times ranging from 1 ksec to 160 ksec, and a large number of which have received (multiple) repeat observations days to years later. The CSC thus offers an unprecedented look at the variability of the X-ray sky over a broad range of time scales, and across a wide diversity of variable X-ray sources: stars in the local galactic neighborhood, galactic and extragalactic X-ray binaries, Active Galactic Nuclei, etc. Here we describe the methods used to identify and quantify source variability within a single observation, and the methods used to assess the variability of a source when detected in multiple, individual observations. Three tests are used to detect source variability within a single observation: the Kolmogorov-Smirnov test and its variant, the Kuiper test, and a Bayesian approach originally suggested by Gregory and Loredo. The latter test not only provides an indicator of variability, but is also used to create a best estimate of the variable lightcurve shape. We assess the performance of these tests via simulation of statistically stationary, variable processes with arbitrary input power spectral densities (here we concentrate on results of red noise simulations) at variety of mean count rates and fractional root mean square variabilities relevant to CSC sources. We also assess the false positive rate via simulations of constant sources whose sole source of fluctuation is Poisson noise. We compare these simulations to an assessment of the variability found in real CSC sources, and estimate the variability sensitivities of the CSC.

  6. Characterizing single isolated radiation-damage events from molecular dynamics via virtual diffraction methods

    DOE PAGES

    Stewart, James A.; Brookman, G.; Price, Patrick Michael; ...

    2018-04-25

    In this study, the evolution and characterization of single-isolated-ion-strikes are investigated by combining atomistic simulations with selected-area electron diffraction (SAED) patterns generated from these simulations. Five molecular dynamics simulations are performed for a single 20 keV primary knock-on atom in bulk crystalline Si. The resulting cascade damage is characterized in two complementary ways. First, the individual cascade events are conventionally quantified through the evolution of the number of defects and the atomic (volumetric) strain associated with these defect structures. These results show that (i) the radiation damage produced is consistent with the Norgett, Robinson, and Torrens model of damage production and (ii) there is a net positive volumetric strain associated with the cascade structures. Second, virtual SAED patterns are generated for the resulting cascade-damaged structures along several zone axes. The analysis of the corresponding diffraction patterns shows the SAED spots approximately doubling in size, on average, due to broadening induced by the defect structures. Furthermore, the SAED spots are observed to exhibit an average radial outward shift between 0.33% and 0.87% depending on the zone axis. Finally, this characterization approach, as utilized here, is a preliminary investigation in developing methodologies and opportunities to link experimental observations with atomistic simulations to elucidate microstructural damage states.
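
    A hedged worked example of the Norgett-Robinson-Torrens (NRT) estimate referenced above: the damage-energy fraction and the Si displacement threshold used below are typical literature values assumed for illustration, not numbers taken from these simulations.

```python
# NRT estimate of Frenkel pairs produced by a 20 keV primary knock-on atom:
# N_d = 0.8 * T_dam / (2 * E_d). The damage-energy fraction (Lindhard
# partition) and the Si displacement threshold are assumed typical values,
# not numbers extracted from the study above.
E_pka = 20e3        # PKA energy (eV)
f_dam = 0.8         # assumed fraction of PKA energy going into damage
E_d = 21.0          # assumed displacement threshold energy for Si (eV)

T_dam = f_dam * E_pka
N_d = 0.8 * T_dam / (2.0 * E_d)
print(f"NRT displacements ~ {N_d:.0f}")   # on the order of a few hundred
```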

  7. Characterizing single isolated radiation-damage events from molecular dynamics via virtual diffraction methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stewart, James A.; Brookman, G.; Price, Patrick Michael

    In this study, the evolution and characterization of single-isolated-ion-strikes are investigated by combining atomistic simulations with selected-area electron diffraction (SAED) patterns generated from these simulations. Five molecular dynamics simulations are performed for a single 20 keV primary knock-on atom in bulk crystalline Si. The resulting cascade damage is characterized in two complementary ways. First, the individual cascade events are conventionally quantified through the evolution of the number of defects and the atomic (volumetric) strain associated with these defect structures. These results show that (i) the radiation damage produced is consistent with the Norgett, Robinson, and Torrens model of damage production and (ii) there is a net positive volumetric strain associated with the cascade structures. Second, virtual SAED patterns are generated for the resulting cascade-damaged structures along several zone axes. The analysis of the corresponding diffraction patterns shows the SAED spots approximately doubling in size, on average, due to broadening induced by the defect structures. Furthermore, the SAED spots are observed to exhibit an average radial outward shift between 0.33% and 0.87% depending on the zone axis. Finally, this characterization approach, as utilized here, is a preliminary investigation in developing methodologies and opportunities to link experimental observations with atomistic simulations to elucidate microstructural damage states.

  8. Characterizing single isolated radiation-damage events from molecular dynamics via virtual diffraction methods

    NASA Astrophysics Data System (ADS)

    Stewart, J. A.; Brookman, G.; Price, P.; Franco, M.; Ji, W.; Hattar, K.; Dingreville, R.

    2018-04-01

    The evolution and characterization of single-isolated-ion-strikes are investigated by combining atomistic simulations with selected-area electron diffraction (SAED) patterns generated from these simulations. Five molecular dynamics simulations are performed for a single 20 keV primary knock-on atom in bulk crystalline Si. The resulting cascade damage is characterized in two complementary ways. First, the individual cascade events are conventionally quantified through the evolution of the number of defects and the atomic (volumetric) strain associated with these defect structures. These results show that (i) the radiation damage produced is consistent with the Norgett, Robinson, and Torrens model of damage production and (ii) there is a net positive volumetric strain associated with the cascade structures. Second, virtual SAED patterns are generated for the resulting cascade-damaged structures along several zone axes. The analysis of the corresponding diffraction patterns shows the SAED spots approximately doubling in size, on average, due to broadening induced by the defect structures. Furthermore, the SAED spots are observed to exhibit an average radial outward shift between 0.33% and 0.87% depending on the zone axis. This characterization approach, as utilized here, is a preliminary investigation in developing methodologies and opportunities to link experimental observations with atomistic simulations to elucidate microstructural damage states.

  9. Azimuthal Dependence of the Ground Motion Variability from Scenario Modeling of the 2014 Mw6.0 South Napa, California, Earthquake Using an Advanced Kinematic Source Model

    NASA Astrophysics Data System (ADS)

    Gallovič, F.

    2017-09-01

    Strong ground motion simulations require a physically plausible earthquake source model. Here, I present the application of such a kinematic model introduced originally by Ruiz et al. (Geophys J Int 186:226-244, 2011). The model is constructed to inherently provide synthetics with the desired omega-squared spectral decay in the full frequency range. The source is composed of randomly distributed overlapping subsources with fractal number-size distribution. The position of the subsources can be constrained by prior knowledge of major asperities (stemming, e.g., from slip inversions), or can be completely random. From an earthquake physics point of view, the model includes positive correlation between slip and rise time as found in dynamic source simulations. Rupture velocity and rise time follow the local S-wave velocity profile, so that the rupture slows down and rise times increase close to the surface, avoiding unrealistically strong ground motions. Rupture velocity can also have random variations, which result in an irregular rupture front while satisfying the causality principle. This advanced kinematic broadband source model is freely available and can be easily incorporated into any numerical wave propagation code, as the source is described by spatially distributed slip rate functions, not requiring any stochastic Green's functions. The source model has been previously validated against the observed data from the very shallow unilateral 2014 Mw6 South Napa, California, earthquake; the model reproduces well the observed data including the near-fault directivity (Seism Res Lett 87:2-14, 2016). The performance of the source model is shown here on scenario simulations for the same event. In particular, synthetics are compared with existing ground motion prediction equations (GMPEs), emphasizing the azimuthal dependence of the between-event ground motion variability. I propose a simple model reproducing the azimuthal variations of the between-event ground motion variability, providing an insight into possible refinement of GMPEs' functional forms.

  10. Binaural room simulation

    NASA Technical Reports Server (NTRS)

    Lehnert, H.; Blauert, Jens; Pompetzki, W.

    1991-01-01

    In everyday listening, the auditory event perceived by a listener is determined not only by the signal that a sound source emits but also by a variety of environmental parameters. These parameters are the position, orientation and directional characteristics of the sound source, the listener's position and orientation, the geometrical and acoustical properties of surfaces which affect the sound field and the sound propagation properties of the surrounding fluid. A complete set of these parameters can be called an Acoustic Environment. If the auditory event perceived by a listener is manipulated in such a way that the listener is shifted acoustically into a different acoustic environment without physically moving, a Virtual Acoustic Environment has been created. Here, we deal with a special technique to set up nearly arbitrary Virtual Acoustic Environments, the Binaural Room Simulation. The purpose of the Binaural Room Simulation is to compute the binaural impulse response related to a virtual acoustic environment taking into account all parameters mentioned above. One possible way to describe a Virtual Acoustic Environment is the concept of virtual sound sources. Each of the virtual sources emits a certain signal which is correlated but not necessarily identical with the signal emitted by the direct sound source. If source and receiver do not move, the acoustic environment becomes a linear time-invariant system. Then, the Binaural Impulse Response from the source to a listener's eardrums contains all relevant auditory information related to the Virtual Acoustic Environment. Listening into the simulated environment can easily be achieved by convolving the Binaural Impulse Response with dry signals and presenting the results via headphones.
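
    The final auralization step described above reduces to a convolution; the sketch below shows it with placeholder arrays standing in for a dry recording and a measured or simulated binaural room impulse response.

```python
import numpy as np
from scipy.signal import fftconvolve

# Minimal sketch of the auralization step: convolve a dry (anechoic) signal
# with the left/right channels of a binaural room impulse response (BRIR).
# Both arrays below are random stand-ins, not measured data.
fs = 48_000
dry = np.random.randn(fs)                 # stand-in for a 1 s dry recording
brir = np.random.randn(int(0.5 * fs), 2)  # stand-in for a 0.5 s BRIR (L, R)
brir *= np.exp(-np.arange(len(brir)) / (0.1 * fs))[:, None]  # crude decay

left = fftconvolve(dry, brir[:, 0])
right = fftconvolve(dry, brir[:, 1])
binaural = np.stack([left, right], axis=1)  # two-channel signal for headphones
print(binaural.shape)
```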

  11. Anomalous single-electron transfer in common-gate quadruple-dot single-electron devices with asymmetric junction capacitances

    NASA Astrophysics Data System (ADS)

    Imai, Shigeru; Ito, Masato

    2018-06-01

    In this paper, anomalous single-electron transfer in common-gate quadruple-dot turnstile devices with asymmetric junction capacitances is revealed. That is, the islands have the same total number of excess electrons at high and low gate voltages of the swing that transfers a single electron. In another situation, two electrons enter the islands from the source and two electrons leave the islands for the source and drain during a gate voltage swing cycle. First, stability diagrams of the turnstile devices are presented. Then, sequences of single-electron tunneling events by gate voltage swings are investigated, which demonstrate the above-mentioned anomalous single-electron transfer between the source and the drain. The anomalous single-electron transfer can be understood by regarding the four islands as “three virtual islands and a virtual source or drain electrode of a virtual triple-dot device”. The anomalous behaviors of the four islands are explained by the normal behavior of the virtual islands transferring a single electron and the behavior of the virtual electrode.

  12. Localizing gravitational wave sources with single-baseline atom interferometers

    DOE PAGES

    Graham, Peter W.; Jung, Sunghoon

    2018-01-31

    Localizing sources on the sky is crucial for realizing the full potential of gravitational waves for astronomy, astrophysics, and cosmology. In this paper, we show that the midfrequency band, roughly 0.03 to 10 Hz, has significant potential for angular localization. The angular location is measured through the changing Doppler shift as the detector orbits the Sun. This band maximizes the effect since these are the highest frequencies in which sources live for several months. Atom interferometer detectors can observe in the midfrequency band, and even with just a single baseline they can exploit this effect for sensitive angular localization. The single baseline orbits the Earth and the Sun, causing it to reorient and change position significantly during the lifetime of the source, making it similar to having multiple baselines/detectors. For example, atomic detectors could predict the location of upcoming black hole or neutron star merger events with sufficient accuracy to allow optical and other electromagnetic telescopes to observe these events simultaneously. Thus, midband atomic detectors are complementary to other gravitational wave detectors and will help complete the observation of a broad range of the gravitational spectrum.

  13. Localizing gravitational wave sources with single-baseline atom interferometers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Graham, Peter W.; Jung, Sunghoon

    Localizing sources on the sky is crucial for realizing the full potential of gravitational waves for astronomy, astrophysics, and cosmology. In this paper, we show that the midfrequency band, roughly 0.03 to 10 Hz, has significant potential for angular localization. The angular location is measured through the changing Doppler shift as the detector orbits the Sun. This band maximizes the effect since these are the highest frequencies in which sources live for several months. Atom interferometer detectors can observe in the midfrequency band, and even with just a single baseline they can exploit this effect for sensitive angular localization. The single baseline orbits the Earth and the Sun, causing it to reorient and change position significantly during the lifetime of the source, making it similar to having multiple baselines/detectors. For example, atomic detectors could predict the location of upcoming black hole or neutron star merger events with sufficient accuracy to allow optical and other electromagnetic telescopes to observe these events simultaneously. Thus, midband atomic detectors are complementary to other gravitational wave detectors and will help complete the observation of a broad range of the gravitational spectrum.

  14. Multiple Component Event-Related Potential (mcERP) Estimation

    NASA Technical Reports Server (NTRS)

    Knuth, K. H.; Clanton, S. T.; Shah, A. S.; Truccolo, W. A.; Ding, M.; Bressler, S. L.; Trejo, L. J.; Schroeder, C. E.; Clancy, Daniel (Technical Monitor)

    2002-01-01

    We show how model-based estimation of the neural sources responsible for transient neuroelectric signals can be improved by the analysis of single trial data. Previously, we showed that a multiple component event-related potential (mcERP) algorithm can extract the responses of individual sources from recordings of a mixture of multiple, possibly interacting, neural ensembles. McERP also estimated single-trial amplitudes and onset latencies, thus allowing more accurate estimation of ongoing neural activity during an experimental trial. The mcERP algorithm is related to infomax independent component analysis (ICA); however, the underlying signal model is more physiologically realistic in that a component is modeled as a stereotypic waveshape varying both in amplitude and onset latency from trial to trial. The result is a model that reflects quantities of interest to the neuroscientist. Here we demonstrate that the mcERP algorithm provides more accurate results than more traditional methods such as factor analysis and the more recent ICA. Whereas factor analysis assumes the sources are orthogonal and ICA assumes the sources are statistically independent, the mcERP algorithm makes no such assumptions, thus allowing investigators to examine interactions among components by estimating the properties of single-trial responses.
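
    A minimal generative sketch of the single-trial model described above (a stereotypic component waveshape scaled in amplitude and shifted in latency on every trial, plus ongoing background activity) is given below; it illustrates the signal model only, not the mcERP estimation algorithm, and all waveshapes and noise levels are invented.

```python
import numpy as np

# Generative sketch of the mcERP single-trial model: each trial is a
# stereotyped component waveshape with a trial-specific amplitude and
# onset latency, plus ongoing background activity (noise).
rng = np.random.default_rng(0)
fs, T = 1000, 600                      # sampling rate (Hz), samples per trial
t = np.arange(T) / fs

def component(t, center=0.2, width=0.05):
    """Stereotypic waveshape (here simply a Gaussian bump)."""
    return np.exp(-0.5 * ((t - center) / width) ** 2)

n_trials = 50
trials = np.empty((n_trials, T))
for i in range(n_trials):
    amp = 1.0 + 0.3 * rng.standard_normal()       # trial-to-trial amplitude
    lat = 0.02 * rng.standard_normal()            # trial-to-trial latency (s)
    trials[i] = amp * component(t - lat) + 0.5 * rng.standard_normal(T)

erp = trials.mean(axis=0)   # a conventional average blurs the latency jitter
print(trials.shape, erp.max())
```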

  15. An Independent Assessment of Anthropogenic Attribution Statements for Recent Extreme Temperature and Rainfall Events

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Angélil, Oliver; Stone, Dáithí; Wehner, Michael

    The annual "State of the Climate" report, published in the Bulletin of the American Meteorological Society (BAMS), has included a supplement since 2011 composed of brief analyses of the human influence on recent major extreme weather events. There are now several dozen extreme weather events examined in these supplements, but these studies have all differed in their data sources as well as their approaches to defining the events, analyzing the events, and the consideration of the role of anthropogenic emissions. This study reexamines most of these events using a single analytical approach and a single set of climate model andmore » observational data sources. In response to recent studies recommending the importance of using multiple methods for extreme weather event attribution, results are compared from these analyses to those reported in the BAMS supplements collectively, with the aim of characterizing the degree to which the lack of a common methodological framework may or may not influence overall conclusions. Results are broadly similar to those reported earlier for extreme temperature events but disagree for a number of extreme precipitation events. Based on this, it is advised that the lack of comprehensive uncertainty analysis in recent extreme weather attribution studies is important and should be considered when interpreting results, but as yet it has not introduced a systematic bias across these studies.« less

  16. An Independent Assessment of Anthropogenic Attribution Statements for Recent Extreme Temperature and Rainfall Events

    DOE PAGES

    Angélil, Oliver; Stone, Dáithí; Wehner, Michael; ...

    2016-12-16

    The annual "State of the Climate" report, published in the Bulletin of the American Meteorological Society (BAMS), has included a supplement since 2011 composed of brief analyses of the human influence on recent major extreme weather events. There are now several dozen extreme weather events examined in these supplements, but these studies have all differed in their data sources as well as their approaches to defining the events, analyzing the events, and the consideration of the role of anthropogenic emissions. This study reexamines most of these events using a single analytical approach and a single set of climate model andmore » observational data sources. In response to recent studies recommending the importance of using multiple methods for extreme weather event attribution, results are compared from these analyses to those reported in the BAMS supplements collectively, with the aim of characterizing the degree to which the lack of a common methodological framework may or may not influence overall conclusions. Results are broadly similar to those reported earlier for extreme temperature events but disagree for a number of extreme precipitation events. Based on this, it is advised that the lack of comprehensive uncertainty analysis in recent extreme weather attribution studies is important and should be considered when interpreting results, but as yet it has not introduced a systematic bias across these studies.« less

  17. Investigation of the Carbon Arc Source as an AM0 Solar Simulator for Use in Characterizing Multi-Junction Solar Cells

    NASA Technical Reports Server (NTRS)

    Xu, Jianzeng; Woodyward, James R.

    2005-01-01

    The operation of multi-junction solar cells used for production of space power is critically dependent on the spectral irradiance of the illuminating light source. Unlike single-junction cells, where the spectral irradiance of the simulator and computational techniques may be used to optimize cell designs, optimization of multi-junction solar cell designs requires a solar simulator with a spectral irradiance that closely matches AM0.

  18. Simulating the Evolving Behavior of Secondary Slow Slip Fronts

    NASA Astrophysics Data System (ADS)

    Peng, Y.; Rubin, A. M.

    2017-12-01

    High-resolution tremor catalogs of slow slip events reveal secondary slow slip fronts behind the main front that repetitively occupy the same source area during a single episode. These repetitive fronts are most often observed in regions with high tremor density. Their recurrence intervals gradually increase from being too short to be tidally modulated (tens of minutes) to being close to tidal periods (about 12 or 24 hours). This could be explained by a decreasing loading rate from creep in the surrounding regions (with few or no observable tremor events) as the main front passes by. As the recurrence intervals of the fronts increase, eventually they lock in on the tidal periods. We attempt to simulate this numerically using a rate-and-state friction law that transitions from velocity-weakening at low slip speeds to velocity strengthening at high slip speeds. Many small circular patches with a cutoff velocity an order of magnitude higher than that of the background are randomly placed on the fault, in order to simulate the average properties of the high-density tremor zone. Preliminary results show that given reasonable parameters, this model produces similar propagation speeds of the forward-migrating main front inside and outside the high-density tremor zone, consistent with observations. We will explore the behavior of the secondary fronts that arise in this model, in relation to the local density of the small tremor-analog patches, the overall geometry of the tremor zone and the tides.

  19. Analysis of the French insurance market exposure to floods: a stochastic model combining river overflow and surface runoff

    NASA Astrophysics Data System (ADS)

    Moncoulon, D.; Labat, D.; Ardon, J.; Onfroy, T.; Leblois, E.; Poulard, C.; Aji, S.; Rémy, A.; Quantin, A.

    2013-07-01

    The analysis of flood exposure at a national scale for the French insurance market must combine the generation of a probabilistic event set of all possible but not yet observed flood situations with hazard and damage modeling. In this study, hazard and damage models are calibrated on a 1995-2012 historical event set, both for hazard results (river flow, flooded areas) and loss estimations. Thus, uncertainties in the deterministic estimation of a single event loss are known before simulating a probabilistic event set. To take into account at least 90% of the insured flood losses, the probabilistic event set must combine river overflow (small and large catchments) with surface runoff due to heavy rainfall on the slopes of the watershed. Indeed, internal studies of the CCR claims database have shown that approximately 45% of the insured flood losses are located inside the floodplains and 45% outside; the remaining 10% are due to sea-surge floods and groundwater rise. In this approach, two independent probabilistic methods are combined to create a single flood loss distribution: generation of fictive river flows based on the historical records of the river gauge network and generation of fictive rain fields on small catchments, calibrated on the 1958-2010 Météo-France rain database SAFRAN. All the events in the probabilistic event sets are simulated with the deterministic model. This hazard and damage distribution is used to simulate the flood losses at the national scale for an insurance company (MACIF) and to generate flood areas associated with hazard return periods. The flood maps concern river overflow and surface water runoff. Validation of these maps is conducted by comparison with the address-located claim data on a small catchment (downstream Argens).

  20. Network hydraulics inclusion in water quality event detection using multiple sensor stations data.

    PubMed

    Oliker, Nurit; Ostfeld, Avi

    2015-09-01

    Event detection is one of the most challenging current topics in water distribution systems analysis: how regular on-line hydraulic (e.g., pressure, flow) and water quality (e.g., pH, residual chlorine, turbidity) measurements at different network locations can be efficiently utilized to detect water quality contamination events. This study describes an integrated event detection model which combines data from multiple sensor stations with network hydraulics. To date, event detection modelling has typically been limited to a single sensor station location and dataset. Single sensor station models are detached from network hydraulics insights and as a result might be significantly exposed to false positive alarms. This work is aimed at reducing this limitation by integrating local and spatial hydraulic data understanding into an event detection model. The spatial analysis complements the local event detection effort by discovering events with lower signatures through the sensors' mutual hydraulic influences. The unique contribution of this study is in incorporating hydraulic simulation information into the overall event detection process of spatially distributed sensors. The methodology is demonstrated on two example applications using base runs and sensitivity analyses. Results show a clear advantage of the suggested model over single-sensor event detection schemes. Copyright © 2015 Elsevier Ltd. All rights reserved.

  1. SU-E-T-314: The Application of Cloud Computing in Pencil Beam Scanning Proton Therapy Monte Carlo Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Z; Gao, M

    Purpose: Monte Carlo simulation plays an important role for the proton Pencil Beam Scanning (PBS) technique. However, MC simulation demands high computing power and is limited to a few large proton centers that can afford a computer cluster. We study the feasibility of utilizing cloud computing in the MC simulation of PBS beams. Methods: A GATE/GEANT4 based MC simulation software was installed on a commercial cloud computing virtual machine (Linux 64-bits, Amazon EC2). Single spot Integral Depth Dose (IDD) curves and in-air transverse profiles were used to tune the source parameters to simulate an IBA machine. With the use of StarCluster software developed at MIT, a Linux cluster with 2–100 nodes can be conveniently launched in the cloud. A proton PBS plan was then exported to the cloud where the MC simulation was run. Results: The simulated PBS plan has a field size of 10 × 10 cm², 20 cm range, 10 cm modulation, and contains over 10,000 beam spots. EC2 instance type m1.medium was selected considering the CPU/memory requirement, and 40 instances were used to form a Linux cluster. To minimize cost, the master node was created as an on-demand instance and the worker nodes as spot instances. The hourly cost for the 40-node cluster was $0.63 and the projected cost for a 100-node cluster was $1.41. Ten million events were simulated to plot the PDD and profile, with each job containing 500k events. The simulation completed within 1 hour and an overall statistical uncertainty of < 2% was achieved. Good agreement between MC simulation and measurement was observed. Conclusion: Cloud computing is a cost-effective and easy-to-maintain platform to run proton PBS MC simulation. When proton MC packages such as GATE and TOPAS are combined with cloud computing, it will greatly facilitate the pursuit of PBS MC studies, especially for newly established proton centers or individual researchers.
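
    The throughput and cost figures quoted above can be checked with simple arithmetic on the stated numbers (10 million events split into 500k-event jobs, a 40-node cluster at $0.63 per hour, roughly one hour of wall time):

```python
# Arithmetic on the figures quoted above: 10 million events split into
# 500k-event jobs, run on a 40-node spot-instance cluster at $0.63/hour.
total_events = 10_000_000
events_per_job = 500_000
hourly_cost = 0.63      # USD for the whole 40-node cluster
wall_time_h = 1.0       # simulation reported to complete within ~1 hour

n_jobs = total_events // events_per_job
print(f"jobs: {n_jobs}")                                    # 20 jobs
print(f"approximate cost: ${hourly_cost * wall_time_h:.2f}")  # about $0.63
```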

  2. Two Wrongs Make a Right: Addressing Underreporting in Binary Data from Multiple Sources.

    PubMed

    Cook, Scott J; Blas, Betsabe; Carroll, Raymond J; Sinha, Samiran

    2017-04-01

    Media-based event data (i.e., data compiled from reporting by media outlets) are widely used in political science research. However, events of interest (e.g., strikes, protests, conflict) are often underreported by these primary and secondary sources, producing incomplete data that risks inconsistency and bias in subsequent analysis. While general strategies exist to help ameliorate this bias, these methods do not make full use of the information often available to researchers. Specifically, much of the event data used in the social sciences is drawn from multiple, overlapping news sources (e.g., Agence France-Presse, Reuters). Therefore, we propose a novel maximum likelihood estimator that corrects for misclassification in data arising from multiple sources. In the most general formulation of our estimator, researchers can specify separate sets of predictors for the true-event model and each of the misclassification models characterizing whether a source fails to report on an event. As such, researchers are able to accurately test theories on both the causes of and reporting on an event of interest. Simulations show that our technique regularly outperforms current strategies that neglect misclassification, the unique features of the data-generating process, or both. We also illustrate the utility of this method with a model of repression using the Social Conflict in Africa Database.
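
    The following is a simplified sketch of the kind of underreporting-corrected likelihood the abstract describes, restricted to two sources and covariate-free reporting probabilities; variable names and the data-generating values are illustrative, and the paper's estimator is more general (it allows covariates in each misclassification model).

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

# Sketch: a binary event observed by two media sources. True events follow a
# logistic model in x; each source independently reports a true event with
# its own probability and never reports a non-event. Simplified relative to
# the paper (no covariates in the reporting models); names are illustrative.
rng = np.random.default_rng(3)
n = 5000
x = rng.standard_normal(n)
beta_true = (-0.5, 1.0)
z = rng.random(n) < expit(beta_true[0] + beta_true[1] * x)   # latent events
r_true = (0.6, 0.4)                                          # reporting rates
y = np.column_stack([(rng.random(n) < r) & z for r in r_true])

def negloglik(theta):
    b0, b1, q1, q2 = theta
    p = expit(b0 + b1 * x)                            # P(true event | x)
    r = expit(np.array([q1, q2]))                     # reporting probabilities
    rep = np.prod(np.where(y, r, 1.0 - r), axis=1)    # P(reports | event)
    none = (~y.any(axis=1)).astype(float)             # all sources silent
    lik = p * rep + (1.0 - p) * none
    return -np.sum(np.log(lik + 1e-300))

fit = minimize(negloglik, x0=np.zeros(4), method="Nelder-Mead")
print(fit.x[:2], expit(fit.x[2:]))   # compare with beta_true and r_true
```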

  3. On the performance of voltage stepping for the simulation of adaptive, nonlinear integrate-and-fire neuronal networks.

    PubMed

    Kaabi, Mohamed Ghaith; Tonnelier, Arnaud; Martinez, Dominique

    2011-05-01

    In traditional event-driven strategies, spike timings are analytically given or calculated with arbitrary precision (up to machine precision). Exact computation is possible only for simplified neuron models, mainly the leaky integrate-and-fire model. In a recent paper, Zheng, Tonnelier, and Martinez (2009) introduced an approximate event-driven strategy, named voltage stepping, that allows the generic simulation of nonlinear spiking neurons. Promising results were achieved in the simulation of single quadratic integrate-and-fire neurons. Here, we assess the performance of voltage stepping in network simulations by considering more complex neurons (quadratic integrate-and-fire neurons with adaptation) coupled with multiple synapses. To handle the discrete nature of synaptic interactions, we recast voltage stepping in a general framework, the discrete event system specification. The efficiency of the method is assessed through simulations and comparisons with a modified time-stepping scheme of the Runge-Kutta type. We demonstrated numerically that the original order of voltage stepping is preserved when simulating connected spiking neurons, independent of the network activity and connectivity.
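
    A first-order conceptual sketch of the voltage-stepping idea for a quadratic integrate-and-fire neuron is shown below: the state is advanced by fixed increments of voltage rather than of time. The published method uses more accurate local approximations, so this is only an illustration of the principle with invented parameters.

```python
# Conceptual sketch of voltage stepping for a quadratic integrate-and-fire
# neuron dV/dt = V**2 + I: instead of fixed time steps, take fixed voltage
# increments dV and compute the (approximate) time each increment takes.
# A first-order local approximation is used here purely for illustration.
I = 0.5          # constant suprathreshold input
V_reset, V_th = -1.0, 10.0
dV = 0.01        # voltage step

V, t = V_reset, 0.0
spike_times = []
while t < 50.0:
    f = V * V + I                 # dV/dt at the current state
    t += dV / f                   # time needed to traverse one voltage step
    V += dV
    if V >= V_th:
        spike_times.append(t)
        V = V_reset

print(len(spike_times), spike_times[:3])
```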

  4. Thermomechanical Stresses Analysis of a Single Event Burnout Process

    NASA Astrophysics Data System (ADS)

    Tais, Carlos E.; Romero, Eduardo; Demarco, Gustavo L.

    2009-06-01

    This work analyzes the thermal and mechanical effects arising in a power Diffusion Metal Oxide Semiconductor (DMOS) device during a Single Event Burnout (SEB) process. For studying these effects we propose a more detailed simulation structure than that previously used by other authors, solving the mathematical models by means of the Finite Element Method. We use a cylindrical heat generation region, with 5 W, 10 W, 50 W and 100 W, for emulating the thermal phenomena occurring during SEB processes, avoiding the complexity of the mathematical treatment of the ion-semiconductor interaction.

  5. Single Event Upset Rate Estimates for a 16-K CMOS (Complementary Metal Oxide Semiconductor) SRAM (Static Random Access Memory).

    DTIC Science & Technology

    1986-09-30

    SA3240 Single Event Upset Test, 1140-MeV Krypton, 9/18/84; CRUP Simulation. ... The cosmic ray interaction analyses described in the remainder of this report were calculated using the CRUP computer code modified for funneling. The CRUP code requires, as inputs, the size of a depletion region specified as a rectangular parallelepiped with dimensions a × b × c, the effective funnel ...

  6. Numerical Simulations of the 1991 Limón Tsunami, Costa Rica Caribbean Coast

    NASA Astrophysics Data System (ADS)

    Chacón-Barrantes, Silvia; Zamora, Natalia

    2017-08-01

    The second largest recorded tsunami along the Caribbean margin of Central America occurred 25 years ago. On April 22nd, 1991, an earthquake with magnitude Mw 7.6 ruptured along the thrust faults that form the North Panamá Deformed Belt (NPDB). The earthquake triggered a tsunami that affected the Caribbean coast of Costa Rica and Panamá within a few minutes, causing two casualties. These are the only deaths caused by a tsunami in Costa Rica. Coseismic uplift up to 1.6 m and runup values larger than 2 m were measured along some coastal sites. Here, we consider three solutions for the seismic source as initial conditions to model the tsunami, each considering a single rupture plane. We performed numerical modeling of the tsunami propagation and runup using the NEOWAVE numerical model (Yamazaki et al. in Int J Numer Methods Fluids 67:2081-2107, 2010, doi: 10.1002/fld.2485) on a system of nested grids from the entire Caribbean Sea to Limón city. The modeled surface deformation and tsunami runup agreed with the measured data along most of the coastal sites, with one preferred model that best fits the field data. The model results are useful to determine how the 1991 tsunami could have affected regions where tsunami records were not preserved and to simulate the effect of coastal surface deformation as a buffer to the tsunami. We also performed tsunami modeling to simulate the consequences if a similar event with a larger magnitude of Mw 7.9 were to occur offshore the southern Costa Rican Caribbean coast. Such an event would generate maximum wave heights of more than 5 m, showing that Limón and northwestern Panamá coastal areas are exposed to moderate-to-large tsunamis. These simulations considering historical events and maximum credible scenarios can be useful for hazard assessment and also as part of studies leading to tsunami evacuation maps and mitigation plans, even though that is not the scope of this paper.

  7. Boson Sampling with Single-Photon Fock States from a Bright Solid-State Source.

    PubMed

    Loredo, J C; Broome, M A; Hilaire, P; Gazzano, O; Sagnes, I; Lemaitre, A; Almeida, M P; Senellart, P; White, A G

    2017-03-31

    A boson-sampling device is a quantum machine expected to perform tasks intractable for a classical computer, yet requiring minimal nonclassical resources as compared to full-scale quantum computers. Photonic implementations to date employed sources based on inefficient processes that only simulate heralded single-photon statistics when strongly reducing emission probabilities. Boson sampling with only single-photon input has thus never been realized. Here, we report on a boson-sampling device operated with a bright solid-state source of single-photon Fock states with high photon-number purity: the emission from an efficient and deterministic quantum dot-micropillar system is demultiplexed into three partially indistinguishable single photons, with a single-photon purity 1-g^{(2)}(0) of 0.990±0.001, interfering in a linear optics network. Our demultiplexed source is between 1 and 2 orders of magnitude more efficient than current heralded multiphoton sources based on spontaneous parametric down-conversion, allowing us to complete the boson-sampling experiment faster than previous equivalent implementations.

  8. Coordinated single-phase control scheme for voltage unbalance reduction in low voltage network.

    PubMed

    Pullaguram, Deepak; Mishra, Sukumar; Senroy, Nilanjan

    2017-08-13

    Low voltage (LV) distribution systems are typically unbalanced in nature due to unbalanced loading and unsymmetrical line configuration. This situation is further aggravated by single-phase power injections. A coordinated control scheme is proposed for single-phase sources to reduce voltage unbalance. A consensus-based coordination is achieved using a multi-agent system, where each agent estimates the averaged global voltage and current magnitudes of individual phases in the LV network. These estimated values are used to modify the reference power of individual single-phase sources, to ensure system-wide balanced voltages and proper power sharing among sources connected to the same phase. Further, the high X/R ratio of the filter used in the inverter of the single-phase source enables control of reactive power to minimize voltage unbalance locally. The proposed scheme is validated by simulating an LV distribution network with multiple single-phase sources subjected to various perturbations. This article is part of the themed issue 'Energy management: flexibility, risk and optimization'. © 2017 The Author(s).
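
    The averaging that each agent performs can be illustrated with a standard distributed average-consensus iteration; the topology, gain, and voltage values below are invented for the example and are not the controller proposed in the paper.

```python
import numpy as np

# Minimal sketch of distributed average consensus: each single-phase source
# (agent) repeatedly averages its estimate with its neighbours', converging
# to the network-wide mean voltage magnitude. Topology and values are
# illustrative only.
v_local = np.array([228.0, 231.5, 225.0, 234.0, 229.5])        # local measurements
neighbours = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}  # line graph

x = v_local.copy()
eps = 0.3   # consensus gain; must be below 1/max_degree for convergence
for _ in range(200):
    x_new = x.copy()
    for i, nbrs in neighbours.items():
        x_new[i] += eps * sum(x[j] - x[i] for j in nbrs)
    x = x_new

print(x, v_local.mean())   # every agent approaches the global average
```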

  9. Implementation of warm-cloud processes in a source-oriented WRF/Chem model to study the effect of aerosol mixing state on fog formation in the Central Valley of California

    NASA Astrophysics Data System (ADS)

    Lee, Hsiang-He; Chen, Shu-Hua; Kleeman, Michael J.; Zhang, Hongliang; DeNero, Steven P.; Joe, David K.

    2016-07-01

    The source-oriented Weather Research and Forecasting chemistry model (SOWC) was modified to include warm cloud processes and was applied to investigate how aerosol mixing states influence fog formation and optical properties in the atmosphere. SOWC tracks a 6-D chemical variable (X, Z, Y, size bins, source types, species) through an explicit simulation of atmospheric chemistry and physics. A source-oriented cloud condensation nuclei module was implemented into the SOWC model to simulate warm clouds using the modified two-moment Purdue Lin microphysics scheme. The Goddard shortwave and long-wave radiation schemes were modified to interact with source-oriented aerosols and cloud droplets so that aerosol direct and indirect effects could be studied. The enhanced SOWC model was applied to study a fog event that occurred on 17 January 2011, in the Central Valley of California. Tule fog occurred because an atmospheric river effectively advected high moisture into the Central Valley and nighttime drainage flow brought cold air from mountains into the valley. The SOWC model produced reasonable liquid water path, spatial distribution and duration of fog events. The inclusion of aerosol-radiation interaction only slightly modified simulation results since cloud optical thickness dominated the radiation budget in fog events. The source-oriented mixture representation of particles reduced cloud droplet number relative to the internal mixture approach that artificially coats hydrophobic particles with hygroscopic components. The fraction of aerosols activating into cloud condensation nuclei (CCN) at a supersaturation of 0.5 % in the Central Valley decreased from 94 % in the internal mixture model to 80 % in the source-oriented model. This increased surface energy flux by 3-5 W m-2 and surface temperature by as much as 0.25 K in the daytime.

  10. Sequential combination of multi-source satellite observations for separation of surface deformation associated with serial seismic events

    NASA Astrophysics Data System (ADS)

    Chen, Qiang; Xu, Qian; Zhang, Yijun; Yang, Yinghui; Yong, Qi; Liu, Guoxiang; Liu, Xianwen

    2018-03-01

    A single satellite geodetic technique has weaknesses for mapping the sequence of ground deformation associated with serial seismic events; for example, InSAR with a long revisit period readily yields mixed deformation signals from multiple events. This challenges the ability of a single technique to accurately recover the individual surface deformation and earthquake model of each event. The rapidly increasing availability of various satellite observations provides a good solution for overcoming this issue. In this study, we explore a sequential combination of multiple overlapping datasets from ALOS/PALSAR, ENVISAT/ASAR and GPS observations to separate surface deformation associated with the 2011 Mw 9.0 Tohoku-Oki major quake and two strong aftershocks, the Mw 6.6 Iwaki and Mw 5.8 Ibaraki events. We first estimate the fault slip model of the major shock with ASAR interferometry and GPS displacements as constraints. Because the PALSAR interferogram spans the period of all the events, we then remove the surface deformation of the major shock through a forward-calculated prediction, thus obtaining the PALSAR InSAR deformation associated with the two strong aftershocks. The inversion for source parameters of the Iwaki aftershock is conducted using the refined PALSAR deformation, considering that the higher-magnitude Iwaki quake contributes the dominant deformation relative to the Ibaraki event. After removal of the deformation component of the Iwaki event, we determine the fault slip distribution of the Ibaraki shock using the remaining PALSAR InSAR deformation. Finally, the complete source models for the serial seismic events are clearly identified from the sequential combination of multi-source satellite observations, which suggest that the major quake is a predominant mega-thrust rupture, whereas the two aftershocks are normal faulting motion. The estimated seismic moment magnitudes for the Tohoku-Oki, Iwaki and Ibaraki events are Mw 9.0, Mw 6.85 and Mw 6.11, respectively.

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Henderson, C. B.; Gould, A.; Gaudi, B. S.

    The mass of the lenses giving rise to Galactic microlensing events can be constrained by measuring the relative lens-source proper motion and lens flux. The flux of the lens can be separated from that of the source, companions to the source, and unrelated nearby stars with high-resolution images taken when the lens and source are spatially resolved. For typical ground-based adaptive optics (AO) or space-based observations, this requires either inordinately long time baselines or high relative proper motions. We provide a list of microlensing events toward the Galactic bulge with high relative lens-source proper motion that are therefore good candidates for constraining the lens mass with future high-resolution imaging. We investigate all events from 2004 to 2013 that display detectable finite-source effects, a feature that allows us to measure the proper motion. In total, we present 20 events with μ ≳ 8 mas yr⁻¹. Of these, 14 were culled from previous analyses while 6 are new, including OGLE-2004-BLG-368, MOA-2005-BLG-36, OGLE-2012-BLG-0211, OGLE-2012-BLG-0456, MOA-2012-BLG-532, and MOA-2013-BLG-029. In ≲12 yr from the time of each event, the lens and source will be sufficiently separated for ground-based telescopes with AO systems or space telescopes to resolve each component and further characterize the lens system. Furthermore, for the most recent events, comparison of the lens flux estimates from images taken immediately after the event to those estimated from images taken when the lens and source are resolved can be used to empirically check the robustness of the single-epoch method currently being used to estimate lens masses for many events.
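
    A quick check of the ≲12 yr figure quoted above: assuming a required lens-source separation of roughly 100 mas (a rough figure for AO or space-based imaging, not a value taken from the paper), the wait time is simply the required separation divided by the relative proper motion.

```python
# Time for lens and source to separate enough to be resolved, assuming a
# required separation of ~100 mas (an assumed AO/space-telescope figure) and
# the relative proper motions quoted above.
required_sep_mas = 100.0
for mu in (8.0, 10.0, 15.0):            # relative proper motion, mas/yr
    print(f"mu = {mu:4.1f} mas/yr -> {required_sep_mas / mu:4.1f} yr")
# mu >= 8 mas/yr gives <= 12.5 yr, consistent with the ~12 yr quoted above
```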

  12. Source mechanism of long-period events at Kusatsu-Shirane Volcano, Japan, inferred from waveform inversion of the effective excitation functions

    USGS Publications Warehouse

    Nakano, M.; Kumagai, H.; Chouet, B.A.

    2003-01-01

    We investigate the source mechanism of long-period (LP) events observed at Kusatsu-Shirane Volcano, Japan, based on waveform inversions of their effective excitation functions. The effective excitation function, which represents the apparent excitation observed at individual receivers, is estimated by applying an autoregressive filter to the LP waveform. Assuming a point source, we apply this method to seven LP events whose waveforms are characterized by simple decaying and nearly monochromatic oscillations with frequency in the range 1-3 Hz. The results of the waveform inversions show dominant volumetric change components accompanied by single force components, common to all the events analyzed, suggesting a repeated activation of a sub-horizontal crack located 300 m beneath the summit crater lakes. Based on these results, we propose a model of the source process of LP seismicity, in which a gradual buildup of steam pressure in a hydrothermal crack in response to magmatic heat causes repeated discharges of steam from the crack. The rapid discharge of fluid causes the collapse of the fluid-filled crack and excites acoustic oscillations of the crack, which produce the characteristic waveforms observed in the LP events. The presence of a single force synchronous with the collapse of the crack is interpreted as the release of gravitational energy that occurs as the slug of steam ejected from the crack ascends toward the surface and is replaced by cooler water flowing downward in a fluid-filled conduit linking the crack and the base of the crater lake. © 2003 Elsevier Science B.V. All rights reserved.

  13. Large scale meteorological patterns and moisture sources during precipitation extremes over South Asia

    NASA Astrophysics Data System (ADS)

    Mehmood, S.; Ashfaq, M.; Evans, K. J.; Black, R. X.; Hsu, H. H.

    2017-12-01

    Extreme precipitation during the summer season has shown an increasing trend across South Asia in recent decades, causing an exponential increase in weather-related losses. Here we combine a cluster analysis technique (agglomerative hierarchical clustering) with a Lagrangian-based moisture analysis technique to investigate potential commonalities in the characteristics of the large-scale meteorological patterns (LSMP) and moisture anomalies associated with observed extreme precipitation events, and their representation in the Department of Energy model ACME. Using precipitation observations from the Indian Meteorological Department (IMD) and the Asian Precipitation Highly Resolved Observational Data Integration Towards Evaluation (APHRODITE), and atmospheric variables from the ERA-Interim reanalysis, we first identify LSMP in both the upper and lower troposphere that are responsible for widespread extreme precipitation events during the 1980-2015 period. For each of the selected extreme events, we perform moisture source analyses to identify the major evaporative sources that sustain the anomalous moisture supply during the course of the event, with a particular focus on local terrestrial moisture recycling. Further, we perform similar analyses on two five-member ensembles of the ACME model (1-degree and ¼-degree) to investigate the ability of the ACME model to simulate precipitation extremes associated with each of the LSMP patterns and the associated anomalous moisture sourcing from each terrestrial and oceanic evaporative region. Comparison of the low- and high-resolution model configurations provides insight into the influence of horizontal grid spacing on the simulation of extreme precipitation and the governing mechanisms.

  14. Impact of event positioning algorithm on performance of a whole-body PET scanner using one-to-one coupled detectors

    NASA Astrophysics Data System (ADS)

    Surti, S.; Karp, J. S.

    2018-03-01

    The advent of silicon photomultipliers (SiPMs) has introduced the possibility of increased detector performance in commercial whole-body PET scanners. The primary advantage of these photodetectors is the ability to couple a single SiPM channel directly to a single pixel of PET scintillator that is typically 4 mm wide (one-to-one coupled detector design). We performed simulation studies to evaluate the impact of three different event positioning algorithms in such detectors: (i) a weighted energy centroid positioning (Anger logic), (ii) identifying the crystal with maximum energy deposition (1st max crystal), and (iii) identifying the crystal with the second highest energy deposition (2nd max crystal). Detector simulations performed with LSO crystals indicate reduced positioning errors when using the 2nd max crystal positioning algorithm. These studies are performed over a range of crystal cross-sections varying from 1 × 1 mm2 to 4 × 4 mm2 as well as crystal thicknesses of 1 cm to 3 cm. System simulations were performed for a whole-body PET scanner (85 cm ring diameter) with a long axial FOV (70 cm long) and show an improvement in reconstructed spatial resolution for a point source when using the 2nd max crystal positioning algorithm. Finally, we observe a 30-40% gain in contrast recovery coefficient values for 1 and 0.5 cm diameter spheres when using the 2nd max crystal positioning algorithm compared to the 1st max crystal positioning algorithm. These results show that there is an advantage to implementing the 2nd max crystal positioning algorithm in a new generation of PET scanners using one-to-one coupled detector design with lutetium-based crystals, including LSO, LYSO, or scintillators that have a similar density and effective atomic number to LSO.
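
    A minimal sketch of the three positioning rules compared in the study is given below; the per-crystal energies, crystal coordinates, and helper function are invented for illustration and are not the authors' code.

```python
# Illustrative comparison (not the authors' code) of the three positioning rules
# applied to the per-crystal energy depositions of a single coincidence event.
import numpy as np

def position_event(energies, crystal_xy):
    """energies: (N,) deposited energy per crystal; crystal_xy: (N, 2) crystal centers in mm."""
    order = np.argsort(energies)[::-1]                       # crystals sorted by deposited energy
    anger = energies @ crystal_xy / energies.sum()           # (i) weighted energy centroid
    first_max = crystal_xy[order[0]]                         # (ii) crystal with maximum energy
    second_max = crystal_xy[order[1]] if len(order) > 1 else first_max  # (iii) 2nd max crystal
    return anger, first_max, second_max

# toy event: a 511 keV photon scattering between neighbouring 4 mm crystals
energies = np.array([0.17, 0.34, 0.00])                      # MeV deposited in each crystal
crystal_xy = np.array([[0.0, 0.0], [4.0, 0.0], [8.0, 0.0]])  # crystal centers, mm
print(position_event(energies, crystal_xy))
```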

  15. DeMO: An Ontology for Discrete-event Modeling and Simulation.

    PubMed

    Silver, Gregory A; Miller, John A; Hybinette, Maria; Baramidze, Gregory; York, William S

    2011-09-01

    Several fields have created ontologies for their subdomains. For example, the biological sciences have developed extensive ontologies such as the Gene Ontology, which is considered a great success. Ontologies could provide similar advantages to the Modeling and Simulation community. They provide a way to establish common vocabularies and capture knowledge about a particular domain with community-wide agreement. Ontologies can support significantly improved (semantic) search and browsing, integration of heterogeneous information sources, and improved knowledge discovery capabilities. This paper discusses the design and development of an ontology for Modeling and Simulation called the Discrete-event Modeling Ontology (DeMO), and it presents prototype applications that demonstrate various uses and benefits that such an ontology may provide to the Modeling and Simulation community.

  16. DeMO: An Ontology for Discrete-event Modeling and Simulation

    PubMed Central

    Silver, Gregory A; Miller, John A; Hybinette, Maria; Baramidze, Gregory; York, William S

    2011-01-01

    Several fields have created ontologies for their subdomains. For example, the biological sciences have developed extensive ontologies such as the Gene Ontology, which is considered a great success. Ontologies could provide similar advantages to the Modeling and Simulation community. They provide a way to establish common vocabularies and capture knowledge about a particular domain with community-wide agreement. Ontologies can support significantly improved (semantic) search and browsing, integration of heterogeneous information sources, and improved knowledge discovery capabilities. This paper discusses the design and development of an ontology for Modeling and Simulation called the Discrete-event Modeling Ontology (DeMO), and it presents prototype applications that demonstrate various uses and benefits that such an ontology may provide to the Modeling and Simulation community. PMID:22919114

  17. Single event mass spectrometry

    DOEpatents

    Conzemius, Robert J.

    1990-01-16

    A means and method for single event time of flight mass spectrometry for analysis of specimen materials. The method of the invention includes pulsing an ion source to impose at least one pulsed ion onto the specimen to produce a corresponding emission of at least one electrically charged particle. The emitted particle is then dissociated into a charged ion component and an uncharged neutral component. The ion and neutral components are then detected. The times of flight of the components are recorded and can be used to analyze the predecessor of the components, and therefore the specimen material. When more than one ion particle is emitted from the specimen per single ion impact, the single event time of flight mass spectrometer described here furnis… This invention was made with Government support under Contract No. W-7405-ENG82 awarded by the Department of Energy. The Government has certain rights in the invention.
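
    For context, the standard time-of-flight relation (textbook physics, not quoted from the patent) connects a measured flight time to a mass-to-charge ratio; the drift length and accelerating potential below are assumed values.

```python
# Textbook TOF relation (assumed, not from the patent): after acceleration through
# potential V, an ion of charge q covers drift length L in time t, so m/q = 2 V (t/L)^2.
L = 1.0          # drift length, m (assumed)
V = 3.0e3        # accelerating potential, V (assumed)
e = 1.602e-19    # elementary charge, C
amu = 1.661e-27  # atomic mass unit, kg

def mass_in_amu(t_seconds, charge_state=1):
    m_over_q = 2.0 * V * (t_seconds / L) ** 2        # kg per coulomb
    return m_over_q * charge_state * e / amu

print(round(mass_in_amu(5.58e-6), 1))                # ~18 amu: a singly charged water-like ion
```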

  18. Single Quantum Dot with Microlens and 3D-Printed Micro-objective as Integrated Bright Single-Photon Source

    PubMed Central

    2017-01-01

    Integrated single-photon sources with high photon-extraction efficiency are key building blocks for applications in the field of quantum communications. We report on a bright single-photon source realized by on-chip integration of a deterministic quantum dot microlens with a 3D-printed multilens micro-objective. The device concept benefits from a sophisticated combination of in situ 3D electron-beam lithography to realize the quantum dot microlens and 3D femtosecond direct laser writing for creation of the micro-objective. In this way, we obtain a high-quality quantum device with broadband photon-extraction efficiency of (40 ± 4)% and high suppression of multiphoton emission events with g(2)(τ = 0) < 0.02. Our results highlight the opportunities that arise from tailoring the optical properties of quantum emitters using integrated optics with high potential for the further development of plug-and-play fiber-coupled single-photon sources. PMID:28670600

  19. Sequential evaporation of water molecules from protonated water clusters: measurement of the velocity distributions of the evaporated molecules and statistical analysis.

    PubMed

    Berthias, F; Feketeová, L; Abdoul-Carime, H; Calvo, F; Farizon, B; Farizon, M; Märk, T D

    2018-06-22

    Velocity distributions of neutral water molecules evaporated after collision-induced dissociation of protonated water clusters H+(H2O)n≤10 were measured using the combined correlated ion and neutral fragment time-of-flight (COINTOF) and velocity map imaging (VMI) techniques. As observed previously, all measured velocity distributions exhibit two contributions, with a low velocity part identified by statistical molecular dynamics (SMD) simulations as events obeying the Maxwell-Boltzmann statistics and a high velocity contribution corresponding to non-ergodic events in which energy redistribution is incomplete. In contrast to earlier studies, where the evaporation of a single molecule was probed, the present study is concerned with events involving the evaporation of up to five water molecules. In particular, we discuss here in detail the cases of two and three evaporated molecules. Evaporation of several water molecules after CID can be interpreted in general as a sequential evaporation process. In addition to the SMD calculations, a Monte Carlo (MC) based simulation was developed allowing the reconstruction of the velocity distribution produced by the evaporation of m molecules from H+(H2O)n≤10 cluster ions, using the measured velocity distributions for singly evaporated molecules as the input. The observed broadening of the low-velocity part of the distributions for the evaporation of two and three molecules as compared to the width for the evaporation of a single molecule results from the cumulative recoil velocity of the successive ion residues as well as the intrinsically broader distributions for decreasingly smaller parent clusters. Further MC simulations were carried out assuming that a certain proportion of non-ergodic events is responsible for the first evaporation in such a sequential evaporation series, thereby allowing us to model the entire velocity distribution.
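
    A minimal sketch of such a sequential-evaporation Monte Carlo is given below, assuming a Maxwell-Boltzmann stand-in for the measured single-molecule velocity distribution and simple momentum conservation for the residue recoil; the cluster size, masses, and units are illustrative, not the paper's inputs.

```python
# Monte Carlo sketch of sequential evaporation from a protonated water cluster.
# Single-molecule velocities are drawn from a Maxwell-Boltzmann stand-in (assumed),
# and each loss adds a recoil to the residual cluster ion by momentum conservation.
import numpy as np

rng = np.random.default_rng(1)
mol_mass = 18.0                     # molecule mass, amu
n_events = 100_000                  # Monte Carlo events

def mth_molecule_speed(n_parent=8, m=3, v_thermal=1.0):
    """Lab-frame speed distribution of the m-th evaporated molecule (arbitrary units)."""
    recoil = np.zeros((n_events, 3))
    cluster_mass = n_parent * mol_mass
    speed = np.zeros(n_events)
    for _ in range(m):
        v = rng.normal(scale=v_thermal, size=(n_events, 3))  # velocity in the residue frame
        cluster_mass -= mol_mass
        speed = np.linalg.norm(v + recoil, axis=1)           # lab-frame speed of this molecule
        recoil -= v * mol_mass / cluster_mass                # residue recoils after each loss
    return speed

print(mth_molecule_speed(m=1).std(), mth_molecule_speed(m=3).std())  # distribution broadens with m
```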

  20. Impact of dust deposition on the albedo of Vatnajökull ice cap, Iceland

    NASA Astrophysics Data System (ADS)

    Wittmann, Monika; Dorothea Groot Zwaaftink, Christine; Steffensen Schmidt, Louise; Guðmundsson, Sverrir; Pálsson, Finnur; Arnalds, Olafur; Björnsson, Helgi; Thorsteinsson, Throstur; Stohl, Andreas

    2017-03-01

    Deposition of small amounts of airborne dust on glaciers causes positive radiative forcing and enhanced melting due to the reduction of surface albedo. To study the effects of dust deposition on the mass balance of Brúarjökull, an outlet glacier of the largest ice cap in Iceland, Vatnajökull, a study of dust deposition events in the year 2012 was carried out. The dust-mobilisation module FLEXDUST was used to calculate spatio-temporally resolved dust emissions from Iceland and the dispersion model FLEXPART was used to simulate atmospheric dust dispersion and deposition. We used albedo measurements at two automatic weather stations on Brúarjökull to evaluate the dust impacts. Both stations are situated in the accumulation area of the glacier, but the lower station is close to the equilibrium line. For this site ( ˜ 1210 m a.s.l.), the dispersion model produced 10 major dust deposition events and a total annual deposition of 20.5 g m-2. At the station located higher on the glacier ( ˜ 1525 m a.s.l.), the model produced nine dust events, with one single event causing ˜ 5 g m-2 of dust deposition and a total deposition of ˜ 10 g m-2 yr-1. The main dust source was found to be the Dyngjusandur floodplain north of Vatnajökull; northerly winds prevailed 80 % of the time at the lower station when dust events occurred. In all of the simulated dust events, a corresponding albedo drop was observed at the weather stations. The influence of the dust on the albedo was estimated using the regional climate model HIRHAM5 to simulate the albedo of a clean glacier surface without dust. By comparing the measured albedo to the modelled albedo, we determine the influence of dust events on the snow albedo and the surface energy balance. We estimate that the dust deposition caused an additional 1.1 m w.e. (water equivalent) of snowmelt (or 42 % of the 2.8 m w.e. total melt) compared to a hypothetical clean glacier surface at the lower station, and 0.6 m w.e. more melt (or 38 % of the 1.6 m w.e. melt in total) at the station located further upglacier. Our findings show that dust has a strong influence on the mass balance of glaciers in Iceland.

  1. A SEU-Hard Flip-Flop for Antifuse FPGAs

    NASA Technical Reports Server (NTRS)

    Katz, R.; Wang, J. J.; McCollum, J.; Cronquist, B.; Chan, R.; Yu, D.; Kleyner, I.; Day, John H. (Technical Monitor)

    2001-01-01

    A single event upset (SEU)-hardened flip-flop has been designed and developed for antifuse Field Programmable Gate Array (FPGA) application. Design and application issues, testability, test methods, simulation, and results are discussed.

  2. Resource Contention Management in Parallel Systems

    DTIC Science & Technology

    1989-04-01

    technical competence include communications, command and control, battle management, information processing, surveillance sensors, intelligence data ...two-simulation approach since they require only a single simulation run. More importantly, since they involve only observed data, they may also be...we use the original, unobservable RAC of Section 2 and handle unobservable transitions by generating artificial events, when required, using a random

  3. Exploring variations of earthquake moment on patches with heterogeneous strength

    NASA Astrophysics Data System (ADS)

    Lin, Y. Y.; Lapusta, N.

    2016-12-01

    Finite-fault inversions show that earthquake slip is typically non-uniform over the ruptured region, likely due to heterogeneity of the earthquake source. Observations also show that events from the same fault area can have the same source duration but different magnitudes ranging from 0.0 to 2.0 (Lin et al., GJI, 2016). Strong heterogeneity in strength over a patch could provide a potential explanation of such behavior, with the event duration controlled by the size of the patch and the event magnitude determined by how much of the patch area has been ruptured. To explore this possibility, we numerically simulate earthquake sequences on a rate-and-state fault, with a seismogenic patch governed by steady-state velocity-weakening friction surrounded by a steady-state velocity-strengthening region. The seismogenic patch contains strong variations in strength due to variable normal stress. Our long-term simulations of slip in this model indeed generate sequences of earthquakes of various magnitudes. In some seismic events, dynamic rupture cannot overcome areas with higher normal strength, and smaller events result. When the higher-strength areas are loaded by previous slip and rupture, larger events result, as expected. Our current work is directed towards exploring a range of such models, determining the variability in the seismic moment that they can produce, and determining the observable properties of the resulting events.

  4. Observations and Measurements of Dust Transport from the Patagonia Desert into the South Atlantic Ocean in 2004 and 2005

    NASA Astrophysics Data System (ADS)

    Gasso, S.; Gaiero, D. M.; Villoslada, B.; Liske, E.

    2005-12-01

    The largest continental landmass south of the 40-degree parallel and potentially one of the largest sources of dust into the Southern Ocean (SO) is the Patagonia desert. Most of the estimates of dust outflow and deposition from this region into the South Atlantic Ocean are based on model simulations. However, there are very few measurements available that can corroborate these estimates. Satellite assessments of dust activity offer conflicting views. For example, monthly time series of satellite-derived (e.g. AVHRR and MODIS) aerosol optical depth (AOD) indicate that dust activity is minimal. However, a study with the TOMS Aerosol Index (Prospero et al., 2002) showed that the frequency of dust events is in the range of 7-14 days/month during the years 1978 through 1993. In addition, surface visibility observations along the Patagonian coast confirm that ocean-going dust events do occur during the summer and spring months. These discrepancies indicate fundamental uncertainties regarding the frequency and extent of dust activity in Patagonia. Given that the SO is the largest high-nutrient, low-chlorophyll area in the world and that the flux of nutrient-rich dust has the potential to modify biological activity with possible climatic consequences, it is of interest to have a better understanding of how frequent and intense dust events in the Patagonia region are. We surveyed the reports of dust activity from surface weather stations in the Patagonia region during the period June 2004 to April 2005. These observations were compared with simultaneous MODIS true color pictures and the corresponding aerosol retrievals. In addition, measurements of vertical and horizontal dust flux were collected by dust samplers at four sites along the coast. The horizontal flux measurements were compared with the same estimates derived from MODIS. According to the true color pictures, and confirmed by the surface visibility observations, we recorded at least 16 ocean-going dust events. The scale of the events varied from small (single dust plumes along the coast) to large (dust fronts extending ~600 km). Most of the large events occurred during the late summer. Due to the presence of sun glint, cloud obstruction, or coastal sediments, the MODIS automatic aerosol algorithm did not derive AODs in many instances and, as a result, many events were not recorded in the MODIS monthly database. Dust sources are numerous and dust plumes flow out at any point along the coastline (> 1000 km), including some very active sources as far south as Tierra del Fuego Island (54°S). The main sources identified are coastal saltbeds, inland deflation hollows, and receding shores of large lakes. Although some of the major emitting points have been included as sources in dust models, there are some notable exceptions, for example most of the coastal sources. We note, in addition, that the scale and diversity of the different sources pose significant challenges with respect to parameterization in global models of dust dispersion.

  5. Using Adjoint Methods to Improve 3-D Velocity Models of Southern California

    NASA Astrophysics Data System (ADS)

    Liu, Q.; Tape, C.; Maggi, A.; Tromp, J.

    2006-12-01

    We use adjoint methods popular in climate and ocean dynamics to calculate Fréchet derivatives for tomographic inversions in southern California. The Fréchet derivative of an objective function χ(m), where m denotes the Earth model, may be written in the generic form δχ = ∫ K_m(x) δln m(x) d³x, where δln m = δm/m denotes the relative model perturbation. For illustrative purposes, we construct the 3-D finite-frequency banana-doughnut kernel K_m, corresponding to the misfit of a single traveltime measurement, by simultaneously computing the 'adjoint' wave field s† forward in time and reconstructing the regular wave field s backward in time. The adjoint wave field is produced by using the time-reversed velocity at the receiver as a fictitious source, while the regular wave field is reconstructed on the fly by propagating the last frame of the wave field saved by a previous forward simulation backward in time. The approach is based upon the spectral-element method, and only two simulations are needed to produce density, shear-wave, and compressional-wave sensitivity kernels. This method is applied to the SCEC southern California velocity model. Various density, shear-wave, and compressional-wave sensitivity kernels are presented for different phases in the seismograms. We also generate 'event' kernels for Pnl, S and surface waves, which are the Fréchet kernels of misfit functions that measure the P, S or surface wave traveltime residuals at all the receivers simultaneously for one particular event. Effectively, an event kernel is a sum of weighted Fréchet kernels, with weights determined by the associated traveltime anomalies. By the nature of the 3-D simulation, every event kernel is also computed based upon just two simulations, i.e., its construction costs the same amount of computation time as an individual banana-doughnut kernel. One can think of the sum of the event kernels for all available earthquakes, called the 'misfit' kernel, as a graphical representation of the gradient of the misfit function. With the capability of computing both the value of the misfit function and its gradient, which assimilates the traveltime anomalies, we are ready to use a non-linear conjugate gradient algorithm to iteratively improve velocity models of southern California.
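
    The statement that an event kernel is a weighted sum of Fréchet kernels can be illustrated with a small numerical sketch; the kernel arrays and traveltime anomalies below are random stand-ins, not output of the spectral-element simulations.

```python
# Numerical illustration (stand-in data): an event kernel as the sum of Fréchet
# kernels weighted by the traveltime anomalies measured at each receiver.
import numpy as np

rng = np.random.default_rng(2)
n_receivers, nx, nz = 12, 60, 40
frechet_kernels = rng.normal(size=(n_receivers, nx, nz))   # one K_m per traveltime measurement
dt_anomalies = rng.normal(scale=0.5, size=n_receivers)     # observed-minus-synthetic traveltimes, s

event_kernel = np.tensordot(dt_anomalies, frechet_kernels, axes=1)  # weighted sum over receivers
print(event_kernel.shape)   # summing event kernels over all earthquakes gives the misfit kernel
```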

  6. Optimized breeding strategies for multiple trait integration: I. Minimizing linkage drag in single event introgression.

    PubMed

    Peng, Ting; Sun, Xiaochun; Mumm, Rita H

    2014-01-01

    From a breeding standpoint, multiple trait integration (MTI) is a four-step process of converting an elite variety/hybrid for value-added traits (e.g. transgenic events) using backcross breeding, ultimately regaining the performance attributes of the target hybrid along with reliable expression of the value-added traits. In the light of the overarching goal of recovering equivalent performance in the finished conversion, this study focuses on the first step of MTI, single event introgression, exploring the feasibility of marker-aided backcross conversion of a target maize hybrid for 15 transgenic events, incorporating eight events into the female hybrid parent and seven into the male parent. Single event introgression is conducted in parallel streams to convert the recurrent parent (RP) for individual events, with the primary objective of minimizing residual non-recurrent parent (NRP) germplasm, especially in the chromosomal proximity to the event (i.e. linkage drag). In keeping with a defined lower limit of 96.66 % overall RP germplasm recovery (i.e. ≤120 cM NRP germplasm given a genome size of 1,788 cM), a breeding goal for each of the 15 single event conversions was developed: <8 cM of residual NRP germplasm across the genome with ~1 cM in the 20 cM region flanking the event. Using computer simulation, we aimed to identify optimal breeding strategies for single event introgression to achieve this breeding goal, measuring efficiency in terms of number of backcross generations required, marker data points needed, and total population size across generations. Various selection schemes classified as three-stage, modified two-stage, and combined selection conducted from BC1 through BC3, BC4, or BC5 were compared. The breeding goal was achieved with a selection scheme involving five generations of marker-aided backcrossing, with BC1 through BC3 selected for the event of interest and minimal linkage drag at population size of 600, and BC4 and BC5 selected for the event of interest and recovery of the RP germplasm across the genome at population size of 400, with selection intensity of 0.01 for all generations. In addition, strategies for choice of donor parent to facilitate conversion efficiency and quality were evaluated. Two essential criteria for choosing an optimal donor parent for a given RP were established: introgression history showing reduction of linkage drag to ~1 cM in the 20 cM region flanking the event and genetic similarity between the RP and potential donor parents. Computer simulation demonstrated that single event conversions with <8 cM residual NRP germplasm can be accomplished by BC5 with no genetic similarity, by BC4 with 30 % genetic similarity, and by BC3 with 86 % genetic similarity using previously converted RPs as event donors. This study indicates that MTI to produce a 'quality' 15-event-stacked hybrid conversion is achievable. Furthermore, it lays the groundwork for a comprehensive approach to MTI by outlining a pathway to produce appropriate starting materials with which to proceed with event pyramiding and trait fixation before version testing.

  7. Using sea surface temperatures to improve performance of single dynamical downscaling model in flood simulation under climate change

    NASA Astrophysics Data System (ADS)

    Chao, Y.; Cheng, C. T.; Hsiao, Y. H.; Hsu, C. T.; Yeh, K. C.; Liu, P. L.

    2017-12-01

    On average, 5.3 typhoons hit Taiwan per year in the last decade. Typhoon Morakot in 2009, the most severe typhoon, caused huge damage in Taiwan, including 677 casualties and roughly NT 110 billion (3.3 billion USD) in economic loss. Some studies have documented that typhoon frequency will decrease but typhoon intensity will increase in the western North Pacific region. It is usually preferable to use a high-resolution dynamical model to obtain better projections of extreme events, because coarse-resolution models cannot simulate intense extreme events. Under that consideration, dynamically downscaled climate data were chosen to describe typhoons satisfactorily; this research used the simulation data from the AGCM of the Meteorological Research Institute (MRI-AGCM). Considering that dynamical downscaling methods consume massive computing power, and that the number of typhoons is very limited in a single model simulation, using dynamically downscaled data could cause uncertainty in disaster risk assessment. In order to alleviate this problem, this research used four sea surface temperatures (SSTs) to increase the number of climate change scenarios under RCP 8.5. In this way, the MRI-AGCMs project 191 extreme typhoons in Taiwan (when the typhoon center touches the 300 km sea area of Taiwan) in the late 21st century. SOBEK, a two-dimensional flood simulation model, was used to assess the flood risk under the four SST climate change scenarios in Tainan, Taiwan. The results show that the uncertainty of future flood risk assessment is significantly decreased for Tainan, Taiwan in the late 21st century. Four SSTs can efficiently alleviate the problem of limited typhoon numbers in a single model simulation.

  8. Simulations of cloud-radiation interaction using large-scale forcing derived from the CINDY/DYNAMO northern sounding array

    DOE PAGES

    Wang, Shuguang; Sobel, Adam H.; Fridlind, Ann; ...

    2015-09-25

    The recently completed CINDY/DYNAMO field campaign observed two Madden-Julian oscillation (MJO) events in the equatorial Indian Ocean from October to December 2011. Prior work has indicated that the moist static energy anomalies in these events grew and were sustained to a significant extent by radiative feedbacks. We present here a study of radiative fluxes and clouds in a set of cloud-resolving simulations of these MJO events. The simulations are driven by the large scale forcing dataset derived from the DYNAMO northern sounding array observations, and carried out in a doubly-periodic domain using the Weather Research and Forecasting (WRF) model. Simulated cloud properties and radiative fluxes are compared to those derived from the S-PolKa radar and satellite observations. Furthermore, to accommodate the uncertainty in simulated cloud microphysics, a number of single moment (1M) and double moment (2M) microphysical schemes in the WRF model are tested.

  9. Laboratory investigation of flux reduction from dense non-aqueous phase liquid (DNAPL) partial source zone remediation by enhanced dissolution

    NASA Astrophysics Data System (ADS)

    Kaye, Andrew J.; Cho, Jaehyun; Basu, Nandita B.; Chen, Xiaosong; Annable, Michael D.; Jawitz, James W.

    2008-11-01

    This study investigated the benefits of partial removal of dense nonaqueous phase liquid (DNAPL) source zones using enhanced dissolution in eight laboratory scale experiments. The benefits were assessed by characterizing the relationship between reductions in DNAPL mass and the corresponding reduction in contaminant mass flux. Four flushing agents were evaluated in eight controlled laboratory experiments to examine the effects of displacement fluid property contrasts and associated override and underride on contaminant flux reduction (Rj) vs. mass reduction (Rm) relationships (Rj(Rm)): 1) 50% ethanol/50% water (less dense than water), 2) 40% ethyl-lactate/60% water (more dense than water), 3) 18% ethanol/26% ethyl-lactate/56% water (neutrally buoyant), and 4) 2% Tween-80 surfactant (also neutrally buoyant). For each DNAPL architecture evaluated, replicate experiments were conducted where source zone dissolution was conducted with a single flushing event to remove most of the DNAPL from the system, and with multiple shorter-duration floods to determine the path of the Rj(Rm) relationship. All of the single-flushing experiments exhibited similar Rj(Rm) relationships indicating that override and underride effects associated with cosolvents did not significantly affect the remediation performance of the agents. The Rj(Rm) relationship of the multiple injection experiments for the cosolvents with a density contrast with water tended to be less desirable in the sense that there was less Rj for a given Rm. UTCHEM simulations supported the observations from the laboratory experiments and demonstrated the capability of this model to predict Rj(Rm) relationships for non-uniformly distributed NAPL sources.

  10. Analysis of dangerous area of single berth oil tanker operations based on CFD

    NASA Astrophysics Data System (ADS)

    Shi, Lina; Zhu, Faxin; Lu, Jinshu; Wu, Wenfeng; Zhang, Min; Zheng, Hailin

    2018-04-01

    Taking a single oil tanker berthed during liquid cargo operations as the research object, we analyzed the theory of VOCs diffusion from a single-berth oil tanker, built a mesh model of VOCs diffusion with the Gambit preprocessor, set up the simulation boundary conditions, and used the Fluent software to simulate how the VOCs concentration at five detection points changes with time under the influence of specific factors. We then delineated the dangerous area of single-berth oil tanker operations from the simulated VOCs diffusion, so as to help ensure the safe operation of the oil tanker.

  11. 3D Modeling of Strong Ground Motion in the Pacific Northwest From Large Earthquakes in the Cascadia Subduction Zone

    NASA Astrophysics Data System (ADS)

    Olsen, K. B.; Geisselmeyer, A.; Stephenson, W. J.; Mai, P. M.

    2007-12-01

    The Cascadia subduction zone in the Pacific Northwest, USA, generates Great (megathrust) earthquakes with a recurrence period of about 500 years, most recently the M~9 event on January 26, 1700. Since no earthquake of such magnitude has occurred in the Pacific Northwest since the deployment of strong ground motion instruments, a large uncertainty is associated with the ground motions expected from such an event. To decrease this uncertainty, we have carried out the first 3D simulations of megathrust earthquakes (Mw8.5 and Mw9.0) rupturing along the Cascadia subduction zone. The simulations were carried out in a recently developed 3D velocity model of the region of dimensions 1050 km by 550 km, discretized into 2 billion cubes with 250 m sides and a minimum S-wave velocity of 625 m/s. The model includes the subduction slab, accretionary sediments, local sedimentary basins, and the ocean layer. About 6 minutes of wave propagation for each scenario consumed about 24 wall-clock hours using a parallel fourth-order finite-difference method with 1600 processors on the San Diego Supercomputer Center Datastar supercomputer. The source descriptions for the Mw9.0 scenarios were designed by mapping the inversion results for the December 26, 2004 M9+ Sumatra-Andaman Islands earthquake (Ji, 2006) onto a 950 km by 150 km large rupture for the Pacific Northwest model. Simulations were carried out for hypocenters located toward the northern and southern ends of the subduction zone. In addition, we simulated two M8.5 events with a source area of 275 km by 150 km located in the northern and central parts of the model area. The sources for the M8.5 events were generated using the pseudo-dynamic model by Guatteri et al. (2004). All sources used spatially-variable slip, rise time, and rupture velocity. Three major metropolitan areas are located in the model region, namely Seattle (3 million+ people), Vancouver (2 million+ people), and Portland (2 million+ people), all located above sedimentary basins amplifying the waves incident from the subduction zone. The estimated peak ground velocities (PGVs) for frequencies less than 0.5 Hz vary significantly with the assumed rise time. Using a mean rise time of 32 s, as estimated from source inversion of the 2004 M9+ Sumatra-Andaman event (Ji, 2006), PGVs reached 40 cm/s in Seattle and 10 cm/s in Vancouver and Portland. However, if the mean rise time is decreased to about 14 s, as suggested by the empirical regression by Somerville et al. (1999), PGVs are increased by 2-3 times at these locations. For the Mw8.5 events, PGVs would reach about 10 cm/s in Seattle, and about 5 cm/s in Vancouver and Portland. Combined with the extended duration of the shaking exceeding 1 minute for the Mw8.5 events and 2 minutes for the Mw9 events, these long-period ground motions may inflict significant damage on the built environment, in particular on the high-rises in downtown Seattle. However, the strongest shaking arrives 1-2 minutes after the earthquake nucleates, indicating that an early warning system in place may help mitigate loss of life in case of a megathrust earthquake in the Pacific Northwest. Additional efforts should analyse the simulated displacements on the ocean bottom for tsunami generation potential.

  12. Improving Energy Efficiency for the Vehicle Assembly Industry: A Discrete Event Simulation Approach

    NASA Astrophysics Data System (ADS)

    Oumer, Abduaziz; Mekbib Atnaw, Samson; Kie Cheng, Jack; Singh, Lakveer

    2016-11-01

    This paper presents a Discrete Event Simulation (DES) model for investigating and improving energy efficiency in a vehicle assembly line. The car manufacturing industry is one of the highest energy consuming industries. Using the Rockwell Arena DES package, a detailed model was constructed for an actual vehicle assembly plant. The sources of energy considered in this research are electricity and fuel, which are the two main types of energy sources used in a typical vehicle assembly plant. The model depicts the performance measurement for process-specific energy measures of the painting, welding, and assembling processes. A sound energy efficiency model within this industry has a two-fold advantage: reducing CO2 emissions and reducing the costs associated with fuel and electricity consumption. The paper starts with an overview of challenges in energy consumption within the facilities of an automotive assembly line and highlights the parameters for energy efficiency. The results of the simulation model indicated improvements for energy saving objectives and reduced costs.
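
    A hedged sketch of such an energy-tracking discrete-event model is given below, using the open-source simpy package in place of Rockwell Arena; the station list, power ratings, processing times, and arrival rate are invented for illustration.

```python
# Discrete-event sketch of painting/welding/assembly stations that accumulates
# energy use; simpy stands in for Rockwell Arena, and all numbers are assumed.
import random
import simpy

POWER_KW = {"welding": 45.0, "painting": 30.0, "assembly": 12.0}   # assumed station power ratings
MEAN_MIN = {"welding": 12.0, "painting": 20.0, "assembly": 35.0}   # assumed mean processing times
energy_kwh = {name: 0.0 for name in POWER_KW}

def car_body(env, stations):
    for name, station in stations.items():
        with station.request() as req:
            yield req
            duration = random.expovariate(1.0 / MEAN_MIN[name])    # minutes at this station
            yield env.timeout(duration)
            energy_kwh[name] += POWER_KW[name] * duration / 60.0   # kWh while the station runs

def arrivals(env, stations, interarrival=8.0):
    while True:
        env.process(car_body(env, stations))
        yield env.timeout(random.expovariate(1.0 / interarrival))

env = simpy.Environment()
stations = {name: simpy.Resource(env, capacity=2) for name in POWER_KW}
env.process(arrivals(env, stations))
env.run(until=8 * 60)                                              # one 8-hour shift, in minutes
print({name: round(kwh, 1) for name, kwh in energy_kwh.items()})
```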

  13. Simulation of metals transport and toxicity at a mine-impacted watershed: California Gulch, Colorado.

    PubMed

    Velleux, Mark L; Julien, Pierre Y; Rojas-Sanchez, Rosalia; Clements, William H; England, John F

    2006-11-15

    The transport and toxicity of metals at the California Gulch, Colorado mine-impacted watershed were simulated with a spatially distributed watershed model. Using a database of observations for the period 1984-2004, hydrology, sediment transport, and metals transport were simulated for a June 2003 calibration event and a September 2003 validation event. Simulated flow volumes were within approximately 10% of observed conditions. Observed ranges of total suspended solids, cadmium, copper, and zinc concentrations were also successfully simulated. The model was then used to simulate the potential impacts of a 1-in-100-year rainfall event. Driven by large flows and corresponding soil and sediment erosion for the 1-in-100-year event, estimated solids and metals export from the watershed is 10,000 metric tons for solids, 215 kg for Cd, 520 kg for Cu, and 15,300 kg for Zn. As expressed by the cumulative criterion unit (CCU) index, metals concentrations far exceed toxic effects thresholds, suggesting a high probability of toxic effects downstream of the gulch. More detailed Zn source analyses suggest that much of the Zn exported from the gulch originates from slag piles adjacent to the lower gulch floodplain and an old mining site located near the head of the lower gulch.

  14. Multi-scale comparison of source parameter estimation using empirical Green's function approach

    NASA Astrophysics Data System (ADS)

    Chen, X.; Cheng, Y.

    2015-12-01

    Analysis of earthquake source parameters requires correction for path effects, site response, and instrument response. The empirical Green's function (EGF) method is one of the most effective methods for removing path effects and station responses by taking the spectral ratio between a larger and a smaller event. The traditional EGF method requires identifying suitable event pairs and analyzing each event individually. This allows high quality estimations for strictly selected events; however, the quantity of resolvable source parameters is limited, which challenges the interpretation of spatial-temporal coherency. On the other hand, methods that exploit the redundancy of event-station pairs have been proposed, which utilize the stacking technique to obtain systematic source parameter estimations for a large quantity of events at the same time. This allows us to examine a large quantity of events systematically, facilitating analysis of spatial-temporal patterns and scaling relationships. However, it is unclear how much resolution is sacrificed during this process. In addition to the empirical Green's function calculation, the choice of model parameters and fitting methods also leads to biases. Here, using two regionally focused arrays, the OBS array in the Mendocino region and the borehole array in the Salton Sea geothermal field, we compare the results from the large-scale stacking analysis, small-scale cluster analysis, and single event-pair analysis with different fitting methods within completely different tectonic environments, in order to quantify the consistency and inconsistency in source parameter estimations and the associated problems.
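
    As one example of the fitting choices being compared, a minimal sketch of the spectral-ratio step is given below, assuming the standard omega-square (Brune-type) ratio model with two corner frequencies; the synthetic spectrum and starting values are illustrative only and are not the study's data.

```python
# Spectral-ratio fit assuming the omega-square source model: the large/small event
# ratio has the form R(f) = M_ratio * (1 + (f/fc_small)^2) / (1 + (f/fc_large)^2).
import numpy as np
from scipy.optimize import curve_fit

def brune_ratio(f, moment_ratio, fc_large, fc_small):
    return moment_ratio * (1.0 + (f / fc_small) ** 2) / (1.0 + (f / fc_large) ** 2)

f = np.logspace(-1, 2, 200)                                  # frequency, Hz
synthetic = brune_ratio(f, 300.0, 1.5, 12.0)                 # stand-in "observed" spectral ratio
observed = synthetic * np.exp(0.1 * np.random.default_rng(3).normal(size=f.size))

popt, _ = curve_fit(brune_ratio, f, observed, p0=[100.0, 1.0, 10.0])
print("moment ratio, fc_large, fc_small:", np.round(popt, 2))
```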

  15. Titan2D simulations of dome-collapse pyroclastic flows for crisis assessments on Montserrat

    NASA Astrophysics Data System (ADS)

    Widiwijayanti, C.; Voight, B.; Hidayat, D.; Patra, A.; Pitman, E.

    2010-12-01

    The Soufriere Hills Volcano (SHV), Montserrat, has experienced numerous episodes of lava dome collapses since 1995. Collapse volumes range from small rockfalls to major dome collapses (as much as ~200 M m3). Problems arise in hazards mitigation, particularly in zoning for populated areas. Determining the likely extent of flowage deposits in various scenarios is important for hazards zonation, provision of advice by scientists, and decision making by public officials. Towards resolution of this issue we have tested the TITAN2D code, calibrated parameters for an SHV database, and using updated topography have provided flowage maps for various scenarios and volume classes from SHV, for use in hazards assessments. TITAN2D is a map plane (depth averaged) simulator of granular flow and yields mass distributions over a DEM. Two Coulomb frictional parameters (basal and internal frictions) and initial source conditions (volume, source location, and source geometry) of single or multiple pulses in a dome-collapse type event control behavior of the flow. Flow kinematics are captured, so that the dynamics of flow can be examined spatially from frame to frame, or as a movie. Our hazard maps include not only the final deposit, but also areas inundated by moving debris prior to deposition. Simulations from TITAN2D were important for analysis of crises in the period 2007-2010. They showed that any very large mass released on the north slope would be strongly partitioned by local topography, and thus it was doubtful that flows of very large size (>20 M m3) could be generated in the Belham River drainage. This partitioning effect limited runout toward populated areas. These effects were interpreted to greatly reduce the down-valley risk of ash-cloud surges.

  16. The Gravitational Process Path (GPP) model (v1.0) - a GIS-based simulation framework for gravitational processes

    NASA Astrophysics Data System (ADS)

    Wichmann, Volker

    2017-09-01

    The Gravitational Process Path (GPP) model can be used to simulate the process path and run-out area of gravitational processes based on a digital terrain model (DTM). The conceptual model combines several components (process path, run-out length, sink filling and material deposition) to simulate the movement of a mass point from an initiation site to the deposition area. For each component several modeling approaches are provided, which makes the tool configurable for different processes such as rockfall, debris flows or snow avalanches. The tool can be applied to regional-scale studies such as natural hazard susceptibility mapping but also contains components for scenario-based modeling of single events. Both the modeling approaches and precursor implementations of the tool have proven their applicability in numerous studies, also including geomorphological research questions such as the delineation of sediment cascades or the study of process connectivity. This is the first open-source implementation, completely re-written, extended and improved in many ways. The tool has been committed to the main repository of the System for Automated Geoscientific Analyses (SAGA) and thus will be available with every SAGA release.

  17. Single-channel mixed signal blind source separation algorithm based on multiple ICA processing

    NASA Astrophysics Data System (ADS)

    Cheng, Xiefeng; Li, Ji

    2017-01-01

    Taking the separation of the fetal heart sound signal from the mixed signal obtained with an electronic stethoscope as the research background, this paper puts forward a single-channel mixed-signal blind source separation algorithm based on multiple ICA processing. Firstly, empirical mode decomposition (EMD) decomposes the single-channel mixed signal into multiple orthogonal signal components, which are then processed by ICA. The resulting independent signal components are called independent sub-components of the mixed signal. Then, by combining the independent sub-components with the single-channel mixed signal, the single channel is expanded into multiple channels, which turns the under-determined blind source separation problem into a well-posed blind source separation problem. Further, an estimate of the source signal is obtained by ICA processing. Finally, if the separation is not satisfactory, the previous separation result is combined with the single-channel mixed signal and the ICA processing is repeated until the desired estimate of the source signal is obtained. The simulation results show that the algorithm achieves good separation for single-channel mixed physiological signals.
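
    A minimal sketch of the described pipeline is given below, assuming the PyEMD package for the empirical mode decomposition and scikit-learn's FastICA; the synthetic heart-sound mixture and component counts are illustrative, not the authors' data or code.

```python
# EMD + ICA pipeline sketch on a synthetic stethoscope-like mixture; PyEMD and
# scikit-learn are assumed dependencies, and all signal parameters are invented.
import numpy as np
from PyEMD import EMD
from sklearn.decomposition import FastICA

fs = 1000.0
t = np.arange(0.0, 4.0, 1.0 / fs)
maternal = np.sin(2 * np.pi * 1.2 * t)                       # stand-in maternal heart sound
fetal = 0.3 * np.sin(2 * np.pi * 2.3 * t + 0.5)              # stand-in fetal heart sound
mixed = maternal + fetal + 0.05 * np.random.default_rng(4).normal(size=t.size)

imfs = EMD().emd(mixed)                                      # orthogonal components of the one channel
observations = np.vstack([mixed, imfs]).T                    # expand one channel into several

ica = FastICA(n_components=min(4, observations.shape[1]), random_state=0)
estimated_sources = ica.fit_transform(observations)          # independent sub-components
print(estimated_sources.shape)
```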

  18. Weather and extremes in the last Millennium - a challenge for climate modelling

    NASA Astrophysics Data System (ADS)

    Raible, Christoph C.; Blumer, Sandro R.; Gomez-Navarro, Juan J.; Lehner, Flavio

    2015-04-01

    Changes in the climate mean state are expected to influence society, but the socio-economic sensitivity to extreme events might be even more severe. Whether or not the current frequency and severity of extreme events is a unique characteristic of anthropogenic-driven climate change can be assessed by putting the observed changes in a long-term perspective. In doing so, early instrumental series and proxy archives are a rich source for investigating extreme events as well, in particular during the last millennium, yet they suffer from spatial and temporal scarcity. Therefore, simulations with coupled general circulation models (GCMs) could fill such gaps and help in deepening our process understanding. In this study, an overview of past and current efforts as well as challenges in modelling paleo weather and extreme events is presented. Using simulations of the last millennium, we investigate extreme midlatitude cyclone characteristics, precipitation, and their connection to large-scale atmospheric patterns in the North Atlantic European region. In cold climate states such as the Maunder Minimum, the North Atlantic Oscillation (NAO) is found to be predominantly in its negative phase. In this sense, simulations of different models agree with proxy findings for this period. However, some proxy data available for this period suggest an increase in storminess during this period, which could be interpreted as a positive phase of the NAO - a superficial contradiction. The simulated cyclones are partly reduced over Europe, which is consistent with the aforementioned negative phase of the NAO. However, as the meridional temperature gradient is increased during this period - which constitutes a source of low-level baroclinicity - they also intensify. This example illustrates how model simulations could be used to improve our proxy interpretation and to gain additional process understanding. Nevertheless, there are also limitations associated with climate modeling efforts to simulate the last millennium. In particular, these models still struggle to properly simulate atmospheric blocking events, an important dynamical feature for dry conditions during summer. Finally, new and promising ways of improving past climate modelling are briefly introduced. In particular, the use of dynamical downscaling is a powerful tool to bridge the gap between the coarsely resolved GCMs and the characteristics of the regional climate, which is potentially recorded in proxy archives. The representation of extreme events, in particular, could be improved by dynamical downscaling as processes are better resolved than in GCMs.

  19. Active Vibration Control for Helicopter Interior Noise Reduction Using Power Minimization

    NASA Technical Reports Server (NTRS)

    Mendoza, J.; Chevva, K.; Sun, F.; Blanc, A.; Kim, S. B.

    2014-01-01

    This report describes work performed by United Technologies Research Center (UTRC) for NASA Langley Research Center (LaRC) under Contract NNL11AA06C. The objective of this program is to develop technology to reduce helicopter interior noise resulting from multiple gear meshing frequencies. A novel active vibration control approach called Minimum Actuation Power (MAP) is developed. MAP is an optimal control strategy that minimizes the total input power into a structure by monitoring and varying the input power of controlling sources. MAP control was implemented without explicit knowledge of the phasing and magnitude of the excitation sources by driving the real part of the input power from the controlling sources to zero. It is shown that this occurs when the total mechanical input power from the excitation and controlling sources is a minimum. MAP theory is developed for multiple excitation sources with arbitrary relative phasing for single or multiple discrete frequencies and controlled by a single or multiple controlling sources. Simulations and experimental results demonstrate the feasibility of MAP for structural vibration reduction of a realistic rotorcraft interior structure. MAP control resulted in significant average global vibration reduction of a single frequency and multiple frequency excitations with one controlling actuator. Simulations also demonstrate the potential effectiveness of the observed vibration reductions on interior radiated noise.

  20. Simulating variable source problems via post processing of individual particle tallies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bleuel, D.L.; Donahue, R.J.; Ludewigt, B.A.

    2000-10-20

    Monte Carlo is an extremely powerful method of simulating complex, three dimensional environments without excessive problem simplification. However, it is often time consuming to simulate models in which the source can be highly varied. Similarly difficult are optimization studies involving sources in which many input parameters are variable, such as particle energy, angle, and spatial distribution. Such studies are often approached using brute force methods or intelligent guesswork. One field in which these problems are often encountered is accelerator-driven Boron Neutron Capture Therapy (BNCT) for the treatment of cancers. Solving the reverse problem of determining the best neutron source for optimal BNCT treatment can be accomplished by separating the time-consuming particle-tracking process of a full Monte Carlo simulation from the calculation of the source weighting factors which is typically performed at the beginning of a Monte Carlo simulation. By post-processing these weighting factors on a recorded file of individual particle tally information, the effect of changing source variables can be realized in a matter of seconds, instead of requiring hours or days for additional complete simulations. By intelligent source biasing, any number of different source distributions can be calculated quickly from a single Monte Carlo simulation. The source description can be treated as variable and the effect of changing multiple interdependent source variables on the problem's solution can be determined. Though the focus of this study is on BNCT applications, this procedure may be applicable to any problem that involves a variable source.
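
    A conceptual sketch of the re-weighting idea is given below: each recorded history keeps its sampled source energy, so a new source spectrum is applied by multiplying tallies by the ratio of new to originally sampled source probabilities. The tally model and spectra are invented for illustration and are not tied to any particular transport code.

```python
# Re-weighting sketch: one particle-tracking run, many source spectra. Each history
# stores its sampled source energy and its tally contribution (both invented here).
import numpy as np

rng = np.random.default_rng(5)
n_hist = 100_000
sampled_energy = rng.uniform(0.0, 2.5, n_hist)                 # MeV, drawn uniformly in the MC run
tally_per_hist = np.exp(-sampled_energy) * rng.random(n_hist)  # stand-in per-history dose tally

def retally(new_pdf):
    """Estimate the tally for a new source spectrum without re-running the transport."""
    original_pdf = 1.0 / 2.5                                   # the uniform density actually sampled
    weights = new_pdf(sampled_energy) / original_pdf
    return float(np.mean(weights * tally_per_hist))

soft = lambda E: np.exp(-E / 0.5) / (0.5 * (1.0 - np.exp(-5.0)))  # soft spectrum, normalized on [0, 2.5]
hard = lambda E: 2.0 * E / 2.5 ** 2                               # linearly increasing spectrum
print(retally(soft), retally(hard))                            # seconds instead of extra full simulations
```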

  1. Discrete-Event Simulation Models of Plasmodium falciparum Malaria

    PubMed Central

    McKenzie, F. Ellis; Wong, Roger C.; Bossert, William H.

    2008-01-01

    We develop discrete-event simulation models using a single “timeline” variable to represent the Plasmodium falciparum lifecycle in individual hosts and vectors within interacting host and vector populations. Where they are comparable, our conclusions regarding the relative importance of vector mortality and the durations of host immunity and parasite development are congruent with those of classic differential-equation models of malaria epidemiology. However, our results also imply that in regions with intense perennial transmission, the influence of mosquito mortality on malaria prevalence in humans may be rivaled by that of the duration of host infectivity. PMID:18668185

  2. NASA TileWorld manual (system version 2.2)

    NASA Technical Reports Server (NTRS)

    Philips, Andrew B.; Bresina, John L.

    1991-01-01

    The commands of the NASA TileWorld simulator are documented, along with information about how to run and extend it. The simulator, implemented in Common Lisp with Common Windows, encodes a particular range in a spectrum of domains for controllable research experiments. TileWorld consists of a two-dimensional grid of cells, a set of polygonal tiles, and a single agent which can grasp and move tiles. In addition to agent-executable actions, there is an external event over which the agent has no control; this event corresponds to a 'gust of wind'.

  3. Quantifying Intrinsic Variability of Sagittarius A* Using Closure Phase Measurements of the Event Horizon Telescope

    NASA Astrophysics Data System (ADS)

    Roelofs, Freek; Johnson, Michael D.; Shiokawa, Hotaka; Doeleman, Sheperd S.; Falcke, Heino

    2017-09-01

    General relativistic magnetohydrodynamic (GRMHD) simulations of accretion disks and jets associated with supermassive black holes show variability on a wide range of timescales. On timescales comparable to or longer than the gravitational timescale t_G = GM/c³, variation may be dominated by orbital dynamics of the inhomogeneous accretion flow. Turbulent evolution within the accretion disk is expected on timescales comparable to the orbital period, typically an order of magnitude larger than t_G. For Sgr A*, t_G is much shorter than the typical duration of a VLBI experiment, enabling us to study this variability within a single observation. Closure phases, the sum of interferometric visibility phases on a triangle of baselines, are particularly useful for studying this variability. In addition to a changing source structure, variations in observed closure phase can also be due to interstellar scattering, thermal noise, and the changing geometry of projected baselines over time due to Earth rotation. We present a metric that is able to distinguish the latter two from intrinsic or scattering variability. This metric is validated using synthetic observations of GRMHD simulations of Sgr A*. When applied to existing multi-epoch EHT data of Sgr A*, this metric shows that the data are most consistent with source models containing intrinsic variability from source dynamics, interstellar scattering, or a combination of those. The effects of black hole inclination, orientation, spin, and morphology (disk or jet) on the expected closure phase variability are also discussed.

  4. Inhalation exposure to cleaning products: application of a two-zone model.

    PubMed

    Earnest, C Matt; Corsi, Richard L

    2013-01-01

    In this study, modifications were made to previously applied two-zone models to address important factors that can affect exposures during cleaning tasks. Specifically, we expand on previous applications of the two-zone model by (1) introducing the source in discrete elements (source-cells) as opposed to a complete instantaneous release, (2) placing source cells in both the inner (near person) and outer zones concurrently, (3) treating each source cell as an independent mixture of multiple constituents, and (4) tracking the time-varying liquid concentration and emission rate of each constituent in each source cell. Three experiments were performed in an environmentally controlled chamber with a thermal mannequin and a simplified pure chemical source to simulate emissions from a cleaning product. Gas phase concentration measurements were taken in the bulk air and in the breathing zone of the mannequin to evaluate the model. The mean ratio of the integrated concentration in the mannequin's breathing zone to the concentration in the outer zone was 4.3 (standard deviation, σ = 1.6). The mean ratio of measured concentration in the breathing zone to predicted concentrations in the inner zone was 0.81 (σ = 0.16). Intake fractions ranged from 1.9 × 10⁻³ to 2.7 × 10⁻³. Model results reasonably predict those of previous exposure monitoring studies and indicate the inadequacy of well-mixed single-zone model applications for some but not all cleaning events.
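
    For orientation, a generic near-field/far-field (two-zone) mass balance is sketched below; it omits the paper's source-cell and multi-constituent extensions, and the volumes, airflows, and emission rate are assumed values.

```python
# Generic near-field/far-field (two-zone) mass balance; a simplified stand-in for
# the modified model in the paper, with assumed volumes, airflows, and emission rate.
from scipy.integrate import solve_ivp

V_near, V_far = 1.0, 30.0                # near-field and room volumes, m^3
beta, Q = 5.0, 15.0                      # inter-zone airflow and room ventilation, m^3/h
G = lambda t: 50.0 if t < 0.25 else 0.0  # mg/h emitted during a 15-minute cleaning task

def rhs(t, C):
    Cn, Cf = C                           # near-field and far-field concentrations, mg/m^3
    dCn = (G(t) + beta * (Cf - Cn)) / V_near
    dCf = (beta * (Cn - Cf) - Q * Cf) / V_far
    return [dCn, dCf]

sol = solve_ivp(rhs, (0.0, 2.0), [0.0, 0.0], max_step=0.01)   # two hours of simulated time
print(f"peak near/far concentration ratio: {sol.y[0].max() / sol.y[1].max():.1f}")
```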

  5. Ground-Motion Variability for a Strike-Slip Earthquake from Broadband Ground-Motion Simulations

    NASA Astrophysics Data System (ADS)

    Iwaki, A.; Maeda, T.; Morikawa, N.; Fujiwara, H.

    2016-12-01

    One of the important issues in seismic hazard analysis is the evaluation of ground-motion variability due to the epistemic and aleatory uncertainties in various aspects of ground-motion simulations. This study investigates the within-event ground-motion variability in broadband ground-motion simulations for strike-slip events. We conduct ground-motion simulations for a past event (2000 MW6.6 Tottori earthquake) using a set of characterized source models (e.g. Irikura and Miyake, 2011) considering aleatory variability. Broadband ground motion is computed by a hybrid approach that combines a 3D finite-difference method (> 1 s) and the stochastic Green's function method (< 1 s), using the 3D velocity model J-SHIS v2. We consider various locations of the asperities, which are defined as the regions with large slip and stress drop within the fault, and the rupture nucleation point (hypocenter). Ground motion records at 29 K-NET and KiK-net stations are used to validate our simulations. By comparing the simulated and observed ground motion, we found that the performance of the simulations is acceptable under the condition that the source parameters are poorly constrained. In addition to the observation stations, we set 318 virtual receivers with the spatial intervals of 10 km for statistical analysis of the simulated ground motion. The maximum fault-distance is 160 km. Standard deviation (SD) of the simulated acceleration response spectra (Sa, 5% damped) of RotD50 component (Boore, 2010) is investigated at each receiver. SD from 50 different patterns of asperity locations is generally smaller than 0.15 in terms of log10 (0.34 in natural log). It shows dependence on distance at periods shorter than 1 s; SD increases as the distance decreases. On the other hand, SD from 39 different hypocenter locations is again smaller than 0.15 in log10, and showed azimuthal dependence at long periods; it increases as the rupture directivity parameter Xcosθ(Somerville et al. 1997) increases at periods longer than 1 s. The characteristics of ground-motion variability inferred from simulations can provide information on variability in simulation-based seismic hazard assessment for future earthquakes. We will further investigate the variability in other source parameters; rupture velocity and short-period level.

  6. Kinematic and Dynamic Source Rupture Scenario for Potential Megathrust Event along the Southernmost Ryukyu Trench

    NASA Astrophysics Data System (ADS)

    Lin, T. C.; Hu, F.; Chen, X.; Lee, S. J.; Hung, S. H.

    2017-12-01

    The kinematic source model is widely used for the simulation of an earthquake because of its simplicity and ease of application. On the other hand, the dynamic source model is a more complex but important tool that can help us to understand the physics of earthquake initiation, propagation, and healing. In this study, we focus on the southernmost Ryukyu Trench, which is extremely close to northern Taiwan. Interseismic GPS data in northeast Taiwan show a pattern of strain accumulation, which suggests that the maximum magnitude of a potential future earthquake in this area is probably about magnitude 8.7. We develop dynamic rupture models for the hazard estimation of the potential megathrust event based on the kinematic rupture scenarios which are inverted using the interseismic GPS data. In addition, several kinematic source rupture scenarios with different characterized slip patterns are also considered to better constrain the dynamic rupture process. The initial stresses and friction properties are tested using the trial-and-error method, together with the plate coupling and tectonic features. An analysis of the dynamic stress field associated with the slip prescribed in the kinematic models can indicate possible inconsistencies with the physics of faulting. Furthermore, the dynamic and kinematic rupture models are used to simulate the ground shaking based on a 3-D spectral-element method. We analyze ShakeMap and ShakeMovie results from the simulations to evaluate the differences in the influence over the island between the source models. A dispersive tsunami-propagation simulation is also carried out to evaluate the maximum tsunami wave height along the coastal areas of Taiwan due to the coseismic seafloor deformation of the different source models. The results of this numerical simulation study can provide physics-based information on megathrust earthquake scenarios for emergency response agencies to take appropriate action before the really big one happens.

  7. Importance of vesicle release stochasticity in neuro-spike communication.

    PubMed

    Ramezani, Hamideh; Akan, Ozgur B

    2017-07-01

    The aim of this paper is to propose a stochastic model for the vesicle release process, a part of neuro-spike communication. Hence, we study the biological events occurring in this process and use microphysiological simulations to observe the functionality of these events. Since the most important source of variability in vesicle release probability is the opening of voltage-dependent calcium channels (VDCCs) followed by the influx of calcium ions through these channels, we propose a stochastic model for this event, while using a deterministic model for the other variability sources. To capture the stochasticity of calcium influx to the pre-synaptic neuron in our model, we study its statistics and find that it can be modeled by a distribution defined based on Normal and Logistic distributions.

  8. Effect of Loss on Multiplexed Single-Photon Sources (Open Access Publisher’s Version)

    DTIC Science & Technology

    2015-04-28

    lossy components on near- and long-term experimental goals, we simulate the multiplexed sources when used for many-photon state generation under various...efficient integer factorization and digital quantum simulation [7, 8], which relies critically on the development of a high-performance, on-demand photon ...(SPDC) or spontaneous four-wave mixing: parametric processes which use a pump laser in a nonlinear material to spontaneously generate photon pairs

  9. Real-time monitoring of Lévy flights in a single quantum system

    NASA Astrophysics Data System (ADS)

    Issler, M.; Höller, J.; Imamoǧlu, A.

    2016-02-01

    Lévy flights are random walks where the dynamics is dominated by rare events. Even though they have been studied in vastly different physical systems, their observation in a single quantum system has remained elusive. Here we analyze a periodically driven open central spin system and demonstrate theoretically that the dynamics of the spin environment exhibits Lévy flights. For the particular realization in a single-electron charged quantum dot driven by periodic resonant laser pulses, we use Monte Carlo simulations to confirm that the long waiting times between successive nuclear spin-flip events are governed by a power-law distribution; the corresponding exponent η = -3/2 can be directly measured in real time by observing the waiting time distribution of successive photon emission events. Remarkably, the dominant intrinsic limitation of the scheme arising from nuclear quadrupole coupling can be minimized by adjusting the magnetic field or by implementing spin echo.
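
    A minimal sketch of how such power-law waiting times can be drawn by inverse-transform sampling is given below; it assumes a pure Pareto-type tail with the quoted exponent and a hypothetical minimum waiting time, and is not the authors' Monte Carlo code.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def levy_waiting_times(n, t_min=1.0, alpha=0.5):
        """Draw waiting times from a Pareto-type law p(t) ~ t**-(1+alpha), t >= t_min.

        alpha = 0.5 gives the t**(-3/2) tail discussed in the abstract.
        Inverse-transform sampling: t = t_min * u**(-1/alpha), u ~ U(0, 1).
        """
        u = rng.uniform(size=n)
        return t_min * u ** (-1.0 / alpha)

    waits = levy_waiting_times(100_000)
    # Heavy tail: the mean is dominated by rare, very long waits.
    print(f"median wait = {np.median(waits):.2f}, mean wait = {waits.mean():.2e}")
    ```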

  10. Measurement and Analysis of Multiple Output Transient Propagation in BJT Analog Circuits

    NASA Astrophysics Data System (ADS)

    Roche, Nicolas J.-H.; Khachatrian, A.; Warner, J. H.; Buchner, S. P.; McMorrow, D.; Clymer, D. A.

    2016-08-01

    The propagation of Analog Single Event Transients (ASETs) to multiple outputs of Bipolar Junction Transistor (BJT) Integrated Circuits (ICs) is reported for the first time. The results demonstrate that ASETs can appear at several outputs of a BJT amplifier or comparator as a result of a single ion or single laser pulse strike at a single physical location on the chip of a large-scale integrated BJT analog circuit. This is independent of interconnect cross-talk or charge-sharing effects. Laser experiments, together with SPICE simulations and analysis of the ASET's propagation in the s-domain, are used to explain how multiple-output transients (MOTs) are generated and propagate in the device. This study demonstrates that both the charge collection associated with an ASET and the ASET's shape, commonly used to characterize the propagation of SETs in devices and systems, are unable to explain quantitatively how MOTs propagate through an integrated analog circuit. The analysis methodology adopted here involves combining the Fourier transform of the propagating signal and the current-source transfer function in the s-domain. This approach reveals the mechanisms involved in the transient signal propagation from its point of generation to one or more outputs without the signal following a continuous interconnect path.

  11. Interpreting Space-Mission LET Requirements for SEGR in Power MOSFETs

    NASA Technical Reports Server (NTRS)

    Lauenstein, J. M.; Ladbury, R. L.; Batchelor, D. A.; Goldsman, N.; Kim, H. S.; Phan, A. M.

    2010-01-01

    A Technology Computer Aided Design (TCAD) simulation-based method is developed to evaluate whether derating of high-energy heavy-ion accelerator test data bounds the risk for single-event gate rupture (SEGR) from much higher energy on-orbit ions for a mission linear energy transfer (LET) requirement. It is shown that a typical derating factor of 0.75 applied to a single-event effect (SEE) response curve defined by high-energy accelerator SEGR test data provides reasonable on-orbit hardness assurance, although in a high-voltage power MOSFET, it did not bound the risk of failure.

  12. Modeling from Local to Subsystem Level Effects in Analog and Digital Circuits Due to Space Induced Single Event Transients

    NASA Technical Reports Server (NTRS)

    Perez, Reinaldo J.

    2011-01-01

    Single Event Transients (SETs) in analog and digital electronics, generated by highly energetic nuclear particles in space, can disrupt temporarily, and sometimes permanently, the functionality and performance of electronics in space vehicles. This work first provides some insights into the modeling of SETs in electronic circuits that can be used in SPICE-like simulators. It then presents methodologies, one of which was developed by this author, for the assessment of SETs at different levels of integration in electronics, from the circuit level to the subsystem level.

  13. Discussions On Worst-Case Test Condition For Single Event Burnout

    NASA Astrophysics Data System (ADS)

    Liu, Sandra; Zafrani, Max; Sherman, Phillip

    2011-10-01

    This paper discusses the failure characteristics of single-event burnout (SEB) on power MOSFETs based on analyzing the quasi-stationary avalanche simulation curves. The analyses show that the worst-case test condition for SEB would be to use the ion with the highest mass, which results in the highest transient current due to charge deposition and displacement damage. The analyses also show it is possible to build power MOSFETs that will not exhibit SEB even when tested with the heaviest ion, which has been verified by heavy-ion test data on SEB-sensitive and SEB-immune devices.

  14. Single Event Effects mitigation with TMRG tool

    NASA Astrophysics Data System (ADS)

    Kulis, S.

    2017-01-01

    Single Event Effects (SEE) are a major concern for integrated circuits exposed to radiation. Several techniques have been proposed to protect circuits against radiation-induced upsets. Among others, the Triple Modular Redundancy (TMR) technique is one of the most popular. The purpose of the Triple Modular Redundancy Generator (TMRG) tool is to automate the process of triplicating digital circuits, freeing the designer from introducing the TMR code manually at the implementation stage. It helps to ensure that triplicated logic is maintained through the design process. Finally, the tool streamlines the process of introducing SEEs in gate-level simulations for final verification.
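
    TMRG itself operates on HDL code, but the masking principle it automates can be illustrated with a toy majority voter; the sketch below (hypothetical register values, not the tool's output) shows a single injected upset being out-voted by the two unaffected copies.

    ```python
    import random

    def majority(a: int, b: int, c: int) -> int:
        """Bitwise majority voter over three copies of a register."""
        return (a & b) | (a & c) | (b & c)

    random.seed(2)
    value = 0b1011_0110          # logical register value
    copies = [value, value, value]

    # Inject a single-event upset: flip one random bit in one randomly chosen copy.
    hit = random.randrange(3)
    copies[hit] ^= 1 << random.randrange(8)

    voted = majority(*copies)
    print(f"upset copy {hit}: {copies[hit]:08b}, voted: {voted:08b}, masked: {voted == value}")
    ```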

  15. Stochastic point-source modeling of ground motions in the Cascadia region

    USGS Publications Warehouse

    Atkinson, G.M.; Boore, D.M.

    1997-01-01

    A stochastic model is used to develop preliminary ground motion relations for the Cascadia region for rock sites. The model parameters are derived from empirical analyses of seismographic data from the Cascadia region. The model is based on a Brune point source characterized by a stress parameter of 50 bars. The model predictions are compared to ground-motion data from the Cascadia region and to data from large earthquakes in other subduction zones. The point-source simulations match the observations from moderate events, and from larger events at distances greater than about 100 km. The discrepancy at large magnitudes suggests further work on modeling finite-fault effects and regional attenuation is warranted. In the meantime, the preliminary equations are satisfactory for predicting motions from events of M < 7 and provide conservative estimates of motions from larger events at distances less than 100 km.
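
    For context, the sketch below evaluates the standard Brune ω⁻² point-source spectrum that underlies this kind of stochastic model; the 50-bar stress parameter follows the abstract, while the magnitude and shear-wave velocity are illustrative choices rather than the paper's calibrated values.

    ```python
    import numpy as np

    def brune_corner_freq(m0_dyne_cm: float, stress_bars: float, beta_km_s: float = 3.5) -> float:
        """Brune (1970) corner frequency, fc = 4.9e6 * beta * (stress_drop / M0)**(1/3),
        with M0 in dyne-cm, stress drop in bars and beta in km/s."""
        return 4.9e6 * beta_km_s * (stress_bars / m0_dyne_cm) ** (1.0 / 3.0)

    def omega_squared_accel_spectrum(f: np.ndarray, m0_dyne_cm: float, fc: float) -> np.ndarray:
        """Shape of the omega-squared acceleration source spectrum (unnormalized)."""
        return (2.0 * np.pi * f) ** 2 * m0_dyne_cm / (1.0 + (f / fc) ** 2)

    m0 = 10 ** (1.5 * 6.5 + 16.05)   # moment of an M 6.5 event in dyne-cm (Hanks & Kanamori)
    fc = brune_corner_freq(m0, stress_bars=50.0)
    f = np.logspace(-2, 1, 7)
    print(f"fc = {fc:.3f} Hz")
    print(omega_squared_accel_spectrum(f, m0, fc))
    ```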

  16. Assessment of long-term knowledge retention following single-day simulation training for uncommon but critical obstetrical events

    PubMed Central

    Vadnais, Mary A.; Dodge, Laura E.; Awtrey, Christopher S.; Ricciotti, Hope A.; Golen, Toni H.; Hacker, Michele R.

    2013-01-01

    Objective The objectives were to determine (i) whether simulation training results in short-term and long-term improvement in the management of uncommon but critical obstetrical events and (ii) to determine whether there was additional benefit from annual exposure to the workshop. Methods Physicians completed a pretest to measure knowledge and confidence in the management of eclampsia, shoulder dystocia, postpartum hemorrhage and vacuum-assisted vaginal delivery. They then attended a simulation workshop and immediately completed a posttest. Residents completed the same posttests 4 and 12 months later, and attending physicians completed the posttest at 12 months. Physicians participated in the same simulation workshop 1 year later and then completed a final posttest. Scores were compared using paired t-tests. Results Physicians demonstrated improved knowledge and comfort immediately after simulation. Residents maintained this improvement at 1 year. Attending physicians remained more comfortable managing these scenarios up to 1 year later; however, knowledge retention diminished with time. Repeating the simulation after 1 year brought additional improvement to physicians. Conclusion Simulation training can result in short-term and contribute to long-term improvement in objective measures of knowledge and comfort level in managing uncommon but critical obstetrical events. Repeat exposure to simulation training after 1 year can yield additional benefits. PMID:22191668

  17. Technology, design, simulation, and evaluation for SEP-hardened circuits

    NASA Technical Reports Server (NTRS)

    Adams, J. R.; Allred, D.; Barry, M.; Rudeck, P.; Woodruff, R.; Hoekstra, J.; Gardner, H.

    1991-01-01

    This paper describes the technology, design, simulation, and evaluation for improvement of the Single Event Phenomena (SEP) hardness of gate-array and SRAM cells. Through the use of design and processing techniques, it is possible to achieve an SEP error rate less than 1.0 x 10^-10 errors/bit-day for a 90 percent worst-case geosynchronous orbit environment.

  18. Simulated single molecule microscopy with SMeagol.

    PubMed

    Lindén, Martin; Ćurić, Vladimir; Boucharin, Alexis; Fange, David; Elf, Johan

    2016-08-01

    SMeagol is a software tool to simulate highly realistic microscopy data based on spatial systems biology models, in order to facilitate development, validation and optimization of advanced analysis methods for live cell single molecule microscopy data. SMeagol runs on Matlab R2014 and later, and uses compiled binaries in C for reaction-diffusion simulations. Documentation, source code and binaries for Mac OS, Windows and Ubuntu Linux can be downloaded from http://smeagol.sourceforge.net. Contact: johan.elf@icm.uu.se. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.

  19. Characterization and Simulation of Gunfire with Wavelets

    DOE PAGES

    Smallwood, David O.

    1999-01-01

    Gunfire is used as an example to show how the wavelet transform can be used to characterize and simulate nonstationary random events when an ensemble of events is available. The structural response to nearby firing of a high-firing rate gun has been characterized in several ways as a nonstationary random process. The current paper will explore a method to describe the nonstationary random process using a wavelet transform. The gunfire record is broken up into a sequence of transient waveforms each representing the response to the firing of a single round. A wavelet transform is performed on each of these records. The gunfire is simulated by generating realizations of records of a single-round firing by computing an inverse wavelet transform from Gaussian random coefficients with the same mean and standard deviation as those estimated from the previously analyzed gunfire record. The individual records are assembled into a realization of many rounds firing. A second-order correction of the probability density function is accomplished with a zero memory nonlinear function. The method is straightforward, easy to implement, and produces a simulated record much like the measured gunfire record.
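
    A minimal sketch of the analysis/synthesis idea is given below, assuming a stand-in transient and the PyWavelets package (the paper does not name a software implementation, and the zero-memory nonlinear correction is omitted): decompose one single-round record, estimate per-level coefficient statistics, then synthesize new rounds from Gaussian coefficients with matched mean and standard deviation and assemble them into a burst.

    ```python
    import numpy as np
    import pywt

    rng = np.random.default_rng(3)

    # Stand-in for a measured single-round transient (decaying random oscillation).
    t = np.linspace(0.0, 0.05, 2048)
    round_record = np.exp(-t / 0.01) * rng.standard_normal(t.size)

    # Wavelet-analyze the measured round.
    coeffs = pywt.wavedec(round_record, "db8", level=6)

    def synthesize_round():
        """New realization: Gaussian coefficients with the per-level mean/std
        estimated from the analyzed record, then an inverse wavelet transform."""
        synth = [rng.normal(c.mean(), c.std(ddof=1), size=c.shape) for c in coeffs]
        return pywt.waverec(synth, "db8")

    # Assemble several synthetic rounds into a burst at a fixed firing interval.
    interval = 512                      # samples between rounds (hypothetical rate)
    one_round = synthesize_round()
    burst = np.zeros(interval * 6 + one_round.size)
    for k in range(6):
        new = synthesize_round()
        burst[k * interval : k * interval + new.size] += new
    print(burst.shape)
    ```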

  20. Wave inhibition by sea ice enables trans-Atlantic ice rafting of debris during Heinrich Events

    NASA Astrophysics Data System (ADS)

    Wagner, T. J. W.; Dell, R.; Eisenman, I.; Keeling, R. F.; Padman, L.; Severinghaus, J. P.

    2017-12-01

    The thickness of the ice-rafted debris (IRD) layers that signal Heinrich Events declines far more gradually with distance from the iceberg sources than would be expected based on present-day iceberg trajectories. Here we model icebergs as passive Lagrangian tracers driven by ocean currents, winds, and sea surface temperatures. The icebergs are released in a comprehensive climate model simulation of the last glacial maximum (LGM), as well as a simulation of the modern climate. The two simulated climates result in qualitatively similar distributions of iceberg meltwater and hence debris, with the colder temperatures of the LGM having only a relatively small effect on meltwater spread. In both scenarios, meltwater flux falls off rapidly with zonal distance from the source, in contrast with the more uniform spread of IRD in sediment cores. In order to address this discrepancy, we propose a physical mechanism that could have prolonged the lifetime of icebergs during Heinrich events. The mechanism involves a surface layer of cold and fresh meltwater formed from, and retained around, densely packed armadas of icebergs. This leads to wintertime sea ice formation even in relatively low latitudes. The sea ice in turn shields the icebergs from wave erosion, which is the main source of iceberg ablation. We find that allowing sea ice to form around all icebergs during four months each winter causes the model to approximately agree with the distribution of IRD in sediment cores.

  1. Juvenile sparrows preferentially eavesdrop on adult song interactions

    PubMed Central

    Templeton, Christopher N.; Akçay, Çağlar; Campbell, S. Elizabeth; Beecher, Michael D.

    2010-01-01

    Recent research has demonstrated that bird song learning is influenced by social factors, but so far has been unable to isolate the particular social variables central to the learning process. Here we test the hypothesis that eavesdropping on singing interactions of adults is a key social event in song learning by birds. In a field experiment, we compared the response of juvenile male song sparrows (Melospiza melodia) to simulated adult counter-singing versus simulated solo singing. We used radio telemetry to follow the movements of each focal bird and assess his response to each playback trial. Juveniles approached the playback speakers when exposed to simulated interactive singing of two song sparrows, but not when exposed to simulated solo singing of a single song sparrow, which in fact they treated similar to heterospecific singing. Although the young birds approached simulated counter-singing, neither did they approach closely, nor did they vocalize themselves, suggesting that the primary function of approach was to permit eavesdropping on these singing interactions. These results indicate that during the prime song-learning phase, juvenile song sparrows are attracted to singing interactions between adults but not to singing by a single bird and suggest that singing interactions may be particularly powerful song-tutoring events. PMID:19846461

  2. Limits on the Efficiency of Event-Based Algorithms for Monte Carlo Neutron Transport

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Romano, Paul K.; Siegel, Andrew R.

    The traditional form of parallelism in Monte Carlo particle transport simulations, wherein each individual particle history is considered a unit of work, does not lend itself well to data-level parallelism. Event-based algorithms, which were originally used for simulations on vector processors, may offer a path toward better utilizing data-level parallelism in modern computer architectures. In this study, a simple model is developed for estimating the efficiency of the event-based particle transport algorithm under two sets of assumptions. Data collected from simulations of four reactor problems using OpenMC was then used in conjunction with the models to calculate the speedup due to vectorization as a function of the size of the particle bank and the vector width. When each event type is assumed to have constant execution time, the achievable speedup is directly related to the particle bank size. We observed that the bank size generally needs to be at least 20 times greater than vector size to achieve vector efficiency greater than 90%. Lastly, when the execution times for events are allowed to vary, the vector speedup is also limited by differences in execution time for events being carried out in a single event-iteration.

  3. Limits on the Efficiency of Event-Based Algorithms for Monte Carlo Neutron Transport

    DOE PAGES

    Romano, Paul K.; Siegel, Andrew R.

    2017-07-01

    The traditional form of parallelism in Monte Carlo particle transport simulations, wherein each individual particle history is considered a unit of work, does not lend itself well to data-level parallelism. Event-based algorithms, which were originally used for simulations on vector processors, may offer a path toward better utilizing data-level parallelism in modern computer architectures. In this study, a simple model is developed for estimating the efficiency of the event-based particle transport algorithm under two sets of assumptions. Data collected from simulations of four reactor problems using OpenMC was then used in conjunction with the models to calculate the speedup due to vectorization as a function of the size of the particle bank and the vector width. When each event type is assumed to have constant execution time, the achievable speedup is directly related to the particle bank size. We observed that the bank size generally needs to be at least 20 times greater than vector size to achieve vector efficiency greater than 90%. Lastly, when the execution times for events are allowed to vary, the vector speedup is also limited by differences in execution time for events being carried out in a single event-iteration.
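
    The quantitative model in the paper is analytic; the toy Monte Carlo below (all parameters hypothetical, not the Romano & Siegel model) merely illustrates the qualitative effect it describes: lane utilization improves as the particle bank grows relative to the vector width, because partially filled vector passes waste lanes as histories terminate.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    def vector_efficiency(bank_size: int, vector_width: int, p_absorb: float = 0.1) -> float:
        """Toy estimate of SIMD lane utilization for an event-based transport sweep.

        Each history survives a geometric number of event-iterations (absorption
        probability p_absorb per event).  Every iteration the surviving particles
        are processed in ceil(alive / vector_width) full-width vector passes, so
        partially filled passes waste lanes.
        """
        events_per_particle = rng.geometric(p_absorb, size=bank_size)
        useful = events_per_particle.sum()
        passes = 0
        for it in range(1, events_per_particle.max() + 1):
            alive = int((events_per_particle >= it).sum())
            passes += -(-alive // vector_width)          # ceil division
        return useful / (passes * vector_width)

    width = 8
    for ratio in (1, 5, 20, 100):
        eff = vector_efficiency(ratio * width, width)
        print(f"bank = {ratio * width:5d}, width = {width}: efficiency ~ {eff:.2f}")
    ```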

  4. pyLIMA: An Open-source Package for Microlensing Modeling. I. Presentation of the Software and Analysis of Single-lens Models

    NASA Astrophysics Data System (ADS)

    Bachelet, E.; Norbury, M.; Bozza, V.; Street, R.

    2017-11-01

    Microlensing is a unique tool, capable of detecting the “cold” planets between ˜1 and 10 au from their host stars and even unbound “free-floating” planets. This regime has been poorly sampled to date owing to the limitations of alternative planet-finding methods, but a watershed in discoveries is anticipated in the near future thanks to the planned microlensing surveys of WFIRST-AFTA and Euclid's Extended Mission. Of the many challenges inherent in these missions, the modeling of microlensing events will be of primary importance, yet it is often time-consuming, complex, and perceived as a daunting barrier to participation in the field. The large scale of future survey data products will require thorough but efficient modeling software, but, unlike other areas of exoplanet research, microlensing currently lacks a publicly available, well-documented package to conduct this type of analysis. We present version 1.0 of the python Lightcurve Identification and Microlensing Analysis (pyLIMA). This software is written in Python and uses existing packages as much as possible to make it widely accessible. In this paper, we describe the overall architecture of the software and the core modules for modeling single-lens events. To verify the performance of this software, we use it to model both real data sets from events published in the literature and generated test data produced using pyLIMA's simulation module. The results demonstrate that pyLIMA is an efficient tool for microlensing modeling. We will expand pyLIMA to consider more complex phenomena in the following papers.
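
    pyLIMA's own API is not reproduced here; the sketch below simply evaluates the standard point-source point-lens (Paczynski) magnification that single-lens modeling software of this kind fits to photometry, with hypothetical event parameters.

    ```python
    import numpy as np

    def pspl_magnification(t: np.ndarray, t0: float, u0: float, tE: float) -> np.ndarray:
        """Point-source point-lens magnification.

        u(t) : lens-source separation in Einstein radii, sqrt(u0^2 + ((t - t0)/tE)^2)
        A(u) : (u^2 + 2) / (u * sqrt(u^2 + 4))
        """
        u = np.sqrt(u0 ** 2 + ((t - t0) / tE) ** 2)
        return (u ** 2 + 2.0) / (u * np.sqrt(u ** 2 + 4.0))

    # Hypothetical event: peak at t0 = 150 d, impact parameter 0.1, timescale 25 d.
    t = np.linspace(100.0, 200.0, 5)
    print(pspl_magnification(t, t0=150.0, u0=0.1, tE=25.0))
    ```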

  5. The Influence of Preferential Flow on Pressure Propagation and Landslide Triggering of the Rocca Pitigliana Landslide

    NASA Astrophysics Data System (ADS)

    Shao, W.; Bogaard, T.; Bakker, M.; Berti, M.; Savenije, H. H. G.

    2016-12-01

    The fast pore water pressure response to rain events is an important triggering factor for slope instability. The fast pressure response may be caused by preferential flow that bypasses the soil matrix. Currently, most of the hydro-mechanical models simulate pore water pressure using a single-permeability model, which cannot quantify the effects of preferential flow on pressure propagation and landslide triggering. Previous studies showed that a model based on the linear-diffusion equation can simulate the fast pressure propagation in near-saturated landslides such as the Rocca Pitigliana landslide. In such a model, the diffusion coefficient depends on the degree of saturation, which makes it difficult to use the model for predictions. In this study, the influence of preferential flow on pressure propagation and slope stability is investigated with a 1D dual-permeability model coupled with an infinite-slope stability approach. The dual-permeability model uses two modified Darcy-Richards equations to simultaneously simulate the matrix flow and preferential flow in hillslopes. The simulated pressure head is used in an infinite-slope stability analysis to identify the influence of preferential flow on the fast pressure response and landslide triggering. The dual-permeability model simulates the height and arrival of the pressure peak reasonably well. Performance of the dual-permeability model is as good as or better than the linear-diffusion model even though the dual-permeability model is calibrated for two single pulse rain events only, while the linear-diffusion model is calibrated for each rain event separately.
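
    The stability criterion referred to above is the classical infinite-slope factor of safety; a minimal sketch with hypothetical soil parameters (not the Rocca Pitigliana calibration) shows how a rising pore pressure from a simulated preferential-flow response drives FS below unity.

    ```python
    import numpy as np

    def infinite_slope_fs(c_eff, phi_deg, gamma, z, beta_deg, pore_pressure):
        """Infinite-slope factor of safety with pore pressure at the slip surface.

        FS = [c' + (gamma*z*cos^2(beta) - u) * tan(phi')] / (gamma*z*sin(beta)*cos(beta))
        c_eff [kPa], phi_deg [deg], gamma [kN/m^3], z [m], beta_deg [deg], u [kPa]
        """
        beta = np.radians(beta_deg)
        phi = np.radians(phi_deg)
        normal = gamma * z * np.cos(beta) ** 2 - pore_pressure
        shear = gamma * z * np.sin(beta) * np.cos(beta)
        return (c_eff + normal * np.tan(phi)) / shear

    # Hypothetical slope: FS drops below 1 as the simulated pressure head rises.
    for u in (0.0, 10.0, 20.0, 30.0):            # kPa at the potential slip surface
        print(u, round(infinite_slope_fs(5.0, 30.0, 19.0, 3.0, 35.0, u), 2))
    ```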

  6. Distribution of breakage events in random packings of rodlike particles.

    PubMed

    Grof, Zdeněk; Štěpánek, František

    2013-07-01

    Uniaxial compaction and breakage of rodlike particle packing has been studied using a discrete element method simulation. A scaling relationship between the applied stress, the number of breakage events, and the number-mean particle length has been derived and compared with computational experiments. Based on results for a wide range of intrinsic particle strengths and initial particle lengths, it seems that a single universal relation can be used to describe the incidence of breakage events during compaction of rodlike particle layers.

  7. Extension of Characterized Source Model for Broadband Strong Ground Motion Simulations (0.1-50s) of M9 Earthquake

    NASA Astrophysics Data System (ADS)

    Asano, K.; Iwata, T.

    2014-12-01

    After the 2011 Tohoku earthquake in Japan (Mw9.0), many papers on the source model of this mega subduction earthquake have been published. From our study on the modeling of strong motion waveforms in the period range 0.1-10s, four isolated strong motion generation areas (SMGAs) were identified in the area deeper than 25 km (Asano and Iwata, 2012). The locations of these SMGAs were found to correspond to the asperities of M7-class events in the 1930s. However, many studies on kinematic rupture modeling using seismic, geodetic and tsunami data revealed the existence of a large slip area from the trench to the hypocenter (e.g., Fujii et al., 2011; Koketsu et al., 2011; Shao et al., 2011; Suzuki et al., 2011). That is, the excitation of seismic waves is spatially different in the long and short period ranges, as already discussed by Lay et al. (2012) and related studies. The Tohoku earthquake raised a new issue we have to solve on the relationship between strong motion generation and the fault rupture process, and it is an important issue for advancing source modeling for future strong motion prediction. Our previous source model consists of four SMGAs, and observed ground motions in the period range 0.1-10s are explained well by this source model. We tried to extend our source model to explain the observed ground motions in a wider period range with a simple assumption, referring to our previous study and the concept of the characterized source model (Irikura and Miyake, 2001, 2011). We obtained a characterized source model, which has four SMGAs in the deep part, one large slip area in the shallow part, and a background area with low slip. The seismic moment of this source model is equivalent to Mw9.0. The strong ground motions are simulated by the empirical Green's function method (Irikura, 1986). Though the longest period limit is restricted by the SN ratio of the EGF event (Mw~6.0) records, this new source model succeeded in reproducing the observed waveforms and Fourier amplitude spectra in the period range 0.1-50s. The location of this large slip area seems to overlap the source regions of the historical events of 1793 and 1897 off the Sanriku area. We think the source model for strong motion prediction of an Mw9 event could be constructed by the combination of hierarchical multiple asperities or source patches related to historical events in this region.

  8. Simulating double-peak hydrographs from single storms over mixed-use watersheds

    Treesearch

    Yang Yang; Theodore A. Endreny; David J. Nowak

    2015-01-01

    Two-peak hydrographs after a single rain event are observed in watersheds and storms with distinct volumes contributing as fast and slow runoff. The authors developed a hydrograph model able to quantify these separate runoff volumes to help in estimation of runoff processes and residence times used by watershed managers. The model uses parallel application of two...

  9. Estimating hypothetical present-day insured losses for past intense hurricanes in the French Antilles

    NASA Astrophysics Data System (ADS)

    Thornton, James; Desarthe, Jérémy; Naulin, Jean-Philippe; Garnier, Emmanuel; Liu, Ye; Moncoulon, David

    2015-04-01

    On the islands of the French Antilles, the period for which systematic meteorological measurements and historic event loss data are available is short relative to the recurrence intervals of very intense, damaging hurricanes. Additionally, the value of property at risk changes through time. As such, the recent past can only provide limited insight into potential losses from extreme storms in coming years. Here we present some research that seeks to overcome, as far as is possible, the limitations of record length in assessing the possible impacts of near-future hurricanes on insured properties. First, using the archives of the French overseas departments (which included administrative and weather reports, inventories of damage to houses, crops and trees, as well as some meteorological observations after 1950) we reconstructed the spatial patterns of hazard intensity associated with three historical events. They are: i) the 1928 Hurricane (Guadeloupe), ii) Hurricane Betsy (1956, Guadeloupe) and iii) Hurricane David (1979, Martinique). These events were selected because all were damaging, and the information available on each is rich. Then, using a recently developed catastrophe model for hurricanes affecting Guadeloupe, Martinique, Saint-Barthélemy and Saint-Martin, we simulated the hypothetical losses to insured properties that the reconstructed events might cause if they were to reoccur today. The model simulated damage due to wind, rainfall-induced flooding and storm surge flooding. These 'what if' scenarios provided an initial indication of the potential present-day exposure of the insurance industry to intense hurricanes. However, we acknowledge that historical events are unlikely to repeat exactly. We therefore extended the study by producing a stochastic event catalogue containing a large number of synthetic but plausible hurricane events. Instrumental data were used as a basis for event generation, but importantly the statistical methods we applied permit the extrapolation of simulated events beyond the observed intensity ranges. The event catalogue enabled the model to be run in a probabilistic mode; the losses for each synthetic event in a 10,000-year period were simulated. In this way, the aleatory uncertainty associated with future hazard outcomes was addressed. In conclusion, we consider how the reconstructed event hazard intensities and losses compare with the distribution of 32,320 events in the stochastic event set. Further comparisons are made with a longer chronology of tropical cyclones in the Antilles (going back to the 17th Century) prepared solely from documentary sources. Overall, the novelty of this work lies in the integration of data sources that are frequently overlooked in catastrophe model development and evaluation.

  10. Impact of Active Control on Passive Safety Response Characteristics of Sodium-Cooled Fast Reactors: II-Model Implementation and Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ponciroli, Roberto; Passerini, Stefano; Vilim, Richard B.

    Advanced reactors are often claimed to be passively safe against unprotected upset events. In common practice, these events are not considered in the context of the plant control system, i.e., the reactor is subjected to classes of unprotected upset events while the normally programmed response of the control system is assumed not to be present. However, this approach constitutes an oversimplification since, depending on the upset involving the control system, an actuator does not necessarily go in the same direction as needed for safety. In this work, dynamic simulations are performed to assess the degree to which the inherent self-regulating plant response is safe from active control system override. The simulations are meant to characterize the resilience of the plant to unprotected initiators. The initiators were represented and modeled as an actuator going to a hard limit. Consideration of failure is further limited to individual controllers as there is no cross-connect of signals between these controllers. The potential for passive safety override by the control system is then relegated to the single-input single-output controllers. Here, the results show that when the plant control system is designed by taking into account and quantifying the impact of the plant control system on accidental scenarios there is very limited opportunity for the preprogrammed response of the control system to override passive safety protection in the event of an unprotected initiator.

  11. Impact of Active Control on Passive Safety Response Characteristics of Sodium-Cooled Fast Reactors: II-Model Implementation and Simulations

    DOE PAGES

    Ponciroli, Roberto; Passerini, Stefano; Vilim, Richard B.

    2017-06-21

    Advanced reactors are often claimed to be passively safe against unprotected upset events. In common practice, these events are not considered in the context of the plant control system, i.e., the reactor is subjected to classes of unprotected upset events while the normally programmed response of the control system is assumed not to be present. However, this approach constitutes an oversimplification since, depending on the upset involving the control system, an actuator does not necessarily go in the same direction as needed for safety. In this work, dynamic simulations are performed to assess the degree to which the inherent self-regulating plant response is safe from active control system override. The simulations are meant to characterize the resilience of the plant to unprotected initiators. The initiators were represented and modeled as an actuator going to a hard limit. Consideration of failure is further limited to individual controllers as there is no cross-connect of signals between these controllers. The potential for passive safety override by the control system is then relegated to the single-input single-output controllers. Here, the results show that when the plant control system is designed by taking into account and quantifying the impact of the plant control system on accidental scenarios there is very limited opportunity for the preprogrammed response of the control system to override passive safety protection in the event of an unprotected initiator.

  12. Statistical Properties of SEE Rate Calculation in the Limits of Large and Small Event Counts

    NASA Technical Reports Server (NTRS)

    Ladbury, Ray

    2007-01-01

    This viewgraph presentation reviews the statistical properties of Single Event Effects (SEE) rate calculations. The goal of SEE rate calculation is to bound the SEE rate, though the question is by how much. The presentation covers: (1) understanding errors on SEE cross sections, (2) methodology: maximum likelihood and confidence contours, (3) tests with simulated data, and (4) applications.
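
    As a concrete illustration of why small event counts dominate the uncertainty, the sketch below computes the exact (Garwood) Poisson confidence interval for a cross section estimated as events per unit fluence; this is a standard construction, not necessarily the method used in the presentation, and the counts and fluence are hypothetical.

    ```python
    from scipy.stats import chi2

    def see_cross_section_ci(n_events: int, fluence_cm2: float, cl: float = 0.95):
        """Exact (Garwood) Poisson confidence interval for an SEE cross section.

        sigma = N / Phi, with two-sided bounds from the chi-square distribution:
          lower = chi2.ppf(alpha/2, 2N) / 2 / Phi
          upper = chi2.ppf(1 - alpha/2, 2N + 2) / 2 / Phi
        Handles N = 0 (upper limit only).
        """
        alpha = 1.0 - cl
        lower = 0.0 if n_events == 0 else chi2.ppf(alpha / 2.0, 2 * n_events) / 2.0 / fluence_cm2
        upper = chi2.ppf(1.0 - alpha / 2.0, 2 * n_events + 2) / 2.0 / fluence_cm2
        return n_events / fluence_cm2, lower, upper

    # 3 upsets in 1e7 ions/cm^2: the best estimate is 3e-7 cm^2, but the 95% bounds
    # span more than an order of magnitude -- the "small event count" regime.
    print(see_cross_section_ci(3, 1.0e7))
    # 0 upsets still yields a finite upper limit, which is what bounding the rate needs.
    print(see_cross_section_ci(0, 1.0e7))
    ```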

  13. JETSPIN: A specific-purpose open-source software for simulations of nanofiber electrospinning

    NASA Astrophysics Data System (ADS)

    Lauricella, Marco; Pontrelli, Giuseppe; Coluzza, Ivan; Pisignano, Dario; Succi, Sauro

    2015-12-01

    We present the open-source computer program JETSPIN, specifically designed to simulate the electrospinning process of nanofibers. Its capabilities are shown with proper reference to the underlying model, as well as a description of the relevant input variables and associated test-case simulations. The various interactions included in the electrospinning model implemented in JETSPIN are discussed in detail. The code is designed to exploit different computational architectures, from single to parallel processor workstations. This paper provides an overview of JETSPIN, focusing primarily on its structure, parallel implementations, functionality, performance, and availability.

  14. Feasibility of performing high resolution cloud-resolving simulations of historic extreme events: The San Fruttuoso (Liguria, Italy) case of 1915.

    NASA Astrophysics Data System (ADS)

    Parodi, Antonio; Boni, Giorgio; Ferraris, Luca; Gallus, William; Maugeri, Maurizio; Molini, Luca; Siccardi, Franco

    2017-04-01

    Recent studies show that highly localized and persistent back-building mesoscale convective systems represent one of the most dangerous flash-flood producing storms in the north-western Mediterranean area. Substantial warming of the Mediterranean Sea in recent decades raises concerns over possible increases in frequency or intensity of these types of events as increased atmospheric temperatures generally support increases in water vapor content. Analyses of available historical records do not provide a univocal answer, since these may be likely affected by a lack of detailed observations for older events. In the present study, 20th Century Reanalysis Project initial and boundary condition data in ensemble mode are used to address the feasibility of performing cloud-resolving simulations with 1 km horizontal grid spacing of a historic extreme event that occurred over Liguria (Italy): The San Fruttuoso case of 1915. The proposed approach focuses on the ensemble Weather Research and Forecasting (WRF) model runs, as they are the ones most likely to best simulate the event. It is found that these WRF runs generally do show wind and precipitation fields that are consistent with the occurrence of highly localized and persistent back-building mesoscale convective systems, although precipitation peak amounts are underestimated. Systematic small north-westward position errors with regard to the heaviest rain and strongest convergence areas imply that the Reanalysis members may not be adequately representing the amount of cool air over the Po Plain outflowing into the Liguria Sea through the Apennines gap. Regarding the role of historical data sources, this study shows that in addition to Reanalysis products, unconventional data, such as historical meteorological bulletins, newspapers and even photographs can be very valuable sources of knowledge in the reconstruction of past extreme events.

  15. Preparing for InSight - using the continuous seismic data flow to investigate the deep interior of Mars

    NASA Astrophysics Data System (ADS)

    Hempel, S.; Garcia, R.; Weber, R. C.; Schmerr, N. C.; Panning, M. P.; Lognonne, P. H.; Banerdt, W. B.

    2016-12-01

    Complementary to investigating ray theoretically predictable parameters to explore the deep interior of Mars (see AGU contribution by R. Weber et al.), this paper presents the waveform approach to illuminate the lowermost mantle and core-mantle boundary of Mars. In preparation to the NASA discovery mission InSight, scheduled for launch in May, 2018, we produce synthetic waveforms considering realistic combinations of sources and a single receiver, as well as noise models. Due to a lack of constraints on the scattering properties of the Martian crust and mantle, we assume Earth-like scattering as a minimum and Moon-like scattering as a maximum possibility. Various seismic attenuation models are also investigated. InSight is set up to deliver event data as well as a continuous data flow. Where ray theoretical approaches will investigate the event data, the continuous data flow may contain signals reflected multiple times off the same reflector, e.g. the underside of the lithosphere, or the core-mantle boundary. It may also contain signals of individual events not detected or interfering wavefields radiated off multiple undetected events creating 'seismic noise'. We will use AxiSEM to simulate a continuous data flow for these cases for various 1D and 2D Mars models, and explore the possibilities of seismic interferometry to use seismic information hidden in the coda to investigate the deep interior of Mars.

  16. Warp-averaging event-related potentials.

    PubMed

    Wang, K; Begleiter, H; Porjesz, B

    2001-10-01

    To align the repeated single trials of the event-related potential (ERP) in order to get an improved estimate of the ERP. A new implementation of the dynamic time warping is applied to compute a warp-average of the single trials. The trilinear modeling method is applied to filter the single trials prior to alignment. Alignment is based on normalized signals and their estimated derivatives. These features reduce the misalignment due to aligning the random alpha waves, explaining amplitude differences in latency differences, or the seemingly small amplitudes of some components. Simulations and applications to visually evoked potentials show significant improvement over some commonly used methods. The new implementation of the dynamic time warping can be used to align the major components (P1, N1, P2, N2, P3) of the repeated single trials. The average of the aligned single trials is an improved estimate of the ERP. This could lead to more accurate results in subsequent analysis.
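
    The paper's implementation additionally filters trials and aligns normalized signals and their estimated derivatives; the sketch below is only a minimal dynamic time warping alignment between two hypothetical single trials that differ in latency.

    ```python
    import numpy as np

    def dtw_path(x: np.ndarray, y: np.ndarray):
        """Minimal dynamic time warping: returns the optimal alignment path
        between 1-D signals x and y under a squared-difference local cost."""
        n, m = len(x), len(y)
        cost = np.full((n + 1, m + 1), np.inf)
        cost[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                d = (x[i - 1] - y[j - 1]) ** 2
                cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
        # Trace back the warping path from (n, m) to (1, 1).
        path, i, j = [], n, m
        while i > 0 and j > 0:
            path.append((i - 1, j - 1))
            step = np.argmin((cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1]))
            if step == 0:
                i, j = i - 1, j - 1
            elif step == 1:
                i -= 1
            else:
                j -= 1
        return path[::-1]

    # Two hypothetical single trials: the same waveform with different latencies.
    t = np.linspace(0, 1, 200)
    trial_a = np.exp(-((t - 0.45) / 0.05) ** 2)
    trial_b = np.exp(-((t - 0.55) / 0.05) ** 2)
    print(len(dtw_path(trial_a, trial_b)))
    ```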

  17. Iceberg calving as a primary source of regional‐scale glacier‐generated seismicity in the St. Elias Mountains, Alaska

    USGS Publications Warehouse

    O'Neel, Shad; Larsen, Christopher F.; Rupert, Natalia; Hansen, Roger

    2010-01-01

    Since the installation of the Alaska Regional Seismic Network in the 1970s, data analysts have noted nontectonic seismic events thought to be related to glacier dynamics. While loose associations with the glaciers of the St. Elias Mountains have been made, no detailed study of the source locations has been undertaken. We performed a two-step investigation surrounding these events, beginning with manual locations that guided an automated detection and event sifting routine. Results from the manual investigation highlight characteristics of the seismic waveforms including single-peaked (narrowband) spectra, emergent onsets, lack of distinct phase arrivals, and a predominant cluster of locations near the calving termini of several neighboring tidewater glaciers. Through these locations, comparison with previous work, analyses of waveform characteristics, frequency-magnitude statistics and temporal patterns in seismicity, we suggest calving as a source for the seismicity. Statistical properties and time series analysis of the event catalog suggest a scale-invariant process that has no single or simple forcing. These results support the idea that calving is often a response to short-lived or localized stress perturbations. Our results demonstrate the utility of passive seismic instrumentation to monitor relative changes in the rate and magnitude of iceberg calving at tidewater glaciers that may be volatile or susceptible to ensuing rapid retreat, especially when existing seismic infrastructure can be used.

  18. Energy-resolved fast neutron resonance radiography at CSNS

    NASA Astrophysics Data System (ADS)

    Tan, Zhixin; Tang, Jingyu; Jing, Hantao; Fan, Ruirui; Li, Qiang; Ning, Changjun; Bao, Jie; Ruan, Xichao; Luan, Guangyuan; Feng, Changqin; Zhang, Xianpeng

    2018-05-01

    The white neutron beamline at the China Spallation Neutron Source will be used mainly for nuclear data measurements. It will be characterized by high flux and broad energy spectra. To exploit the beamline as a neutron imaging source, we propose a liquid scintillator fiber array for fast neutron resonance radiography. The fiber detector unit has a small exposed area, which will limit the event counts and separate the events in time, thus satisfying the requirements for single-event time-of-flight (SEToF) measurement. The current study addresses the physical design criteria for ToF measurement, including flux estimation and detector response. Future development and potential application of the technology are also discussed.
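
    For reference, the sketch below applies the standard relativistic time-of-flight relation used in such SEToF measurements; the flight-path length and times are hypothetical, not the CSNS beamline values.

    ```python
    import numpy as np

    M_N_MEV = 939.565          # neutron rest mass energy [MeV]
    C_M_PER_S = 299_792_458.0  # speed of light [m/s]

    def neutron_energy_mev(flight_path_m: float, tof_s: float) -> float:
        """Relativistic neutron kinetic energy from time of flight:
        E = m_n c^2 (gamma - 1), with gamma = 1 / sqrt(1 - (L / (t c))^2)."""
        beta = flight_path_m / (tof_s * C_M_PER_S)
        gamma = 1.0 / np.sqrt(1.0 - beta ** 2)
        return M_N_MEV * (gamma - 1.0)

    # Hypothetical 77 m flight path (not necessarily the CSNS beamline length):
    for tof_us in (1.0, 5.0, 50.0):
        print(f"t = {tof_us:5.1f} us  ->  E = {neutron_energy_mev(77.0, tof_us * 1e-6):.3g} MeV")
    ```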

  19. Reconstructing the Aliso Canyon natural gas leak incident

    NASA Astrophysics Data System (ADS)

    Duren, R. M.; Yadav, V.; Verhulst, K. R.; Thorpe, A. K.; Hopkins, F. M.; Prasad, K.; Kuai, L.; Thompson, D. R.; Wong, C.; Sander, S. P.; Mueller, K. L.; Nehrkorn, T.; Lee, M.; Hulley, G. C.; Johnson, W. R.; Aubrey, A. D.; Whetstone, J. R.; Miller, C. E.

    2016-12-01

    Natural gas is a key energy source and presents significant policy challenges including energy reliability and the potential for fugitive methane emissions. The well blowout reported in October 2015 at the Aliso Canyon underground gas storage facility near Porter Ranch, California and subsequent uncontrolled venting was the largest single anthropogenic methane source known to date. Multiple independent estimates indicate that this super-emitter source rivaled the normal methane flux of the entire South Coast Air Basin (SoCAB) for several months until the well was plugged. The complexity of the event and logistical challenges - particularly in the initial weeks - presented significant barriers to estimating methane losses. Additionally, accounting for total gas lost is necessary but not sufficient for understanding the sequence of events and the controlling physical processes. We used a tiered system of observations to assess methane emissions from the Aliso Canyon incident. To generate a complete flux time-series, we applied tracer-transport models and tracer-tracer techniques to persistent, multi-year atmospheric methane observations from a network of surface in-situ and remote-sensing instruments. To study the fine spatio-temporal structure of methane plumes and understand the changing source morphology, we conducted intensive mobile surface campaigns, deployed airborne imaging spectrometers, requested special observations from two satellites, and employed large eddy simulations. Through a synthesis analysis we assessed methane fluxes from Aliso Canyon before, during and after the reported incident. We compared our fine scale spatial data with bottom-up data and reports of activity at the facility to better understand the controlling processes. We coordinated with California stakeholder agencies to validate and interpret these results and to consider the potential broader implications on underground gas storage and future priorities for methane monitoring.

  20. Laboratory generated M -6 earthquakes

    USGS Publications Warehouse

    McLaskey, Gregory C.; Kilgore, Brian D.; Lockner, David A.; Beeler, Nicholas M.

    2014-01-01

    We consider whether mm-scale earthquake-like seismic events generated in laboratory experiments are consistent with our understanding of the physics of larger earthquakes. This work focuses on a population of 48 very small shocks that are foreshocks and aftershocks of stick–slip events occurring on a 2.0 m by 0.4 m simulated strike-slip fault cut through a large granite sample. Unlike the larger stick–slip events that rupture the entirety of the simulated fault, the small foreshocks and aftershocks are contained events whose properties are controlled by the rigidity of the surrounding granite blocks rather than characteristics of the experimental apparatus. The large size of the experimental apparatus, high fidelity sensors, rigorous treatment of wave propagation effects, and in situ system calibration separates this study from traditional acoustic emission analyses and allows these sources to be studied with as much rigor as larger natural earthquakes. The tiny events have short (3–6 μs) rise times and are well modeled by simple double couple focal mechanisms that are consistent with left-lateral slip occurring on a mm-scale patch of the precut fault surface. The repeatability of the experiments indicates that they are the result of frictional processes on the simulated fault surface rather than grain crushing or fracture of fresh rock. Our waveform analysis shows no significant differences (other than size) between the M -7 to M -5.5 earthquakes reported here and larger natural earthquakes. Their source characteristics such as stress drop (1–10 MPa) appear to be entirely consistent with earthquake scaling laws derived for larger earthquakes.
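
    To put the magnitudes in perspective, the sketch below applies the standard moment-magnitude relation Mw = (2/3)(log10 M0 − 9.1), with M0 in N·m; the comparison event is illustrative.

    ```python
    import math

    def moment_from_magnitude(mw: float) -> float:
        """Seismic moment M0 [N*m] from moment magnitude via Mw = (2/3)(log10 M0 - 9.1)."""
        return 10.0 ** (1.5 * mw + 9.1)

    def magnitude_from_moment(m0_nm: float) -> float:
        return (2.0 / 3.0) * (math.log10(m0_nm) - 9.1)

    # The laboratory events span roughly M -7 to M -5.5, i.e. moments of a few
    # newton-metres or less, some fifteen orders of magnitude below a natural M 4 shock.
    for mw in (-7.0, -6.0, -5.5, 4.0):
        print(f"Mw {mw:+.1f}  ->  M0 = {moment_from_magnitude(mw):.3g} N*m")
    ```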

  1. Dynamic processes in heavy-ion collisions at intermediate energies

    NASA Astrophysics Data System (ADS)

    Prendergast, E. P.

    1999-03-01

    This thesis describes the study of the reaction dynamics in heavy-ion collisions of small nuclear systems at intermediate energies. For this, experiments were performed on 24Mg+27Al at 45 and 95 AMeV. The experiments described in this thesis were performed at the GANIL accelerator facility in Caen (France) using the Huygens detectors in conjunction with the ‘MUR’. The Huygens detectors consist of the CsI(Tl)-Wall (CIW) covering the backward hemisphere and, located at mid-rapidity, the central trigger detector (CTD), a gas chamber with microstrip read-out backed by 48 plastic scintillators. The forward region is covered by 16 of the plastic scintillators of the CTD and by the MUR, a time-of-flight wall consisting of 96 plastic scintillator sheets. In earlier experiments only fragments with atomic number, Z, greater than two could be identified in the CTD. Therefore, an investigation was done into the properties of different drift gases. The use of freon (CF4) in the drift chamber, combined with an increase of the gas pressure to 150 mbar, makes it possible to identify all particles with Z ≥ 2. Under these conditions particles with Z = 1 can only be identified to approximately 25 AMeV. The Isospin Quantum Molecular Dynamics (IQMD) model has been used to interpret the measured data. This model gives a microscopical description of heavy-ion collisions and simulates collisions on an event-by-event basis. In IQMD all protons and neutrons are represented as individual Gaussian wave packets. After initialisation the path of each nucleon is calculated for 200 fm/c, after which the simulation is stopped. At this time, nucleons which are close in space are clustered into fragments. The events generated by IQMD can then be processed by a GEANT detector simulation. This calculation takes into account the effects of the detector on the incoming particles. By using the GEANT simulation it is possible to give a direct comparison between the results of IQMD and the experimental data. The impact-parameter selection procedure, based on the charged-particle multiplicity, was studied using IQMD events and the GEANT detector simulation. This showed that indeed an impact-parameter selection can be made with this method. However, the accuracy of this selection for these small systems is not very good. In particular the central-event selection is heavily polluted by mid-central events. Only mid-central events have been studied for 24Mg+27Al at 45 and 95 AMeV. In order to study the collective flow in heavy-ion collisions, first the event plane has to be reconstructed. Again IQMD events and the GEANT detector simulation were used to investigate the effectiveness of several different event-plane reconstruction methods. It was found that an event plane can be reconstructed. The azimuthal-correlation method gives marginally the best result. With this method to reconstruct the reaction plane, the directed in-plane flow was studied. The experimental data showed a strongly reduced flow at 95 AMeV compared to 45 AMeV, in accordance with a balancing energy of 114 ± 10 AMeV as derived from literature. Finally, the reaction dynamics were studied using the azimuthal correlations and the polar-angle distributions of intermediate-mass fragments (IMFs) emitted at midrapidity, both of which do not require an event-plane reconstruction. The azimuthal correlations for the two energies are quite similar, whereas the directed in-plane flow is substantially higher at 45 AMeV than at 95 AMeV. This shows that the azimuthal correlations are insensitive to the magnitude of the directed in-plane flow. At both energies, the azimuthal-correlation functions for the various IMFs show absolute maxima at 180°, which cannot be explained by a mid-rapidity source emitting fragments independently. However, the distributions are described by IQMD. The maxima are caused either by target-projectile correlations (as in IQMD) or by momentum conservation. To describe the momentum-conservation scenario, a second model was introduced, which simulates the prompt multifragmentation of a small source. This model was fitted to the measured azimuthal-correlation functions, resulting in source sizes between 32 and 40 amu, depending on the mass of the emitted IMFs. Subsequently, the polar-angle distributions of the two models were compared to the experimental data. The distributions of the experimental data showed target- and projectile-like maxima, which cannot be described by a decaying source, but are described by IQMD. Therefore, it is concluded that the IMF production in these small systems is a dynamic process with no evidence of a mid-rapidity source.

  2. Integration of rainfall/runoff and geomorphological analyses flood hazard in small catchments: case studies from the southern Apennines (Italy)

    NASA Astrophysics Data System (ADS)

    Palumbo, Manuela; Ascione, Alessandra; Santangelo, Nicoletta; Santo, Antonio

    2017-04-01

    We present the first results of an analysis of flood hazard in ungauged mountain catchments that are associated with intensely urbanized alluvial fans. Assessment of hydrological hazard has been based on the integration of rainfall/runoff modelling of drainage basins with geomorphological analysis and mapping. Some small and steep, ungauged mountain catchments located in various areas of the southern Apennines, in southern Italy, have been chosen as test sites. In the last centuries, the selected basins have been subject to heavy and intense precipitation events, which have caused flash floods with serious damage in the associated alluvial fan areas. Available spatial information (regional technical maps, DEMs, land use maps, geological/lithological maps, orthophotos) and an automated GIS-based procedure (ArcGis tools and ArcHydro tools) have been used to extract morphological, hydrological and hydraulic parameters. Such parameters have been used to run the HEC (Hydrologic Engineering Center of the US Army Corps of Engineers) software (GeoHMS, GeoRAS, HMS and RAS) based on rainfall-runoff models, which have allowed the hydrological and hydraulic simulations. As the floods that occurred in the studied catchments have been debris-flow dominated, solid load simulation has also been performed. In order to validate the simulations, we have compared results of the modelling with the effects produced by past floods. Such effects have been quantified through estimations of both the sediment volumes within each catchment that have the potential to be mobilised (pre-event) during a sediment transfer event, and the volume of sediments delivered by the debris flows at basins' outlets (post-event). The post-event sediment volume has been quantified through post-event surveys and Lidar data. Evaluation of the pre-event sediment volumes in single catchments has been based on mapping of sediment storages that may constitute source zones of bed load transport and debris flows. For this, a methodology has been used that consists of process-based geomorphological mapping based on data derived from GIS analysis using high-resolution DEMs, field measurements and aerial photograph interpretations. Our integrated approach, which allows quantification of the flow rate and a semi-quantitative assessment of sediment that can be mobilized during hydro-meteorological events, is applied for the first time to torrential catchments of the southern Apennines and may significantly contribute to predictive studies aimed at risk mitigation in the study region.

  3. Passive longitudes of solar cosmic rays in 19-24 solar cycles

    NASA Astrophysics Data System (ADS)

    Getselev, Igor; Podzolko, Mikhail; Shatov, Pavel; Tasenko, Sergey; Skorohodov, Ilya; Okhlopkov, Viktor

    The distribution of solar proton event sources along the Carrington longitude in solar cycles 19-24 is considered. For this study an extensive database of ≈450 solar proton events has been constructed using various available sources and solar cosmic ray measurements, including the time of each event, the fluences of protons of various energies in it, and the coordinates of its source on the Sun. The analysis has shown significant inhomogeneity of the distribution. In particular, a region of “passive longitudes” has been discovered that is extensive in longitude (from ≈90-100° to 170°) and in lifetime (the whole period of observations). Of the 60 most powerful proton events during solar cycles 19-24, not more than 1 event originated from the interval of 100-170° Carrington longitude, and of another 80 “medium” events only 10 were injected from this interval. The summed proton fluence of the events whose sources belong to the interval of 90-170° amounts to only 5%, and, if the single “anomalous” powerful event is not taken into account, to just 1.2% of the total fluence for all the considered events. The existence of the extensive and stable interval of “passive” Carrington longitudes is a remarkable phenomenon in solar physics. It also confirms the physical relevance of the mean synodic period of the Sun’s rotation determined by R. C. Carrington.

  4. Room temperature solid-state quantum emitters in the telecom range.

    PubMed

    Zhou, Yu; Wang, Ziyu; Rasmita, Abdullah; Kim, Sejeong; Berhane, Amanuel; Bodrog, Zoltán; Adamo, Giorgio; Gali, Adam; Aharonovich, Igor; Gao, Wei-Bo

    2018-03-01

    On-demand, single-photon emitters (SPEs) play a key role across a broad range of quantum technologies. In quantum networks and quantum key distribution protocols, where photons are used as flying qubits, telecom wavelength operation is preferred because of the reduced fiber loss. However, despite the tremendous efforts to develop various triggered SPE platforms, a robust source of triggered SPEs operating at room temperature and the telecom wavelength is still missing. We report a triggered, optically stable, room temperature solid-state SPE operating at telecom wavelengths. The emitters exhibit high photon purity (~5% multiphoton events) and a record-high brightness of ~1.5 MHz. The emission is attributed to localized defects in a gallium nitride (GaN) crystal. The high-performance SPEs embedded in a technologically mature semiconductor are promising for on-chip quantum simulators and practical quantum communication technologies.

  5. Simulation study on single event burnout in linear doping buffer layer engineered power VDMOSFET

    NASA Astrophysics Data System (ADS)

    Yunpeng, Jia; Hongyuan, Su; Rui, Jin; Dongqing, Hu; Yu, Wu

    2016-02-01

    The addition of a buffer layer can improve the device's secondary breakdown voltage and thus the single event burnout (SEB) threshold voltage. In this paper, an N-type linear doping buffer layer is proposed. Quasi-stationary avalanche simulations and heavy ion beam simulations show that an optimized linear doping buffer layer is critical. When SEB is induced by a heavy ion impact, the electric field of a device with an optimized linear doping buffer is much lower than that of a device with an optimized constant doping buffer layer at a given buffer layer thickness and the same biasing voltages, while the secondary breakdown voltage and the parasitic bipolar turn-on current are much higher than those with the optimized constant doping buffer layer. The linear buffer layer is therefore more advantageous for improving the device's SEB performance. Project supported by the National Natural Science Foundation of China (No. 61176071), the Doctoral Fund of Ministry of Education of China (No. 20111103120016), and the Science and Technology Program of State Grid Corporation of China (No. SGRI-WD-71-13-006).

  6. Site correction of stochastic simulation in southwestern Taiwan

    NASA Astrophysics Data System (ADS)

    Lun Huang, Cong; Wen, Kuo Liang; Huang, Jyun Yan

    2014-05-01

    Peak ground acceleration (PGA) of a disastrous earthquake is of concern both in civil engineering and in seismology. At present, ground motion prediction equations are widely used by engineers for PGA estimation. However, the local site effect is another important factor in strong motion prediction. For example, in 1985 Mexico City, 400 km from the epicenter, suffered massive damage due to seismic wave amplification by the local alluvial layers (Anderson et al., 1986). Past studies have shown that the stochastic method performs well for simulating ground motion at rock sites (Beresnev and Atkinson, 1998a; Roumelioti and Beresnev, 2003). In this study, the site correction was carried out with an empirical transfer function referenced to the rock-site response from stochastic point-source (Boore, 2005) and finite-fault (Boore, 2009) methods. The errors between the simulated and observed Fourier spectra and PGA are calculated. We further compare the estimated PGA to the result calculated from a ground motion prediction equation. The earthquake data used in this study were recorded by the Taiwan Strong Motion Instrumentation Program (TSMIP) from 1991 to 2012; the study area is located in south-western Taiwan. The empirical transfer function was generated by calculating the spectral ratio between the alluvial site and a rock site (Borcherdt, 1970). Owing to the lack of a reference rock-site station in this area, the rock-site ground motion was instead generated with the stochastic point-source model. Several target events were then chosen for stochastic point-source simulation on a halfspace, and the simulated halfspace response at each station was multiplied by its empirical transfer function. Finally, we focused on two target events: the 1999 Chi-Chi earthquake (Mw=7.6) and the 2010 Jiashian earthquake (Mw=6.4). Because a large event may involve a complex rupture mechanism, the asperity and delay time of each sub-fault must be considered. Both the stochastic point-source and the finite-fault model were used to check the result of our correction.
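
    The spectral-ratio construction and its application can be sketched as follows; this is a simplified illustration (no spectral smoothing, identical record lengths and sampling assumed), not the authors' code.

        import numpy as np

        def fourier_amplitude(acc, dt):
            """One-sided Fourier amplitude spectrum of an acceleration time series."""
            return np.fft.rfftfreq(len(acc), dt), np.abs(np.fft.rfft(acc)) * dt

        def empirical_transfer_function(obs_soil, sim_rock, dt):
            """Spectral ratio of an observed alluvial-site record to a simulated rock-site record."""
            f, a_soil = fourier_amplitude(obs_soil, dt)
            _, a_rock = fourier_amplitude(sim_rock, dt)
            return f, a_soil / np.maximum(a_rock, 1e-12)   # guard against division by zero

        def apply_site_correction(sim_rock, etf):
            """Multiply a simulated rock-site spectrum by the ETF and return to the time domain."""
            spec = np.fft.rfft(sim_rock) * etf
            return np.fft.irfft(spec, n=len(sim_rock))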

  7. Regional source identification using Lagrangian stochastic particle dispersion and HYSPLIT backward-trajectory models.

    PubMed

    Koracin, Darko; Vellore, Ramesh; Lowenthal, Douglas H; Watson, John G; Koracin, Julide; McCord, Travis; DuBois, David W; Chen, L W Antony; Kumar, Naresh; Knipping, Eladio M; Wheeler, Neil J M; Craig, Kenneth; Reid, Stephen

    2011-06-01

    The main objective of this study was to investigate the capabilities of the receptor-oriented inverse mode Lagrangian Stochastic Particle Dispersion Model (LSPDM) with the 12-km resolution Mesoscale Model 5 (MM5) wind field input for the assessment of source identification from seven regions impacting two receptors located in the eastern United States. The LSPDM analysis was compared with a standard version of the Hybrid Single-Particle Lagrangian Integrated Trajectory (HYSPLIT) single-particle backward-trajectory analysis using inputs from MM5 and the Eta Data Assimilation System (EDAS) with horizontal grid resolutions of 12 and 80 km, respectively. The analysis included four 7-day summertime events in 2002; residence times in the modeling domain were computed from the inverse LSPDM runs and HYSPLIT-simulated backward trajectories started from receptor-source heights of 100, 500, 1000, 1500, and 3000 m. Statistics were derived using normalized values of LSPDM- and HYSPLIT-predicted residence times versus Community Multiscale Air Quality model-predicted sulfate concentrations used as baseline information. Of the 40 cases considered, the LSPDM identified first- and second-ranked emission region influences in 37 cases, whereas HYSPLIT-MM5 (HYSPLIT-EDAS) identified the sources in 21 (16) cases. The LSPDM produced a higher overall correlation coefficient (0.89) compared with HYSPLIT (0.55-0.62). The improvement of using the LSPDM is also seen in the overall normalized root mean square error values of 0.17 for LSPDM compared with 0.30-0.32 for HYSPLIT. The HYSPLIT backward trajectories generally tend to underestimate near-receptor sources because of a lack of stochastic dispersion of the backward trajectories and to overestimate distant sources because of a lack of treatment of dispersion. Additionally, the HYSPLIT backward trajectories showed a lack of consistency in the results obtained from different single vertical levels for starting the backward trajectories. To alleviate problems due to selection of a backward-trajectory starting level within a large complex set of 3-dimensional winds, turbulence, and dispersion, results were averaged from all heights, which yielded uniform improvement against all individual cases.
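
    The evaluation statistics quoted above (correlation and normalized RMSE against the baseline concentrations) can be computed as in the sketch below; the normalization choice is ours and the seven-region values are invented for illustration, not taken from the study.

        import numpy as np

        def evaluation_metrics(residence_times, baseline_conc):
            """Correlation and normalized RMSE between normalized source-region residence
            times (from an inverse dispersion run) and normalized baseline concentrations."""
            r = np.asarray(residence_times, float); r = r / r.sum()
            c = np.asarray(baseline_conc, float);   c = c / c.sum()
            corr = np.corrcoef(r, c)[0, 1]
            nrmse = np.sqrt(np.mean((r - c) ** 2)) / c.mean()   # normalized by the mean
            return corr, nrmse

        # Hypothetical residence times (h) and sulfate concentrations for seven source regions
        print(evaluation_metrics([4.0, 1.5, 0.8, 3.2, 0.5, 0.9, 0.3],
                                 [3.6, 1.8, 0.7, 3.0, 0.6, 1.1, 0.4]))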

  8. Measurement and Simulation of the Variation in Proton-Induced Energy Deposition in Large Silicon Diode Arrays

    NASA Technical Reports Server (NTRS)

    Howe, Christina L.; Weller, Robert A.; Reed, Robert A.; Sierawski, Brian D.; Marshall, Paul W.; Marshall, Cheryl J.; Mendenhall, Marcus H.; Schrimpf, Ronald D.

    2007-01-01

    The proton-induced charge deposition in a well characterized silicon P-i-N focal plane array is analyzed with Monte Carlo based simulations. These simulations include all physical processes, together with pile-up, to accurately describe the experimental data. Simulation results reveal important high-energy events not easily detected through experiment due to low statistics. The effects of each physical mechanism on the device response are shown for a single proton energy as well as a full proton space flux.

  9. Analysis of the French insurance market exposure to floods: a stochastic model combining river overflow and surface runoff

    NASA Astrophysics Data System (ADS)

    Moncoulon, D.; Labat, D.; Ardon, J.; Leblois, E.; Onfroy, T.; Poulard, C.; Aji, S.; Rémy, A.; Quantin, A.

    2014-09-01

    The analysis of flood exposure at a national scale for the French insurance market must combine the generation of a probabilistic event set of all possible flood situations (including those that have not yet occurred) with hazard and damage modeling. In this study, hazard and damage models are calibrated on a 1995-2010 historical event set, both for hazard results (river flow, flooded areas) and loss estimations. Thus, uncertainties in the deterministic estimation of a single event loss are known before simulating a probabilistic event set. To take into account at least 90 % of the insured flood losses, the probabilistic event set must combine the river overflow (small and large catchments) with the surface runoff, due to heavy rainfall, on the slopes of the watershed. Indeed, internal studies of the CCR (Caisse Centrale de Reassurance) claim database have shown that approximately 45 % of the insured flood losses are located inside the floodplains and 45 % outside. Another 10 % is due to sea surge floods and groundwater rise. In this approach, two independent probabilistic methods are combined to create a single flood loss distribution: a generation of fictive river flows based on the historical records of the river gauge network and a generation of fictive rain fields on small catchments, calibrated on the 1958-2010 Météo-France rain database SAFRAN. All the events in the probabilistic event sets are simulated with the deterministic model. This hazard and damage distribution is used to simulate the flood losses at the national scale for an insurance company (Macif) and to generate flood areas associated with hazard return periods. The flood maps concern river overflow and surface water runoff. Validation of these maps is conducted by comparison with the address-located claim data on a small catchment (downstream Argens).

  10. GPU Acceleration of Mean Free Path Based Kernel Density Estimators for Monte Carlo Neutronics Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burke, TImothy P.; Kiedrowski, Brian C.; Martin, William R.

    Kernel Density Estimators (KDEs) are a non-parametric density estimation technique that has recently been applied to Monte Carlo radiation transport simulations. Kernel density estimators are an alternative to histogram tallies for obtaining global solutions in Monte Carlo tallies. With KDEs, a single event, either a collision or particle track, can contribute to the score at multiple tally points with the uncertainty at those points being independent of the desired resolution of the solution. Thus, KDEs show potential for obtaining estimates of a global solution with reduced variance when compared to a histogram. Previously, KDEs have been applied to neutronics for one-group reactor physics problems and fixed source shielding applications. However, little work was done to obtain reaction rates using KDEs. This paper introduces a new form of the MFP KDE that is capable of handling general geometries. Furthermore, extending the MFP KDE to 2-D problems in continuous energy introduces inaccuracies to the solution. An ad-hoc solution to these inaccuracies is introduced that produces errors smaller than 4% at material interfaces.
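
    The central idea, that a single collision or track event spreads its score over nearby tally points through a kernel, can be illustrated with a generic one-dimensional fixed-bandwidth KDE tally; the sketch below is not the mean-free-path-based kernel developed in the paper.

        import numpy as np

        def epanechnikov(u):
            """Epanechnikov kernel; zero outside |u| <= 1."""
            return np.where(np.abs(u) <= 1.0, 0.75 * (1.0 - u ** 2), 0.0)

        def kde_tally(event_x, event_weights, tally_x, bandwidth):
            """Each event contributes to every tally point within one bandwidth,
            unlike a histogram tally where it scores in a single bin."""
            contrib = epanechnikov((tally_x[:, None] - event_x[None, :]) / bandwidth)
            return contrib @ event_weights / bandwidth

        rng = np.random.default_rng(1)
        collisions = rng.exponential(scale=2.0, size=5000)          # hypothetical collision sites (cm)
        weights = np.full(collisions.size, 1.0 / collisions.size)   # per-history weights
        grid = np.linspace(0.0, 10.0, 101)
        print(kde_tally(collisions, weights, grid, bandwidth=0.5)[:5])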

  11. Why continuous simulation? The role of antecedent moisture in design flood estimation

    NASA Astrophysics Data System (ADS)

    Pathiraja, S.; Westra, S.; Sharma, A.

    2012-06-01

    Continuous simulation for design flood estimation is increasingly becoming a viable alternative to traditional event-based methods. The advantage of continuous simulation approaches is that the catchment moisture state prior to the flood-producing rainfall event is implicitly incorporated within the modeling framework, provided the model has been calibrated and validated to produce reasonable simulations. This contrasts with event-based models in which both information about the expected sequence of rainfall and evaporation preceding the flood-producing rainfall event, as well as catchment storage and infiltration properties, are commonly pooled together into a single set of "loss" parameters which require adjustment through the process of calibration. To identify the importance of accounting for antecedent moisture in flood modeling, this paper uses a continuous rainfall-runoff model calibrated to 45 catchments in the Murray-Darling Basin in Australia. Flood peaks derived using the historical daily rainfall record are compared with those derived using resampled daily rainfall, for which the sequencing of wet and dry days preceding the heavy rainfall event is removed. The analysis shows that there is a consistent underestimation of the design flood events when antecedent moisture is not properly simulated, which can be as much as 30% when only 1 or 2 days of antecedent rainfall are considered, compared to 5% when this is extended to 60 days of prior rainfall. These results show that, in general, it is necessary to consider both short-term memory in rainfall associated with synoptic scale dependence, as well as longer-term memory at seasonal or longer time scale variability in order to obtain accurate design flood estimates.

  12. Intercomparison of Meteorological Forcing Data from Empirical and Mesoscale Model Sources in the N.F. American River Basin in northern California

    NASA Astrophysics Data System (ADS)

    Wayand, N. E.; Hamlet, A. F.; Hughes, M. R.; Feld, S.; Lundquist, J. D.

    2012-12-01

    The data required to drive distributed hydrological models are significantly limited within mountainous terrain due to a scarcity of observations. This study evaluated three common configurations of forcing data: a) one low-elevation station, combined with empirical techniques, b) gridded output from the Weather Research and Forecasting (WRF) model, and c) a combination of the two. Each configuration was evaluated within the heavily-instrumented North Fork American River Basin in northern California, during October-June 2000-2010. Simulations of streamflow and snowpack using the Distributed Hydrology Soil and Vegetation Model (DHSVM) highlighted precipitation and radiation as variables whose sources resulted in significant differences. The best source of precipitation data varied between years. On average, the performances of WRF and of the single station distributed using the Parameter Regression on Independent Slopes Model (PRISM) were not significantly different. The average percent biases in simulated streamflow were 3.4% and 0.9% for configurations a) and b) respectively, even though precipitation compared directly with gauge measurements was biased high by 6% and 17%, suggesting that gauge undercatch may explain part of the bias. Simulations of snowpack using empirically-estimated long-wave irradiance resulted in melt rates lower than those observed at high-elevation sites, while at lower elevations the same forcing caused significant mid-winter melt that was not observed (Figure 1). These results highlight the complexity of how forcing data sources impact hydrology over different areas (high vs. low elevation snow) and different time periods. Overall, results support the use of output from the WRF model over empirical techniques in regions with limited station data. FIG. 1. (a,b) Simulated SWE from DHSVM compared to observations at the Sierra Snow Lab (2100 m) and Blue Canyon (1609 m) during 2008-2009. Modeled (c,d) internal pack temperature, (e,f) downward short-wave irradiance, (g,h) downward long-wave irradiance, and (i,k) net irradiance. Note that plots e, g, and i focus on the melt season (March-May), while plots f, h, and j focus on the erroneous mid-winter melt event during January; these time periods are marked with vertical dashed lines in (a) and (b).

  13. Discovery of a bright microlensing event with planetary features towards the Taurus region: a super-Earth planet

    NASA Astrophysics Data System (ADS)

    Nucita, A. A.; Licchelli, D.; De Paolis, F.; Ingrosso, G.; Strafella, F.; Katysheva, N.; Shugarov, S.

    2018-05-01

    The transient event labelled as TCP J05074264+2447555, recently discovered towards the Taurus region, was quickly recognized to be an ongoing microlensing event on a source located at a distance of only 700-800 pc from Earth. Here, we show that observations with a high sampling rate close to the time of maximum magnification revealed features that imply the presence of a binary lens system with very low-mass-ratio components. We present a complete description of the binary lens system, which hosts an Earth-like planet with a most likely mass of 9.2 ± 6.6 M⊕. Furthermore, the estimated source location and detailed Monte Carlo simulations allowed us to classify the event as due to the closest lens system, at a distance of ≃380 pc and with a mass of ≃0.25 M⊙.
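
    For orientation, a single point lens magnifies a point source by A(u) = (u² + 2)/(u√(u² + 4)), with u the lens-source separation in Einstein radii; the event above required a full binary-lens model, but the sketch below shows the basic light-curve shape (all parameter values are illustrative).

        import numpy as np

        def point_lens_magnification(t, t0, tE, u0):
            """Paczynski light curve: magnification versus time for a single point lens."""
            u = np.sqrt(u0 ** 2 + ((t - t0) / tE) ** 2)   # separation in Einstein radii
            return (u ** 2 + 2.0) / (u * np.sqrt(u ** 2 + 4.0))

        t = np.linspace(-10.0, 10.0, 201)                 # days relative to the peak, illustrative
        print(point_lens_magnification(t, t0=0.0, tE=5.0, u0=0.05).max())   # ~20 for u0 = 0.05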

  14. Soil CO2 Fluxes Following Wetting Events: Field Observations and Modeling

    NASA Astrophysics Data System (ADS)

    O'Donnell, F. C.; Caylor, K. K.

    2009-12-01

    Carbon exchange data from eddy flux towers in drylands suggest that the Birch Effect, a pulse of soil CO2 efflux triggered by the first rain following a dry period, may contribute significantly to the annual carbon budget of these ecosystems. Laboratory experiments on dryland soils have shown that microbes adapted to live in arid ecosystems may be able to remain dormant in dry soil for much longer than expected, and an osmotic shock response to sudden increases in soil water potential may play a role in the Birch Effect. However, little has been done to understand how a dry soil profile responds to a rainfall event. We measured soil CO2 production during experimental wetting events in treatment plots at a site on the Botswana portion of the Kalahari Transect (KT). We buried small, solid-state sensors that continuously measure CO2 concentration in the soil air space at four depths and the soil surface, and applied wetting treatments intended to simulate typical rainfall for the region, including single 10 mm wettings (the mean storm depth for the KT), single 20 mm wettings, and repeated 10 mm wettings. We solved a finite difference approximation of the governing equation for CO2 in the soil airspace to determine the source rate of CO2 during and after the wetting treatments, using Richards' equation to approximate the change in air-filled porosity due to infiltrating water. The wetting treatments induced a rapid spike in the source rate of CO2 in the soil, the timing and magnitude of which were consistent with laboratory experiments that observed a microbial osmotic shock response. The source rate averaged over the first three hours after wetting showed that a 20 mm wetting produced a larger response than the 10 mm wettings. It also showed that a second wetting event produced a smaller response than the first and, though it was not significant, an upward trend in response was apparent through the two-month period. These results suggest that there may be a build-up of labile carbon in the soil during dry periods that becomes available for respiration when the soil is wetted, a hypothesis about the Birch effect that has received little attention in lab studies. Future work in this area will investigate whether or not this explanation is feasible by using glucose addition experiments to determine if the magnitude of the observed respiration pulse is affected by substrate availability.
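
    The source-rate inversion amounts to rearranging the soil-gas continuity equation so that production is what remains after storage change and diffusive transport are accounted for, S ≈ ε ∂C/∂t − D ∂²C/∂z²; the sketch below assumes uniform diffusivity and air-filled porosity, a simplification the study did not make, and uses invented numbers.

        import numpy as np

        def co2_source_rate(conc, dz, dt, diffusivity, air_porosity):
            """Recover the CO2 source term S(z, t) from measured concentration profiles.
            conc: array (n_times, n_depths) of soil-air CO2 concentrations."""
            dcdt = np.gradient(conc, dt, axis=0)
            d2cdz2 = np.gradient(np.gradient(conc, dz, axis=1), dz, axis=1)
            return air_porosity * dcdt - diffusivity * d2cdz2

        # Hypothetical profiles at four depths, hourly time step; all numbers illustrative
        conc = np.array([[400.0,  900.0, 1500.0, 2100.0],
                         [650.0, 1100.0, 1600.0, 2150.0],
                         [700.0, 1200.0, 1650.0, 2200.0]])
        print(co2_source_rate(conc, dz=0.1, dt=3600.0, diffusivity=1e-6, air_porosity=0.3))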

  15. Large Subduction Earthquake Simulations using Finite Source Modeling and the Offshore-Onshore Ambient Seismic Field

    NASA Astrophysics Data System (ADS)

    Viens, L.; Miyake, H.; Koketsu, K.

    2016-12-01

    Large subduction earthquakes have the potential to generate strong long-period ground motions. The ambient seismic field, also called seismic noise, contains information about the elastic response of the Earth between two seismic stations that can be retrieved using seismic interferometry. The DONET1 network, which is composed of 20 offshore stations, has been deployed atop the Nankai subduction zone, Japan, to continuously monitor the seismotectonic activity in this highly seismically active region. The surrounding onshore area is covered by hundreds of seismic stations, which are operated by the National Research Institute for Earth Science and Disaster Prevention (NIED) and the Japan Meteorological Agency (JMA), with a spacing of 15-20 km. We retrieve offshore-onshore Green's functions from the ambient seismic field using the deconvolution technique and use them to simulate the long-period ground motions of moderate subduction earthquakes that occurred at shallow depth. We extend the point source method, which is appropriate for moderate events, to finite source modeling to simulate the long-period ground motions of large Mw 7 class earthquake scenarios. The source models are constructed using scaling relations between moderate and large earthquakes to discretize the fault plane of the large hypothetical events into subfaults. Offshore-onshore Green's functions are spatially interpolated over the fault plane to obtain one Green's function for each subfault. The interpolated Green's functions are finally summed up considering different rupture velocities. Results show that this technique can provide additional information about earthquake ground motions that can be used with the existing physics-based simulations to improve seismic hazard assessment.
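
    The finite-source summation step can be sketched as delaying each subfault Green's function by its rupture-propagation time and stacking; the version below uses uniform subfault weights and a constant rupture velocity, which are our simplifications, and all values are hypothetical.

        import numpy as np

        def finite_fault_sum(subfault_gf, dist_from_hypocenter_km, rupture_velocity_kms, dt):
            """Stack per-subfault Green's functions with rupture-propagation delays.
            subfault_gf: array (n_subfaults, n_samples); uniform weights for simplicity."""
            n_sub, n_samp = subfault_gf.shape
            out = np.zeros(n_samp)
            for gf, dist in zip(subfault_gf, dist_from_hypocenter_km):
                delay = min(int(round(dist / rupture_velocity_kms / dt)), n_samp)
                if delay < n_samp:
                    out[delay:] += gf[:n_samp - delay]
            return out / n_sub

        # Hypothetical: 4 subfaults, 60 s of synthetic Green's functions at 10 samples/s
        rng = np.random.default_rng(2)
        gfs = rng.standard_normal((4, 600)) * np.exp(-np.linspace(0.0, 6.0, 600))
        print(finite_fault_sum(gfs, [0.0, 5.0, 10.0, 15.0], rupture_velocity_kms=2.7, dt=0.1)[:5])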

  16. Tsunami geology in paleoseismology

    USGS Publications Warehouse

    Yuichi Nishimura,; Jaffe, Bruce E.

    2015-01-01

    The 2004 Indian Ocean and 2011 Tohoku-oki disasters dramatically demonstrated the destructiveness and deadliness of tsunamis. For the assessment of future risk posed by tsunamis it is necessary to understand past tsunami events. Recent work on tsunami deposits has provided new information on paleotsunami events, including their recurrence interval and the size of the tsunamis (e.g. [187–189]). Tsunamis are observed not only on the margins of oceans but also in lakes. The majority of tsunamis are generated by earthquakes, but other events that displace water, such as landslides and volcanic eruptions, can also generate tsunamis. These non-earthquake tsunamis occur less frequently than earthquake tsunamis; it is, therefore, very important to find and study geologic evidence for past eruption- and submarine-landslide-triggered tsunami events, as their rare occurrence may lead to risks being underestimated. Geologic investigations of tsunamis have historically relied on earthquake geology. Geophysicists estimate the parameters of vertical coseismic displacement that tsunami modelers use as a tsunami's initial condition, and the modelers then let the simulated tsunami run ashore. This approach suffers from the fact that the relationship between the earthquake and the seafloor displacement, the pertinent parameter in tsunami generation, is equivocal. In recent years, geologic investigations of tsunamis have added sedimentology and micropaleontology, which focus on identifying and interpreting depositional and erosional features of tsunamis. For example, coastal sediment may contain deposits that provide important information on past tsunami events [190, 191]. In some cases, a tsunami is recorded by a single sand layer. Elsewhere, tsunami deposits can consist of complex layers of mud, sand, and boulders, containing abundant stratigraphic evidence for sediment reworking and redeposition. These onshore sediments are geologic evidence for tsunamis and are called ‘tsunami deposits’ (Figs. 26 and 27). Tsunami deposits can be classified into two groups: modern tsunami deposits and paleotsunami deposits. A modern tsunami deposit is a deposit whose source event is known. A paleotsunami deposit is a deposit whose age is estimated and whose source is either inferred to be a historical event or is unknown.

  17. Perspectives on individual to ensembles of ambient fine and ultrafine particles and their sources

    NASA Astrophysics Data System (ADS)

    Bein, Keith James

    By combining Rapid Single-ultrafine-particle Mass Spectrometry (RSMS) measurements during the Pittsburgh Supersite experiment with a large array of concurrent PM, gas and meteorological data, a synthesis of data and analyses is employed to characterize sources, emission trends and dynamics of ambient fine and ultrafine particles. Combinatorial analyses elicit individual to ensemble descriptions of particles, their sources, their changes in state from atmospheric processing and the scales of motion driving their transport and dynamics. Major results include (1) Particle size and composition are strong indicators of sources/source categories and real-time measurements allow source attribution at the single particle and point source level. (2) Single particle source attribution compares well to factor analysis of chemically-speciated bulk phase data and both resulted in similar conclusions but independently revealed new sources. (3) RSMS data can quantitatively estimate composition-resolved, number-based particle size distribution. Comparison to mass-based data yielded new information about physical and chemical properties of particles and instrument sensitivity. (4) Source-specific signatures and real-time monitoring allow passing plumes to be tracked and characterized. (5) The largest of three identified coal combustion sources emits ~2.4 x 10^17 primary submicron particles per second. (6) Long-range transport has a significant impact on the eastern U.S. including specific influences of eight separate wildfire events. (7) Pollutant dynamics in the Pittsburgh summertime air shed, and Northeastern U.S., is characterized by alternating periods of stagnation and cleansing. The eight wildfire events were detected in between seven successive stagnation events. (8) Connections exist between boreal fire activity, southeast subsiding transport of the emissions, alternating periods of stagnation and cleansing at the receptor and the structure and propagation of extratropical waves. (9) Wildfire emissions can severely impact preexisting pollutant concentrations and physical and chemical processes at the receptor. (10) High-severity crown fires in boreal Canada emit ~1.2 x 10^15 particles/kg biomass burned. (11) In 1998, wildfire activity in the circumpolar boreal forest emitted ~8 x 10^26 particles, representing ~14% of global wildland fire emissions. Results and conclusions address future scientific objectives in understanding effects of particles on human health and global climate change.

  18. Radiation Effects in Advanced Multiple Gate and Silicon-on-Insulator Transistors

    NASA Astrophysics Data System (ADS)

    Simoen, Eddy; Gaillardin, Marc; Paillet, Philippe; Reed, Robert A.; Schrimpf, Ron D.; Alles, Michael L.; El-Mamouni, Farah; Fleetwood, Daniel M.; Griffoni, Alessio; Claeys, Cor

    2013-06-01

    The aim of this review paper is to describe in a comprehensive manner the current understanding of the radiation response of state-of-the-art Silicon-on-Insulator (SOI) and FinFET CMOS technologies. Total Ionizing Dose (TID) response, heavy-ion microdose effects and single-event effects (SEEs) will be discussed. It is shown that a very high TID tolerance can be achieved by narrow-fin SOI FinFET architectures, while bulk FinFETs may exhibit similar TID response to the planar devices. Due to the vertical nature of FinFETs, a specific heavy-ion response can be obtained, whereby the angle of incidence becomes highly important with respect to the vertical sidewall gates. With respect to SEE, the buried oxide in the SOI FinFETs suppresses the diffusion tails from the charge collection in the substrate compared to the planar bulk FinFET devices. Channel lengths and fin widths are now comparable to, or smaller than the dimensions of the region affected by the single ionizing ions or lasers used in testing. This gives rise to a high degree of sensitivity to individual device parameters and source-drain shunting during ion-beam or laser-beam SEE testing. Simulations are used to illuminate the mechanisms observed in radiation testing and the progress and needs for the numerical modeling/simulation of the radiation response of advanced SOI and FinFET transistors are highlighted.

  19. Effect of Binary Source Companions on the Microlensing Optical Depth Determination toward the Galactic Bulge Field

    NASA Astrophysics Data System (ADS)

    Han, Cheongho

    2005-11-01

    Currently, gravitational microlensing survey experiments toward the Galactic bulge field use two different methods of minimizing the blending effect for the accurate determination of the optical depth τ. One is measuring τ based on clump giant (CG) source stars, and the other is using "difference image analysis" (DIA) photometry to measure the unblended source flux variation. Despite the expectation that the two estimates should be the same assuming that blending is properly considered, the estimates based on CG stars systematically fall below the DIA results based on all events with source stars down to the detection limit. Prompted by the gap, we investigate the previously unconsidered effect of companion-associated events on τ determination. Although the image of a companion is blended with that of its primary star and thus not resolved, the event associated with the companion can be detected if the companion flux is highly magnified. Therefore, companions work effectively as source stars to microlensing, and thus the neglect of them in the source star count could result in a wrong τ estimation. By carrying out simulations based on the assumption that companions follow the same luminosity function as primary stars, we estimate that the contribution of the companion-associated events to the total event rate is ~5f_bi% for current surveys and can reach up to ~6f_bi% for future surveys monitoring fainter stars, where f_bi is the binary frequency. Therefore, we conclude that the companion-associated events comprise a nonnegligible fraction of all events. However, their contribution to the optical depth is not large enough to explain the systematic difference between the optical depth estimates based on the two different methods.

  20. Impact of a single drop on the same liquid: formation, growth and disintegration of jets

    NASA Astrophysics Data System (ADS)

    Agbaglah, G. Gilou; Deegan, Robert

    2015-11-01

    One of the simplest splashing scenarios results from the impact of a single drop on the same liquid. The traditional understanding of this process is that the impact generates a jet that later breaks up into secondary droplets. Recently it was shown that even this simplest of scenarios is more complicated than expected because multiple jets can be generated from a single impact event and there are bifurcations in the multiplicity of jets. First, we study the formation, growth and disintegration of jets following the impact of a drop on a thin film of the same liquid using a combination of numerical simulations and linear stability theory. We obtain scaling relations from our simulations and use these as inputs to our stability analysis. We also use experiments and numerical simulations of a single drop impacting on a deep pool to examine the bifurcation from a single jet into two jets. Using high-speed X-ray imaging methods we show that vortex separation within the drop leads to the formation of a second jet long after the formation of the ejecta sheet.

  1. Autobiographical memory sources of threats in dreams.

    PubMed

    Lafrenière, Alexandre; Lortie-Lussier, Monique; Dale, Allyson; Robidoux, Raphaëlle; De Koninck, Joseph

    2018-02-01

    Temporal sources of dream threats were examined through the paradigm of the Threat Simulation Theory. Two groups of young adults (18-24 years old), who did not experience severe threatening events in the year preceding their dream and reported a dream either with or without threats, were included. Participants (N = 119) kept a log of daily activities and a dream diary, indicating whether dream components referred to past experiences. The occurrence of oneiric threats correlated with the reporting of threats in the daily logs, their average severity, and the stress level experienced the day preceding the dream. The group whose dreams contained threats had significantly more references to temporal categories beyond one year than the group with dreams without threats. Our findings suggest that in the absence of recent highly negative emotional experiences, the threat simulation system selects memory traces of threatening events experienced in the past. Copyright © 2017 Elsevier Inc. All rights reserved.

  2. Methods and Model Dependency of Extreme Event Attribution: The 2015 European Drought

    NASA Astrophysics Data System (ADS)

    Hauser, Mathias; Gudmundsson, Lukas; Orth, René; Jézéquel, Aglaé; Haustein, Karsten; Vautard, Robert; van Oldenborgh, Geert J.; Wilcox, Laura; Seneviratne, Sonia I.

    2017-10-01

    Science on the role of anthropogenic influence on extreme weather events, such as heatwaves or droughts, has evolved rapidly in recent years. The approach of "event attribution" compares the occurrence-probability of an event in the present, factual climate with its probability in a hypothetical, counterfactual climate without human-induced climate change. Several methods can be used for event attribution, based on climate model simulations and observations, and usually researchers only assess a subset of methods and data sources. Here, we explore the role of methodological choices for the attribution of the 2015 meteorological summer drought in Europe. We present contradictory conclusions on the relevance of human influence as a function of the chosen data source and event attribution methodology. Assessments using the maximum number of models and counterfactual climates with pre-industrial greenhouse gas concentrations point to an enhanced drought risk in Europe. However, other evaluations show contradictory evidence. These results highlight the need for a multi-model and multi-method framework in event attribution research, especially for events with a low signal-to-noise ratio and high model dependency such as regional droughts.
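
    Such attribution statements are commonly summarized as a probability (risk) ratio between the factual and counterfactual climates; the sketch below estimates it by counting how often a drought index falls below a threshold in two ensembles, with all numbers invented for illustration.

        import numpy as np

        def probability_ratio(factual, counterfactual, threshold):
            """PR = P(event | factual) / P(event | counterfactual); for drought, the
            'event' is taken here as the index falling below the threshold."""
            p1 = np.mean(np.asarray(factual) <= threshold)
            p0 = np.mean(np.asarray(counterfactual) <= threshold)
            return p1 / p0 if p0 > 0 else float("inf")

        rng = np.random.default_rng(3)
        factual = rng.normal(-0.3, 1.0, 1000)         # hypothetical present-climate drought index
        counterfactual = rng.normal(0.0, 1.0, 1000)   # hypothetical pre-industrial ensemble
        print(probability_ratio(factual, counterfactual, threshold=-1.5))   # > 1: enhanced risk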

  3. The Influence of Aerosol Hygroscopicity on Precipitation Intensity During a Mesoscale Convective Event

    NASA Astrophysics Data System (ADS)

    Kawecki, Stacey; Steiner, Allison L.

    2018-01-01

    We examine how aerosol composition affects precipitation intensity using the Weather Research and Forecasting model with Chemistry (version 3.6). By changing the prescribed default hygroscopicity values to updated values from laboratory studies, we test model assumptions about the individual component hygroscopicity values of ammonium, sulfate, nitrate, and organic species. We compare a baseline simulation (BASE, using default hygroscopicity values) with four sensitivity simulations (SULF, increasing the sulfate hygroscopicity; ORG, decreasing organic hygroscopicity; SWITCH, using a concentration-dependent hygroscopicity value for ammonium; and ALL, including all three changes) to understand the role of aerosol composition on precipitation during a mesoscale convective system (MCS). Overall, the hygroscopicity changes influence the spatial patterns of precipitation and the intensity. Focusing on the maximum precipitation in the model domain downwind of an urban area, we find that changing the individual component hygroscopicities leads to bulk hygroscopicity changes, especially in the ORG simulation. Reducing bulk hygroscopicity (e.g., ORG simulation) initially causes fewer activated drops, weakened updrafts in the midtroposphere, and increased precipitation from larger hydrometeors. Increasing bulk hygroscopicity (e.g., SULF simulation) simulates more numerous and smaller cloud drops and increases precipitation. In the ALL simulation, a stronger cold pool and downdrafts lead to precipitation suppression later in the MCS evolution. In this downwind region, the combined changes in hygroscopicity (ALL) reduce the overprediction of intense events (>70 mm d-1) and better capture the range of moderate intensity (30-60 mm d-1) events. The results of this single MCS analysis suggest that aerosol composition can play an important role in simulating high-intensity precipitation events.
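
    Bulk hygroscopicity in such schemes is typically the volume-weighted average of the component hygroscopicity parameters (the κ mixing rule); the sketch below illustrates that rule only, with round literature-style component values rather than the ones tested in the study.

        import numpy as np

        def bulk_kappa(volume_fractions, component_kappas):
            """Volume-weighted mixing rule: kappa_bulk = sum_i eps_i * kappa_i."""
            eps = np.asarray(volume_fractions, dtype=float)
            eps = eps / eps.sum()                            # normalize volume fractions
            return float(np.sum(eps * np.asarray(component_kappas, dtype=float)))

        # Hypothetical internally mixed particle: sulfate, nitrate, organics
        print(bulk_kappa([0.3, 0.2, 0.5], [0.6, 0.67, 0.10]))   # lowering the organic kappa lowers the bulk value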

  4. Estimate of radiation damage to low-level electronics of the RF system in the LHC cavities arising from beam gas collisions.

    PubMed

    Butterworth, A; Ferrari, A; Tsoulou, E; Vlachoudis, V; Wijnands, T

    2005-01-01

    Monte Carlo simulations have been performed to estimate the radiation damage induced by high-energy hadrons in the digital electronics of the RF low-level systems in the LHC cavities. High-energy hadrons are generated when the proton beams interact with the residual gas. The contributions from various elements (vacuum chambers, cryogenic cavities, wideband pickups and cryomodule beam tubes) have been considered individually, with each contribution depending on the gas composition and density. The probability of displacement damage and single event effects (mainly single event upsets) is derived for the LHC start-up conditions.

  5. Robust Covariate-Adjusted Log-Rank Statistics and Corresponding Sample Size Formula for Recurrent Events Data

    PubMed Central

    Song, Rui; Kosorok, Michael R.; Cai, Jianwen

    2009-01-01

    Summary Recurrent events data are frequently encountered in clinical trials. This article develops robust covariate-adjusted log-rank statistics applied to recurrent events data with arbitrary numbers of events under independent censoring, and the corresponding sample size formula. The proposed log-rank tests are robust with respect to different data-generating processes and are adjusted for predictive covariates. The method reduces to the Kong and Slud (1997, Biometrika 84, 847–862) setting in the case of a single event. The sample size formula is derived based on the asymptotic normality of the covariate-adjusted log-rank statistics under certain local alternatives and a working model for baseline covariates in the recurrent event data context. When the effect size is small and the baseline covariates do not contain significant information about event times, it reduces to the same form as that of Schoenfeld (1983, Biometrics 39, 499–503) for cases of a single event or independent event times within a subject. We carry out simulations to study the control of type I error and to compare the power of several methods in finite samples. The proposed sample size formula is illustrated using data from an rhDNase study. PMID:18162107
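
    In the single-event special case the required number of events takes the familiar Schoenfeld form d = (z_{1-α/2} + z_{1-β})² / (p₁ p₂ (log θ)²), with p₁ and p₂ the allocation proportions and θ the hazard ratio; the sketch below implements that classical formula, not the covariate-adjusted recurrent-events version derived in the article.

        import math
        from scipy.stats import norm

        def schoenfeld_events(hazard_ratio, alpha=0.05, power=0.80, p1=0.5):
            """Required number of events for a two-arm log-rank test (single-event case)."""
            p2 = 1.0 - p1
            z = norm.ppf(1.0 - alpha / 2.0) + norm.ppf(power)
            return math.ceil(z ** 2 / (p1 * p2 * math.log(hazard_ratio) ** 2))

        print(schoenfeld_events(hazard_ratio=0.75))   # about 380 events for 80% power at two-sided 5%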

  6. Site correction of a high-frequency strong-ground-motion simulation based on an empirical transfer function

    NASA Astrophysics Data System (ADS)

    Huang, Jyun-Yan; Wen, Kuo-Liang; Lin, Che-Min; Kuo, Chun-Hsiang; Chen, Chun-Te; Chang, Shuen-Chiang

    2017-05-01

    In this study, an empirical transfer function (ETF), defined as the difference in Fourier amplitude spectra between observed strong ground motion and synthetic motion obtained by a stochastic point-source simulation technique, is constructed for the Taipei Basin, Taiwan. The baseline stochastic point-source simulations can be treated as the reference rock-site condition in order to account for site effects. The parameters of the stochastic point-source approach related to source and path effects are taken from previous well-verified studies. A database of shallow, small-magnitude earthquakes is selected to construct the ETFs so that the point-source approach for synthetic motions might be more widely applicable. The high-frequency synthetic motion obtained from the ETF procedure is site-corrected in the strong site-response area of the Taipei Basin. The site-response characteristics of the ETF show responses similar to those of previous studies, which indicates that the base synthetic model is suitable for the reference rock conditions in the Taipei Basin. The dominant frequency contour corresponds to the shape of the bottom of the geological basement (the top of the Tertiary period), which is the Sungshan formation. Two clear high-amplification areas are identified in the deepest region of the Sungshan formation, as shown by the amplification contour at 0.5 Hz, while a high-amplification area shifts to the basin's edge in the amplification contour at 2.0 Hz. Three target earthquakes with different source conditions relative to the ETF database, including shallow small-magnitude events, shallow and relatively large-magnitude events, and deep small-magnitude events, are tested to verify the site correction. The results indicate that ETF-based site correction is effective for shallow earthquakes, even those with higher magnitudes, but is not suitable for deep earthquakes. Finally, one of the most significant shallow large-magnitude earthquakes (the 1999 Chi-Chi earthquake in Taiwan) is verified in this study. A finite-fault stochastic simulation technique is applied, owing to the complexity of the fault rupture process of the Chi-Chi earthquake, and the ETF-based site-correction function is applied to obtain a precise simulation of high-frequency (up to 10 Hz) strong motions. The high-frequency prediction shows good agreement in both the time and frequency domains, and the prediction level is the same as that of the site-corrected ground motion prediction equation.

  7. Laboratory investigation of flux reduction from dense non-aqueous phase liquid (DNAPL) partial source zone remediation by enhanced dissolution.

    PubMed

    Kaye, Andrew J; Cho, Jaehyun; Basu, Nandita B; Chen, Xiaosong; Annable, Michael D; Jawitz, James W

    2008-11-14

    This study investigated the benefits of partial removal of dense nonaqueous phase liquid (DNAPL) source zones using enhanced dissolution in eight laboratory scale experiments. The benefits were assessed by characterizing the relationship between reductions in DNAPL mass and the corresponding reduction in contaminant mass flux. Four flushing agents were evaluated in eight controlled laboratory experiments to examine the effects of displacement fluid property contrasts and associated override and underride on contaminant flux reduction (R(j)) vs. mass reduction (R(m)) relationships (R(j)(R(m))): 1) 50% ethanol/50% water (less dense than water), 2) 40% ethyl-lactate/60% water (more dense than water), 3) 18% ethanol/26% ethyl-lactate/56% water (neutrally buoyant), and 4) 2% Tween-80 surfactant (also neutrally buoyant). For each DNAPL architecture evaluated, replicate experiments were conducted where source zone dissolution was conducted with a single flushing event to remove most of the DNAPL from the system, and with multiple shorter-duration floods to determine the path of the R(j)(R(m)) relationship. All of the single-flushing experiments exhibited similar R(j)(R(m)) relationships indicating that override and underride effects associated with cosolvents did not significantly affect the remediation performance of the agents. The R(j)(R(m)) relationship of the multiple injection experiments for the cosolvents with a density contrast with water tended to be less desirable in the sense that there was less R(j) for a given R(m). UTCHEM simulations supported the observations from the laboratory experiments and demonstrated the capability of this model to predict R(j)(R(m)) relationships for non-uniformly distributed NAPL sources.

  8. Tsunami evacuation plans for future megathrust earthquakes in Padang, Indonesia, considering stochastic earthquake scenarios

    NASA Astrophysics Data System (ADS)

    Muhammad, Ario; Goda, Katsuichiro; Alexander, Nicholas A.; Kongko, Widjo; Muhari, Abdul

    2017-12-01

    This study develops tsunami evacuation plans in Padang, Indonesia, using a stochastic tsunami simulation method. The stochastic results are based on multiple earthquake scenarios for different magnitudes (Mw 8.5, 8.75, and 9.0) that reflect asperity characteristics of the 1797 historical event in the same region. The generation of the earthquake scenarios involves probabilistic models of earthquake source parameters and stochastic synthesis of earthquake slip distributions. In total, 300 source models are generated to produce comprehensive tsunami evacuation plans in Padang. The tsunami hazard assessment results show that Padang may face significant tsunamis causing the maximum tsunami inundation height and depth of 15 and 10 m, respectively. A comprehensive tsunami evacuation plan - including horizontal evacuation area maps, assessment of temporary shelters considering the impact due to ground shaking and tsunami, and integrated horizontal-vertical evacuation time maps - has been developed based on the stochastic tsunami simulation results. The developed evacuation plans highlight that comprehensive mitigation policies can be produced from the stochastic tsunami simulation for future tsunamigenic events.

  9. The influence of preferential flow on pressure propagation and landslide triggering of the Rocca Pitigliana landslide

    NASA Astrophysics Data System (ADS)

    Shao, Wei; Bogaard, Thom; Bakker, Mark; Berti, Matteo

    2016-12-01

    The fast pore water pressure response to rain events is an important triggering factor for slope instability. The fast pressure response may be caused by preferential flow that bypasses the soil matrix. Currently, most hydro-mechanical models simulate pore water pressure using a single-permeability model, which cannot quantify the effects of preferential flow on pressure propagation and landslide triggering. Previous studies showed that a model based on the linear-diffusion equation can simulate the fast pressure propagation in near-saturated landslides such as the Rocca Pitigliana landslide. In such a model, the diffusion coefficient depends on the degree of saturation, which makes it difficult to use the model for predictions. In this study, the influence of preferential flow on pressure propagation and slope stability is investigated with a 1D dual-permeability model coupled with an infinite-slope stability approach. The dual-permeability model uses two modified Darcy-Richards equations to simultaneously simulate the matrix flow and preferential flow in hillslopes. The simulated pressure head is used in an infinite-slope stability analysis to identify the influence of preferential flow on the fast pressure response and landslide triggering. The dual-permeability model simulates the height and arrival of the pressure peak reasonably well. Performance of the dual-permeability model is as good as or better than that of the linear-diffusion model, even though the dual-permeability model is calibrated on two single-pulse rain events only, while the linear-diffusion model is calibrated for each rain event separately. In conclusion, the 1D dual-permeability model is a promising tool for landslides under similar conditions.
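
    The infinite-slope stability check that consumes the simulated pressure heads is compact; the sketch below uses the standard effective-stress form FS = [c' + (γ z cos²β − u) tanφ'] / (γ z sinβ cosβ), with all parameter values purely illustrative rather than those of the Rocca Pitigliana case.

        import numpy as np

        def factor_of_safety(pressure_head_m, slope_deg, depth_m, cohesion_pa, phi_deg,
                             gamma_soil=19.0e3, gamma_water=9.81e3):
            """Infinite-slope factor of safety driven by a simulated pore-pressure head."""
            beta, phi = np.radians(slope_deg), np.radians(phi_deg)
            u = gamma_water * np.maximum(pressure_head_m, 0.0)   # suction ignored in this sketch
            normal = gamma_soil * depth_m * np.cos(beta) ** 2
            driving = gamma_soil * depth_m * np.sin(beta) * np.cos(beta)
            return (cohesion_pa + (normal - u) * np.tan(phi)) / driving

        # Hypothetical slope: 30 deg, 2 m slip depth, c' = 5 kPa, phi' = 32 deg
        for h in (0.0, 0.5, 1.5):     # rising pressure head (m) during a rain event
            print(h, round(float(factor_of_safety(h, 30.0, 2.0, 5.0e3, 32.0)), 2))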

  10. The impact of clustering and angular resolution on far-infrared and millimeter continuum observations

    NASA Astrophysics Data System (ADS)

    Béthermin, Matthieu; Wu, Hao-Yi; Lagache, Guilaine; Davidzon, Iary; Ponthieu, Nicolas; Cousin, Morgane; Wang, Lingyu; Doré, Olivier; Daddi, Emanuele; Lapi, Andrea

    2017-11-01

    Follow-up observations at high angular resolution of bright submillimeter galaxies selected from deep extragalactic surveys have shown that the single-dish sources are composed of a blend of several galaxies. Consequently, number counts derived from low- and high-angular-resolution observations are in tension. This demonstrates the importance of resolution effects at these wavelengths and the need for realistic simulations to explore them. We built a new 2 deg2 simulation of the extragalactic sky from the far-infrared to the submillimeter. It is based on an updated version of the 2SFM (two star-formation modes) galaxy evolution model. Using global galaxy properties generated by this model, we used an abundance-matching technique to populate a dark-matter lightcone and thus simulate the clustering. We produced maps from this simulation and extracted the sources, and we show that the limited angular resolution of single-dish instruments has a strong impact on (sub)millimeter continuum observations. Taking into account these resolution effects, we reproduce a large set of observables, such as number counts, their evolution with redshift, and cosmic infrared background power spectra. Our simulation consistently describes the number counts from single-dish telescopes and interferometers. In particular, at 350 and 500 μm, we find that the number counts measured by Herschel between 5 and 50 mJy are biased towards high values by a factor of 2, and that the redshift distributions are biased towards low redshifts. We also show that the clustering has an important impact on the Herschel pixel histogram used to derive number counts from P(D) analysis. We find that the brightest galaxy in the beam of a 500 μm Herschel source contributes on average only 60% of the Herschel flux density, but that this number rises to 95% for future millimeter surveys on 30 m-class telescopes (e.g., NIKA2 at IRAM). Finally, we show that the large number density of red Herschel sources found in observations but not in models might be an observational artifact caused by the combination of noise, resolution effects, and the steepness of the color and flux density distributions. Our simulation, called the Simulated Infrared Dusty Extragalactic Sky (SIDES), is publicly available at http://cesam.lam.fr/sides.

  11. Investigation of 2‐stage meta‐analysis methods for joint longitudinal and time‐to‐event data through simulation and real data application

    PubMed Central

    Tudur Smith, Catrin; Gueyffier, François; Kolamunnage‐Dona, Ruwanthi

    2017-01-01

    Background Joint modelling of longitudinal and time‐to‐event data is often preferred over separate longitudinal or time‐to‐event analyses as it can account for study dropout, error in longitudinally measured covariates, and correlation between longitudinal and time‐to‐event outcomes. The joint modelling literature focuses mainly on the analysis of single studies with no methods currently available for the meta‐analysis of joint model estimates from multiple studies. Methods We propose a 2‐stage method for meta‐analysis of joint model estimates. These methods are applied to the INDANA dataset to combine joint model estimates of systolic blood pressure with time to death, time to myocardial infarction, and time to stroke. Results are compared to meta‐analyses of separate longitudinal or time‐to‐event models. A simulation study is conducted to contrast separate versus joint analyses over a range of scenarios. Results Using the real dataset, similar results were obtained by using the separate and joint analyses. However, the simulation study indicated a benefit of use of joint rather than separate methods in a meta‐analytic setting where association exists between the longitudinal and time‐to‐event outcomes. Conclusions Where evidence of association between longitudinal and time‐to‐event outcomes exists, results from joint models over standalone analyses should be pooled in 2‐stage meta‐analyses. PMID:29250814

  12. SRAM Based Re-programmable FPGA for Space Applications

    NASA Technical Reports Server (NTRS)

    Wang, J. J.; Sun, J. S.; Cronquist, B. E.; McCollum, J. L.; Speers, T. M.; Plants, W. C.; Katz, R. B.

    1999-01-01

    An SRAM (static random access memory)-based reprogrammable FPGA (field programmable gate array) is investigated for space applications. A new commercial prototype, named the RS family, was used as an example for the investigation. The device is fabricated in a 0.25 micrometer CMOS technology. Its architecture is reviewed to provide a better understanding of the impact of single event upset (SEU) on the device during operation. The SEU effect of the different memories available on the device is evaluated. Heavy ion test data and SPICE simulations are used together to extract the threshold LET (linear energy transfer). Together with the saturation cross-section measurement from the layout, a rate prediction is done for each memory type. SEU in the configuration SRAM is identified as the dominant failure mode and is discussed in detail. The single event transient error in combinational logic is also investigated and simulated with SPICE. SEU mitigation by hardening the memories and employing EDAC (error detection and correction) at the device level is presented. For the configuration SRAM (CSRAM) cell, the trade-off between resistor de-coupling and redundancy hardening techniques is investigated, with interesting results. Preliminary heavy ion test data show no sign of SEL (single event latch-up). With regard to ionizing radiation effects, the increase in static leakage current (static I_CC) measured indicates a device tolerance of approximately 50 krad(Si).

  13. Towards tracer dose reduction in PET studies: Simulation of dose reduction by retrospective randomized undersampling of list-mode data.

    PubMed

    Gatidis, Sergios; Würslin, Christian; Seith, Ferdinand; Schäfer, Jürgen F; la Fougère, Christian; Nikolaou, Konstantin; Schwenzer, Nina F; Schmidt, Holger

    2016-01-01

    Optimization of tracer dose regimes in positron emission tomography (PET) imaging is a trade-off between diagnostic image quality and radiation exposure. The challenge lies in defining minimal tracer doses that still result in sufficient diagnostic image quality. In order to find such minimal doses, it would be useful to simulate tracer dose reduction, as this would enable studying the effects of tracer dose reduction on image quality in single patients without repeated injections of different amounts of tracer. The aim of our study was to introduce and validate a method for simulating low-dose PET images, enabling direct comparison of different tracer doses in single patients and under constant influencing factors. (18)F-fluoride PET data were acquired on a combined PET/magnetic resonance imaging (MRI) scanner. PET data were stored together with the temporal information of the occurrence of single events (list-mode format). A predefined proportion of PET events were then randomly deleted, resulting in undersampled PET data. These data sets were subsequently reconstructed, resulting in simulated low-dose PET images (retrospective undersampling of list-mode data). This approach was validated in phantom experiments by visual inspection and by comparison of the PET quality metrics contrast recovery coefficient (CRC), background variability (BV) and signal-to-noise ratio (SNR) between measured and simulated PET images for different activity concentrations. In addition, reduced-dose PET images of a clinical (18)F-FDG PET dataset were simulated using the proposed approach. (18)F-PET image quality degraded with decreasing activity concentrations, with comparable visual image characteristics in measured and in corresponding simulated PET images. This result was confirmed by quantification of image quality metrics. CRC, SNR and BV showed concordant behavior with decreasing activity concentrations for measured and for corresponding simulated PET images. Simulation of dose-reduced datasets based on clinical (18)F-FDG PET data demonstrated the clinical applicability of the proposed method. Simulation of PET tracer dose reduction is thus possible with retrospective undersampling of list-mode data. The resulting simulated low-dose images have characteristics equivalent to those of PET images actually measured at lower doses and can be used to derive optimal tracer dose regimes.
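
    The retrospective undersampling itself is conceptually simple: each recorded coincidence event is kept with probability equal to the simulated dose fraction. The sketch below uses a generic event table as a stand-in for the scanner's actual list-mode format.

        import numpy as np

        def undersample_listmode(events, dose_fraction, seed=0):
            """Randomly keep a fraction of list-mode events to emulate a lower injected dose.
            events: 2-D array with one row per coincidence event."""
            rng = np.random.default_rng(seed)
            keep = rng.random(len(events)) < dose_fraction
            return events[keep]

        # Hypothetical list-mode table: (time_ms, crystal_a, crystal_b) for one million events
        rng = np.random.default_rng(4)
        events = np.column_stack([np.sort(rng.uniform(0.0, 6.0e5, 1_000_000)),
                                  rng.integers(0, 30_000, 1_000_000),
                                  rng.integers(0, 30_000, 1_000_000)])
        half_dose = undersample_listmode(events, dose_fraction=0.5)
        print(len(half_dose) / len(events))   # ~0.5; the reduced set is then reconstructed as usual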

  14. Detector Simulations with DD4hep

    NASA Astrophysics Data System (ADS)

    Petrič, M.; Frank, M.; Gaede, F.; Lu, S.; Nikiforou, N.; Sailer, A.

    2017-10-01

    Detector description is a key component of detector design studies, test beam analyses, and most particle physics experiments that require the simulation of more and more different detector geometries and event types. This paper describes DD4hep, an easy-to-use yet flexible and powerful detector description framework that can be used for detector simulation and also extended to the specific needs of a particular working environment. The linear collider detector concepts ILD, SiD and CLICdp, as well as the detector development collaborations CALICE and FCal, have chosen to adopt the DD4hep geometry framework and its DDG4 pathway to Geant4 as their core simulation and reconstruction tools. The DDG4 plugins suite includes a wide variety of input formats, provides access to the Geant4 particle gun or general particle source, and allows for handling of Monte Carlo truth information, e.g., by linking hits and the primary particle that caused them, which is indispensable for performance and efficiency studies. An extendable array of segmentations and sensitive detectors allows the simulation of a wide variety of detector technologies. This paper shows how DD4hep makes it possible to perform complex Geant4 detector simulations without compiling a single line of additional code, by providing a palette of sub-detector components that can be combined and configured via compact XML files. Simulation is controlled either completely via the command line or via simple Python steering files interpreted by a Python executable. It also discusses how additional plugins and extensions can be created to increase the functionality.

  15. Problems encountered with the use of simulation in an attempt to enhance interpretation of a secondary data source in epidemiologic mental health research

    PubMed Central

    2010-01-01

    Background The longitudinal epidemiology of major depressive episodes (MDE) is poorly characterized in most countries. Some potentially relevant data sources may be underutilized because they are not conducive to estimating the most salient epidemiologic parameters. An available data source in Canada provides estimates that are potentially valuable, but that are difficult to apply in clinical or public health practice. For example, weeks depressed in the past year is assessed in this data source whereas episode duration would be of more interest. The goal of this project was to derive, using simulation, more readily interpretable parameter values from the available data. Findings The data source was a Canadian longitudinal study called the National Population Health Survey (NPHS). A simulation model representing the course of depressive episodes was used to reshape estimates deriving from binary and ordinal logistic models (fit to the NPHS data) into equations more capable of informing clinical and public health decisions. Discrete event simulation was used for this purpose. Whereas the intention was to clarify a complex epidemiology, the models themselves needed to become excessively complex in order to provide an accurate description of the data. Conclusions Simulation methods are useful in circumstances where a representation of a real-world system has practical value. In this particular scenario, the usefulness of simulation was limited both by problems with the data source and by inherent complexity of the underlying epidemiology. PMID:20796271
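
    To make the discrete event simulation idea concrete, the toy sketch below simulates one person's episode history and converts it into a "weeks depressed in the past year" style summary. The onset rate, mean duration, and event-loop structure are illustrative assumptions, not the NPHS-fitted model from the paper.

```python
import random

def simulate_person(years=10.0, onset_rate=0.3, mean_duration_weeks=20.0, seed=None):
    """Simulate one person's depressive episodes with a simple discrete event loop:
    onsets arrive as a Poisson process (onset_rate per year) and episode durations
    are exponential (mean_duration_weeks). Rates are illustrative, not NPHS estimates."""
    rng = random.Random(seed)
    t, episodes = 0.0, []
    while True:
        t += rng.expovariate(onset_rate)                        # years until the next onset
        if t >= years:
            break
        duration = rng.expovariate(1.0 / mean_duration_weeks)   # episode duration in weeks
        episodes.append((t, duration))
        t += duration / 52.0                                    # advance the clock past the episode
    return episodes

# Convert episode durations into a "weeks depressed in the past year" style summary
# (episodes straddling the final-year boundary are ignored for brevity).
eps = simulate_person(seed=1)
weeks_last_year = sum(d for start, d in eps if start >= 9.0)
print(round(weeks_last_year, 1))
```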

  16. Injection Efficiency of Low-energy Particles at Oblique Shocks with a Focused Transport Model

    NASA Astrophysics Data System (ADS)

    Zuo, P.; Zhang, M.; Rassoul, H.

    2013-12-01

    There is strong evidence that a small portion of thermal and suprathermal particles from hot coronal material or remnants of previous solar energetic particle (SEP) events serve as the source of large SEP events (Desai et al. 2006). To build more powerful SEP models, it is necessary to model the detailed injection and acceleration process for source particles, especially at lower energies. We present a test particle simulation of the injection and acceleration of low-energy suprathermal particles by laminar nonrelativistic oblique shocks in the framework of focused transport theory, which has been shown to contain all the necessary physics of shock acceleration while avoiding the limitations of diffusive shock acceleration (DSA). The injection efficiency as a function of Mach number, obliquity, injection speed, shock strength, cross-shock potential, and the degree of turbulence is calculated. This test particle simulation demonstrates that focused transport theory is an extension of DSA theory with the capability of predicting the efficiency of particle injection. The results can be applied to modeling SEP acceleration from source particles.

  17. An investigation into pilot and system response to critical in-flight events. Volume 1: Executive summary

    NASA Technical Reports Server (NTRS)

    Rockwell, T. H.; Griffin, W. C.

    1981-01-01

    Critical in-flight events (CIFE) that threaten the aircraft were studied. The scope of the CIFE was described and defined with emphasis on characterizing event development, detection, and assessment; pilot information requirements, sources, acquisition, and interpretation; pilot response options, decision processes, and decision implementation; and event outcome. Detailed scenarios were developed for use in simulators and paper-and-pencil testing, both for developing relationships between pilot performance and background information and for an analysis of pilot reaction, decision, and feedback processes. Statistical relationships among pilot characteristics and observed responses to CIFEs were developed.

  18. Analysis of mutational changes at the HLA locus in single human sperm.

    PubMed

    Huang, M M; Erlich, H A; Goodman, M F; Arnheim, N

    1995-01-01

    Using a simple and efficient single sperm PCR and direct sequencing method, we screened for HLA-DPB1 gene mutations that may give rise to new alleles at this highly polymorphic locus. More than 800 single sperm were studied from a heterozygous individual whose two alleles carried 16 nucleotide sequence differences clustered in six polymorphic regions. A potential microgene conversion event was detected. Unrepaired heteroduplex DNA similar to that which gives rise to postmeiotic segregation events in yeast was observed in three cases. Control experiments also revealed unusual sperm from DPB1 homozygous individuals. The data may help explain allelic diversity in the MHC and suggest that a possible source of human mosaicism may be incomplete DNA mismatch repair during gametogenesis.

  19. Single Versus Multiple Events Error Potential Detection in a BCI-Controlled Car Game With Continuous and Discrete Feedback.

    PubMed

    Kreilinger, Alex; Hiebel, Hannah; Müller-Putz, Gernot R

    2016-03-01

    This work aimed to find and evaluate a new method for detecting errors in continuous brain-computer interface (BCI) applications. Instead of classifying errors on a single-trial basis, the new method was based on multiple events (MEs) analysis to increase the accuracy of error detection. In a BCI-driven car game, based on motor imagery (MI), discrete events were triggered whenever subjects collided with coins and/or barriers. Coins counted as correct events, whereas barriers were errors. This new method, termed ME method, combined and averaged the classification results of single events (SEs) and determined the correctness of MI trials, which consisted of event sequences instead of SEs. The benefit of this method was evaluated in an offline simulation. In an online experiment, the new method was used to detect erroneous MI trials. Such MI trials were discarded and could be repeated by the users. We found that, even with low SE error potential (ErrP) detection rates, feasible accuracies can be achieved when combining MEs to distinguish erroneous from correct MI trials. Online, all subjects reached higher scores with error detection than without, at the cost of longer times needed for completing the game. Findings suggest that ErrP detection may become a reliable tool for monitoring continuous states in BCI applications when combining MEs. This paper demonstrates a novel technique for detecting errors in online continuous BCI applications, which yields promising results even with low single-trial detection rates.
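
    The core of the multiple-events (ME) rule is simply to pool the single-event classifier outputs within a trial before deciding. The sketch below shows one way to do this by averaging per-event error probabilities and thresholding; the threshold and the probability values are illustrative assumptions, not parameters reported in the paper.

```python
import numpy as np

def trial_is_error(single_event_probs, threshold=0.5):
    """Decide whether an MI trial is erroneous by averaging single-event
    error-potential classifier outputs (a toy version of a multiple-events rule;
    the 0.5 threshold is arbitrary, not taken from the study)."""
    return float(np.mean(single_event_probs)) > threshold

# Three events in one trial were scored by the single-event ErrP classifier with
# these error probabilities; each alone is weak evidence, but jointly they tip the decision.
probs = [0.55, 0.62, 0.48]
print(trial_is_error(probs))   # True
```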

  20. Magnetic field enhanced resonant tunneling in a silicon nanowire single-electron-transistor.

    PubMed

    Aravind, K; Lin, M C; Ho, I L; Wu, C S; Kuo, Watson; Kuan, C H; Chang-Liao, K S; Chen, C D

    2012-03-01

    We report fabrication, measurement, and simulation of silicon single-electron transistors made on silicon-on-insulator wafers. At T ~ 2 K, these devices showed clear Coulomb blockade structures. An external perpendicular magnetic field was found to enhance the resonant tunneling peak and was used to infer the presence of two laterally coupled quantum dots in the narrow constriction between the source and drain electrodes. The proposed model and the measured experimental data were consistently explained using numerical simulations.

  1. A source number estimation method for single optical fiber sensor

    NASA Astrophysics Data System (ADS)

    Hu, Junpeng; Huang, Zhiping; Su, Shaojing; Zhang, Yimeng; Liu, Chunwu

    2015-10-01

    Single-channel blind source separation (SCBSS) is of great significance in many fields, such as optical fiber communication, sensor detection, and image processing, and achieving blind source separation (BSS) from the data received by a single optical fiber sensor has a wide range of applications. The performance of many BSS algorithms and signal processing methods degrades when the source number is estimated inaccurately. Many excellent algorithms have been proposed for source number estimation in array signal processing with multiple sensors, but they cannot be applied directly to the single-sensor case. This paper presents a source number estimation method for data received by a single optical fiber sensor. Through a delay-embedding process, the single-sensor data are converted into a multi-dimensional form and the data covariance matrix is constructed, so that the estimation algorithms used in array signal processing can be applied. Information-theoretic criteria (ITC), represented by AIC and MDL, and Gerschgorin's disk estimation (GDE) are introduced to estimate the source number of the single optical fiber sensor's received signal. To improve the performance of these estimators at low signal-to-noise ratio (SNR), a smoothing process is applied to the data covariance matrix, which reduces the fluctuation and uncertainty of its eigenvalues. Simulation results show that ITC-based methods cannot estimate the source number effectively under colored noise, whereas the GDE method, although its performance is poor at low SNR, is able to estimate the number of sources accurately in colored noise. The experiments also show that the proposed method can be applied to estimate the source number from single-sensor data.
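
    The delay-embedding plus information-theoretic-criterion step can be sketched in a few lines of NumPy. The version below builds the embedded covariance matrix and applies the standard MDL criterion to its eigenvalues; the embedding dimension, test signal, and the omission of the paper's covariance-smoothing step are assumptions made for brevity.

```python
import numpy as np

def source_number_mdl(x, embed_dim=8, max_sources=None):
    """Estimate the number of sources in a single-channel signal by delay-embedding
    it into pseudo-channels, forming a covariance matrix, and applying the MDL
    criterion to its eigenvalues. Schematic only; the covariance smoothing step
    described in the paper is omitted."""
    n = len(x) - embed_dim + 1
    X = np.stack([x[i:i + n] for i in range(embed_dim)])      # delay-embedded data matrix
    R = X @ X.T / n                                           # sample covariance matrix
    lam = np.sort(np.linalg.eigvalsh(R))[::-1]                # eigenvalues, descending
    p = embed_dim
    kmax = max_sources if max_sources is not None else p - 1
    mdl = []
    for k in range(kmax + 1):
        noise = lam[k:]
        geo = np.exp(np.mean(np.log(noise)))                  # geometric mean of noise eigenvalues
        arith = np.mean(noise)                                # arithmetic mean of noise eigenvalues
        mdl.append(-n * (p - k) * np.log(geo / arith) + 0.5 * k * (2 * p - k) * np.log(n))
    return int(np.argmin(mdl))

# Two real sinusoids in white noise; each real sinusoid spans a two-dimensional
# signal subspace, so an estimate of about 4 is expected.
t = np.arange(4000)
x = np.sin(0.2 * t) + 0.5 * np.sin(0.9 * t) + 0.1 * np.random.randn(len(t))
print(source_number_mdl(x))
```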

  2. Comparison of hybrid and pure Monte Carlo shower generators on an event by event basis

    NASA Astrophysics Data System (ADS)

    Allen, J.; Drescher, H.-J.; Farrar, G.

    SENECA is a hybrid air shower simulation written by H. Drescher that utilizes both Monte Carlo simulation and cascade equations. By using the cascade equations only in the high-energy portion of the shower, where they are extremely accurate, SENECA is able to exploit the speed advantage of the cascade equations yet still produce complete, three-dimensional particle distributions at ground level. We present a comparison, on an event-by-event basis, of SENECA and CORSIKA, a well-trusted Monte Carlo simulation. By using the same first interaction in both SENECA and CORSIKA, the effect of the cascade equations can be studied within a single shower, rather than in averages over many showers. Our study shows that for showers produced in this manner, SENECA agrees with CORSIKA to a very high accuracy in the densities, energies, and timing information of individual species of ground-level particles, for both iron and proton primaries with energies between 1 EeV and 100 EeV. Used properly, SENECA produces ground particle distributions virtually indistinguishable from those of CORSIKA in a fraction of the time. For example, for a shower induced by a 40 EeV proton simulated with 10⁻⁶ thinning, SENECA is 10 times faster than CORSIKA.

  3. Source parameter inversion of compound earthquakes on GPU/CPU hybrid platform

    NASA Astrophysics Data System (ADS)

    Wang, Y.; Ni, S.; Chen, W.

    2012-12-01

    Source parameters of earthquakes are an essential problem in seismology. Accurate and timely determination of earthquake parameters (such as moment, depth, and the strike, dip, and rake of fault planes) is significant both for rupture dynamics and for ground motion prediction or simulation. Rupture process studies, especially for moderate and large earthquakes, are also essential, as detailed kinematic analysis has become routine work for seismologists. However, among these events, some behave very unusually and intrigue seismologists. Such earthquakes usually consist of two sub-events of similar size that occur within a very short time interval, such as the mb 4.5 event of 9 December 2003 in Virginia. Studying these special events, including determining the source parameters of each sub-event, will be helpful for understanding earthquake dynamics. However, the seismic signals of the two distinct sources are mixed, which makes inversion difficult. For ordinary events, the Cut-and-Paste (CAP) method has proven effective for resolving source parameters; it jointly uses body waves and surface waves with independent time shifts and weights, and resolves fault orientation and focal depth with a grid-search algorithm. Based on this method, we developed an algorithm (MUL_CAP) to acquire the parameters of two distinct sub-events simultaneously. Because the simultaneous inversion of both sub-events is very time consuming, we also developed a hybrid GPU/CPU version of CAP (HYBRID_CAP) to improve computational efficiency. Thanks to the advantages of multi-dimensional storage and processing on the GPU, the revised code achieves excellent performance on the combined GPU-CPU architecture, with speedup factors as high as 40x-90x compared to classical CAP on a traditional CPU architecture. As a benchmark, we take synthetics as observations and invert for the source parameters of two given sub-events; the inversion results are very consistent with the true parameters. For the 9 December 2003 event in Virginia, USA, we re-invert the source parameters, and detailed analysis of regional waveforms indicates that the Virginia earthquake included two sub-events of Mw 4.05 and Mw 4.25 at the same depth of 10 km with a focal mechanism of strike 65/dip 32/rake 135, consistent with previous studies. Moreover, compared to the traditional two-source-model method, MUL_CAP is more automatic, with no need for human intervention.
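
    The grid-search core of a CAP-style inversion can be illustrated with a short sketch. The toy forward model and the step sizes below are assumptions for illustration; a real CAP implementation computes synthetics from Green's functions and allows independent time shifts and weights for body and surface waves.

```python
import numpy as np
from itertools import product

def grid_search_mechanism(observed, synthetic_fn, step=15):
    """Toy grid search over strike/dip/rake minimizing an L2 waveform misfit.
    `synthetic_fn(strike, dip, rake)` stands in for the Green's-function machinery
    of a real CAP-style inversion and must return a waveform matching `observed`."""
    best_params, best_misfit = None, np.inf
    for strike, dip, rake in product(range(0, 360, step),
                                     range(0, 91, step),
                                     range(-180, 180, step)):
        misfit = np.sum((observed - synthetic_fn(strike, dip, rake)) ** 2)
        if misfit < best_misfit:
            best_params, best_misfit = (strike, dip, rake), misfit
    return best_params, best_misfit

# Toy stand-in for a Green's-function-based forward model.
t = np.linspace(0.0, 10.0, 200)
def toy_synthetic(strike, dip, rake):
    return np.exp(-0.1 * t) * (np.cos(np.radians(dip)) * np.sin(np.pi * t + np.radians(strike))
                               + 0.3 * np.sin(np.radians(rake)) * np.cos(np.pi * t))

observed = toy_synthetic(60, 30, 135)
print(grid_search_mechanism(observed, toy_synthetic))   # best-fitting angles and their misfit
```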

  4. Annual Conference on Nuclear and Space Radiation Effects, 18th, University of Washington, Seattle, WA, July 21-24, 1981, Proceedings

    NASA Technical Reports Server (NTRS)

    Tasca, D. M.

    1981-01-01

    Single event upset phenomena are discussed, taking into account cosmic ray induced errors in I2L microprocessors and logic devices, single event upsets in NMOS microprocessors, a prediction model for bipolar RAMs in a high energy ion/proton environment, the search for neutron-induced hard errors in VLSI structures, soft errors due to protons in the radiation belt, and the use of an ion microbeam to study single event upsets in microcircuits. Basic mechanisms in materials and devices are examined, giving attention to gamma-induced noise in CCDs, the annealing of MOS capacitors, an analysis of photobleaching techniques for the radiation hardening of fiber optic data links, a hardened field insulator, the simulation of radiation damage in solids, and the manufacturing of radiation resistant optical fibers. Energy deposition and dosimetry is considered along with SGEMP/IEMP, radiation effects in devices, space radiation effects and spacecraft charging, EMP/SREMP, and aspects of fabrication, testing, and hardness assurance.

  5. Differential and Dose-Dependent Inflammatory Responses in a Mouse Model of Respirable Instillation of Environmental Diesel, Emission-Source Diesel, and Ambient Air Pollution Particles In Vivo.

    EPA Science Inventory

    Rationale: Previously, we found that ambient particulate matter (APM) activates pulmonary dendritic cells in vitro. We hypothesized that single acute exposures to PM would promote inflammatory activation of the lung in vivo and provide information on early immunological events of...

  6. On the wind-induced undercatch in rainfall measurement using CFD-based simulations

    NASA Astrophysics Data System (ADS)

    Colli, Matteo; Lanza, Luca

    2016-04-01

    The reliability of liquid atmospheric precipitation measurements is a basic requirement, since rainfall data represent the fundamental input variables of many scientific applications (hydrologic models, weather forecasting data assimilation, climate change studies, calibration of weather radar, etc.). The scientific community and the National Meteorological Services worldwide are facing the issue of improving the accuracy of precipitation measurements, with an increased focus on retrieving the information at a high temporal resolution. The rainfall intensity is indeed fundamental information for the precise quantification of the markedly time-varying behavior of precipitation events. Environmental conditions have a relevant impact on the rain collection/sensing efficiency. Among other effects, wind is recognized as a major source of underestimation, since it reduces the collection efficiency of catching-type gauges (Nespor and Sevruk, 1999), the most common type of instruments used worldwide in national observation networks. The collection efficiency is usually obtained by comparing the rainfall amounts measured by the gauge with the reference, defined by the EN 13798 standard (CEN, 2002) as a gauge placed below ground level inside a pit. Considerable scatter can be observed for a given wind speed, mainly caused by comparability issues among the tested gauges. An additional source of uncertainty is the drop size distribution (DSD) of the rain, which varies on an event-by-event basis. The goal of this study is to understand the role of the physical characteristics of precipitation particles in the wind-induced rainfall underestimation observed for catching-type gauges. To address this issue, a detailed analysis of the flow field in the vicinity of the gauge is conducted using time-averaged computational fluid dynamics (CFD) simulations (Colli et al., 2015). Using a Lagrangian model, which accounts for the hydrodynamic behavior of liquid particles in the atmosphere, droplet trajectories are calculated to obtain the collection efficiency associated with different drop size distributions and varying wind speeds. The main benefit of investigating this error by means of CFD simulations is the possibility of singling out the prevailing environmental factors from the instrumental performance of the gauges under analysis. The preliminary analysis shows the variations in the catch efficiency due to the horizontal wind speed and the DSD. Overall, this study contributes to a better understanding of the environmental sources of uncertainty in rainfall measurements. References: Colli, M., R. Rasmussen, J. M. Theriault, L. G. Lanza, C. B. Baker & J. Kochendorfer (2015) An Improved Trajectory Model to Evaluate the Collection Performance of Snow Gauges. Journal of Applied Meteorology and Climatology, 54, 1826-1836. Nespor, V. and Sevruk, B. (1999). Estimation of wind-induced error of rainfall gauge measurements using a numerical simulation. J. Atmos. Ocean. Tech, 16(4), 450-464. CEN (2002). EN 13798:2002 Hydrometry - Specification for a reference raingauge pit. European Committee for Standardization.
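
    The basic mechanism, that smaller drops are deflected more by the wind, can be illustrated with a highly simplified Lagrangian sketch. The sketch below integrates a single drop with quadratic drag in a uniform horizontal wind; the drag coefficient, densities, and fall height are assumed constants, and the real study uses the CFD flow field distorted by the gauge body rather than a uniform wind.

```python
import numpy as np

def horizontal_drift(diameter_mm, wind_u, fall_height=10.0, dt=1e-3):
    """Horizontal drift of a raindrop falling `fall_height` metres through a uniform
    horizontal wind, using quadratic drag. A simplified stand-in for the study's
    CFD/Lagrangian approach, where the flow distortion around the gauge matters."""
    rho_air, rho_w, g, Cd = 1.2, 1000.0, 9.81, 0.5   # assumed constants
    d = diameter_mm * 1e-3
    m = rho_w * np.pi * d**3 / 6.0                   # drop mass
    A = np.pi * d**2 / 4.0                           # cross-sectional area
    pos, vel = np.array([0.0, fall_height]), np.array([0.0, 0.0])
    wind = np.array([wind_u, 0.0])
    while pos[1] > 0.0:
        rel = wind - vel                             # air velocity relative to the drop
        drag = 0.5 * rho_air * Cd * A * np.linalg.norm(rel) * rel
        vel = vel + (drag / m + np.array([0.0, -g])) * dt
        pos = pos + vel * dt
    return pos[0]

# Smaller drops reach lower fall speeds and are displaced further by the wind,
# which is why the undercatch depends on the drop size distribution.
print(horizontal_drift(0.5, 5.0), horizontal_drift(3.0, 5.0))
```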

  7. Stochastic Earthquake Rupture Modeling Using Nonparametric Co-Regionalization

    NASA Astrophysics Data System (ADS)

    Lee, Kyungbook; Song, Seok Goo

    2017-09-01

    Accurate predictions of the intensity and variability of ground motions are essential in simulation-based seismic hazard assessment. Advanced simulation-based ground motion prediction methods have been proposed to complement the empirical approach, which suffers from the lack of observed ground motion data, especially in the near-source region for large events. It is important to quantify the variability of the earthquake rupture process for future events and to produce a number of rupture scenario models to capture the variability in simulation-based ground motion predictions. In this study, we improved the previously developed stochastic earthquake rupture modeling method by applying nonparametric co-regionalization, proposed in geostatistics, to the correlation models estimated from dynamically derived earthquake rupture models. The nonparametric approach adopted in this study is computationally efficient and therefore enables us to simulate numerous rupture scenarios, including large events (M > 7.0). It also gives us an opportunity to check the shape of the true input correlation models in stochastic modeling after they are deformed for permissibility. We expect that this type of modeling will improve our ability to simulate a wide range of rupture scenario models and thereby predict ground motions and perform seismic hazard assessment more accurately.
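
    The idea of drawing spatially correlated rupture parameters can be illustrated with a simple geostatistical sketch. The version below generates a one-dimensional correlated slip profile by Cholesky factorization of an exponential covariance; the covariance form, correlation length, and lognormal transform are assumptions for illustration and do not reproduce the paper's nonparametric, cross-correlated co-regionalization model.

```python
import numpy as np

def correlated_slip(n=64, dx=1.0, corr_len=10.0, seed=0):
    """Generate a 1-D spatially correlated random slip profile by Cholesky
    factorization of an exponential covariance. A schematic stand-in for the
    multi-parameter rupture fields discussed in the paper."""
    rng = np.random.default_rng(seed)
    x = np.arange(n) * dx
    C = np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)   # exponential covariance
    L = np.linalg.cholesky(C + 1e-10 * np.eye(n))             # small jitter for stability
    field = L @ rng.standard_normal(n)                        # zero-mean correlated field
    slip = np.exp(field)                                      # lognormal, non-negative slip
    return x, slip

x, slip = correlated_slip()
print(slip[:5])   # one random, spatially correlated rupture scenario
```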

  8. Modeling solar energetic particle events using ENLIL heliosphere simulations

    NASA Astrophysics Data System (ADS)

    Luhmann, J. G.; Mays, M. L.; Odstrcil, D.; Li, Yan; Bain, H.; Lee, C. O.; Galvin, A. B.; Mewaldt, R. A.; Cohen, C. M. S.; Leske, R. A.; Larson, D.; Futaana, Y.

    2017-07-01

    Solar energetic particle (SEP) event modeling has gained renewed attention in part because of the availability of a decade of multipoint measurements from STEREO and L1 spacecraft at 1 AU. These observations are coupled with improving simulations of the geometry and strength of heliospheric shocks obtained by using coronagraph images to send erupted material into realistic solar wind backgrounds. The STEREO and ACE measurements in particular have highlighted the sometimes surprisingly widespread nature of SEP events. It is thus an opportune time for testing SEP models, which typically focus on protons 1-100 MeV, toward both physical insight to these observations and potentially useful space radiation environment forecasting tools. Some approaches emphasize the concept of particle acceleration and propagation from close to the Sun, while others emphasize the local field line connection to a traveling, evolving shock source. Among the latter is the previously introduced SEPMOD treatment, based on the widely accessible and well-exercised WSA-ENLIL-cone model. SEPMOD produces SEP proton time profiles at any location within the ENLIL domain. Here we demonstrate a SEPMOD version that accommodates multiple, concurrent shock sources occurring over periods of several weeks. The results illustrate the importance of considering longer-duration time periods and multiple CME contributions in analyzing, modeling, and forecasting SEP events.

  9. Time-Bin-Encoded Boson Sampling with a Single-Photon Device.

    PubMed

    He, Yu; Ding, X; Su, Z-E; Huang, H-L; Qin, J; Wang, C; Unsleber, S; Chen, C; Wang, H; He, Y-M; Wang, X-L; Zhang, W-J; Chen, S-J; Schneider, C; Kamp, M; You, L-X; Wang, Z; Höfling, S; Lu, Chao-Yang; Pan, Jian-Wei

    2017-05-12

    Boson sampling is a problem strongly believed to be intractable for classical computers, but can be naturally solved on a specialized photonic quantum simulator. Here, we implement the first time-bin-encoded boson sampling using a highly indistinguishable (∼94%) single-photon source based on a single quantum-dot-micropillar device. The protocol requires only one single-photon source, two detectors, and a loop-based interferometer for an arbitrary number of photons. The single-photon pulse train is time-bin encoded and deterministically injected into an electrically programmable multimode network. The observed three- and four-photon boson sampling rates are 18.8 and 0.2 Hz, respectively, which are more than 100 times faster than previous experiments based on parametric down-conversion.

  10. Analysis and visualization of single-trial event-related potentials

    NASA Technical Reports Server (NTRS)

    Jung, T. P.; Makeig, S.; Westerfield, M.; Townsend, J.; Courchesne, E.; Sejnowski, T. J.

    2001-01-01

    In this study, a linear decomposition technique, independent component analysis (ICA), is applied to single-trial multichannel EEG data from event-related potential (ERP) experiments. Spatial filters derived by ICA blindly separate the input data into a sum of temporally independent and spatially fixed components arising from distinct or overlapping brain or extra-brain sources. Both the data and their decomposition are displayed using a new visualization tool, the "ERP image," that can clearly characterize single-trial variations in the amplitudes and latencies of evoked responses, particularly when sorted by a relevant behavioral or physiological variable. These tools were used to analyze data from a visual selective attention experiment on 28 control subjects plus 22 neurological patients whose EEG records were heavily contaminated with blink and other eye-movement artifacts. Results show that ICA can separate artifactual, stimulus-locked, response-locked, and non-event-related background EEG activities into separate components, a taxonomy not obtained from conventional signal averaging approaches. This method allows: (1) removal of pervasive artifacts of all types from single-trial EEG records, (2) identification and segregation of stimulus- and response-locked EEG components, (3) examination of differences in single-trial responses, and (4) separation of temporally distinct but spatially overlapping EEG oscillatory activities with distinct relationships to task events. The proposed methods also allow the interaction between ERPs and the ongoing EEG to be investigated directly. We studied the between-subject component stability of ICA decomposition of single-trial EEG epochs by clustering components with similar scalp maps and activation power spectra. Components accounting for blinks, eye movements, temporal muscle activity, event-related potentials, and event-modulated alpha activities were largely replicated across subjects. Applying ICA and ERP image visualization to the analysis of sets of single trials from event-related EEG (or MEG) experiments can increase the information available from ERP (or ERF) data. Copyright 2001 Wiley-Liss, Inc.
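
    A minimal sketch of the ICA-decomposition step is given below, using scikit-learn's FastICA on synthetic epochs. FastICA is used here only as a readily available stand-in; the study's specific ICA algorithm, channel count, and epoch structure are assumptions for illustration.

```python
import numpy as np
from sklearn.decomposition import FastICA

# Synthetic single-trial EEG epochs, shape (n_trials, n_channels, n_samples).
rng = np.random.default_rng(0)
n_trials, n_channels, n_samples = 50, 8, 256
epochs = rng.standard_normal((n_trials, n_channels, n_samples))

# Concatenate trials in time so ICA sees one long multichannel record,
# then unmix into temporally independent, spatially fixed components.
X = epochs.transpose(1, 0, 2).reshape(n_channels, -1).T      # (time, channels)
ica = FastICA(n_components=n_channels, random_state=0)
sources = ica.fit_transform(X)                               # component activations (time, components)
scalp_maps = ica.mixing_                                     # one spatial map per component

# Reshape activations back to (trials, components, samples); stacking the trials of
# one component as image rows, optionally sorted by reaction time, gives an "ERP image".
acts = sources.T.reshape(n_channels, n_trials, n_samples).transpose(1, 0, 2)
print(acts.shape)
```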

  11. Local Infrasound Variability Related to In Situ Atmospheric Observation

    NASA Astrophysics Data System (ADS)

    Kim, Keehoon; Rodgers, Arthur; Seastrand, Douglas

    2018-04-01

    Local infrasound is widely used to constrain source parameters of near-surface events (e.g., chemical explosions and volcanic eruptions). While atmospheric conditions are critical to infrasound propagation and source parameter inversion, local atmospheric variability is often ignored by assuming homogeneous atmospheres, and its impact on source inversion uncertainty has never been accounted for, owing to the lack of a quantitative understanding of infrasound variability. We investigate atmospheric impacts on local infrasound propagation with repeated explosion experiments using a dense acoustic network and in situ atmospheric measurements. We perform full 3-D waveform simulations with local atmospheric data and a numerical weather forecast model to quantify atmosphere-dependent infrasound variability and to address the advantages and limitations of local weather data and numerical weather models for sound propagation simulation. Numerical simulations with stochastic atmosphere models also showed a non-negligible influence of atmospheric heterogeneity on infrasound amplitude, suggesting an important role for local turbulence.

  12. Cosmogenic activation of germanium used for tonne-scale rare event search experiments

    NASA Astrophysics Data System (ADS)

    Wei, W.-Z.; Mei, D.-M.; Zhang, C.

    2017-11-01

    We report a comprehensive study of cosmogenic activation of germanium used for tonne-scale rare event search experiments. The exposure of germanium to cosmic rays on the Earth's surface is simulated with and without a shielding container using Geant4 for a given cosmic muon, neutron, and proton energy spectrum. The production rates of various radioactive isotopes are obtained for the different sources separately. We find that fast neutron induced interactions dominate the production rate of cosmogenic activation. Geant4-based simulation results are compared with calculations from ACTIVIA and with the available experimental data. A reasonable agreement between Geant4 simulations and several experimental data sets is presented. We predict that cosmogenic activation of germanium can set limits on the sensitivity of the next generation of tonne-scale experiments.

  13. FastSim: A Fast Simulation for the SuperB Detector

    NASA Astrophysics Data System (ADS)

    Andreassen, R.; Arnaud, N.; Brown, D. N.; Burmistrov, L.; Carlson, J.; Cheng, C.-h.; Di Simone, A.; Gaponenko, I.; Manoni, E.; Perez, A.; Rama, M.; Roberts, D.; Rotondo, M.; Simi, G.; Sokoloff, M.; Suzuki, A.; Walsh, J.

    2011-12-01

    We have developed a parameterized (fast) simulation for detector optimization and physics reach studies of the proposed SuperB Flavor Factory in Italy. Detector components are modeled as thin sections of planes, cylinders, disks, or cones. Particle-material interactions are modeled using simplified cross-sections and formulas. Active detectors are modeled using parameterized response functions. Geometry and response parameters are configured using XML files with a custom-designed schema. Reconstruction algorithms adapted from BaBar are used to build tracks and clusters. Multiple sources of background signals can be merged with primary signals. Pattern recognition errors are modeled statistically by randomly misassigning nearby tracking hits. Standard BaBar analysis tuples are used as the event output. Hadronic B meson pair events can be simulated at roughly 10 Hz.
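
    The notion of a parameterized detector response can be illustrated with a short sketch: instead of tracking particles through material, each true quantity is either lost with some inefficiency or smeared with a resolution function. The efficiency and resolution values below are illustrative assumptions, not the SuperB FastSim parameterization.

```python
import numpy as np

def fast_detector_response(true_pt, resolution=0.01, efficiency=0.98, seed=0):
    """Parameterized 'fast simulation' of a tracker: each true transverse momentum
    is either lost (probability 1 - efficiency) or smeared with a Gaussian relative
    resolution. Numbers are illustrative only."""
    rng = np.random.default_rng(seed)
    true_pt = np.asarray(true_pt, dtype=float)
    detected = rng.random(true_pt.shape) < efficiency
    smeared = true_pt * (1.0 + resolution * rng.standard_normal(true_pt.shape))
    return np.where(detected, smeared, np.nan)     # NaN marks undetected particles

print(fast_detector_response([1.0, 2.5, 5.0]))
```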

  14. Impact of animal waste application on runoff water quality in field experimental plots.

    PubMed

    Hill, Dagne D; Owens, William E; Tchounwou, Paul B

    2005-08-01

    Animal waste from dairy and poultry operations is an economical and commonly used fertilizer in the state of Louisiana. The application of animal waste to pasture lands not only is a source of fertilizer, but also allows for a convenient method of waste disposal. The disposal of animal wastes on land is a potential nonpoint source of water degradation. Water degradation and human health is a major concern when considering the disposal of large quantities of animal waste. The objective of this research was to determine the effect of animal waste application on biological (fecal coliform, Enterobacter spp. and Escherichia coli) and physical/chemical (temperature, pH, nitrate nitrogen, ammonia nitrogen, phosphate, copper, zinc, and sulfate) characteristics of runoff water in experimental plots. The effects of the application of animal waste have been evaluated by utilizing experimental plots and simulated rainfall events. Samples of runoff water were collected and analyzed for fecal coliforms. Fecal coliforms isolated from these samples were identified to the species level. Chemical analysis was performed following standard test protocols. An analysis of temperature, ammonia nitrogen, nitrate nitrogen, iron, copper, phosphate, potassium, sulfate, zinc and bacterial levels was performed following standard test protocols as presented in Standard Methods for the Examination of Water and Wastewater [1]. In the experimental plots, less time was required in the tilled broiler litter plots for the measured chemicals to decrease below the initial pre-treatment levels. A decrease of over 50% was noted between the first and second rainfall events for sulfate levels. This decrease was seen after only four simulated rainfall events in tilled broiler litter plots whereas broiler litter plots required eight simulated rainfall events to show this same type of reduction. A reverse trend was seen in the broiler litter plots and the tilled broiler plots for potassium. Bacteria numbers present after the simulated rainfall events were above 200/100 ml of sample water. It can be concluded that: 1) non-point source pollution has a significant effect on bacterial and nutrients levels in runoff water and in water resources; 2) land application of animal waste for soil fertilization makes a significant contribution to water pollution; 3) the use of tilling can significantly reduce the amount of nutrients available in runoff water.

  15. Impact of Animal Waste Application on Runoff Water Quality in Field Experimental Plots

    PubMed Central

    Hill, Dagne D.; Owens, William E.; Tchounwou, Paul B.

    2005-01-01

    Animal waste from dairy and poultry operations is an economical and commonly used fertilizer in the state of Louisiana. The application of animal waste to pasture lands not only is a source of fertilizer, but also allows for a convenient method of waste disposal. The disposal of animal wastes on land is a potential nonpoint source of water degradation. Water degradation and human health is a major concern when considering the disposal of large quantities of animal waste. The objective of this research was to determine the effect of animal waste application on biological (fecal coliform, Enterobacter spp. and Escherichia coli) and physical/chemical (temperature, pH, nitrate nitrogen, ammonia nitrogen, phosphate, copper, zinc, and sulfate) characteristics of runoff water in experimental plots. The effects of the application of animal waste have been evaluated by utilizing experimental plots and simulated rainfall events. Samples of runoff water were collected and analyzed for fecal coliforms. Fecal coliforms isolated from these samples were identified to the species level. Chemical analysis was performed following standard test protocols. An analysis of temperature, ammonia nitrogen, nitrate nitrogen, iron, copper, phosphate, potassium, sulfate, zinc and bacterial levels was performed following standard test protocols as presented in Standard Methods for the Examination of Water and Wastewater [1]. In the experimental plots, less time was required in the tilled broiler litter plots for the measured chemicals to decrease below the initial pre-treatment levels. A decrease of over 50% was noted between the first and second rainfall events for sulfate levels. This decrease was seen after only four simulated rainfall events in tilled broiler litter plots whereas broiler litter plots required eight simulated rainfall events to show this same type of reduction. A reverse trend was seen in the broiler litter plots and the tilled broiler plots for potassium. Bacteria numbers present after the simulated rainfall events were above 200/100 ml of sample water. It can be concluded that: 1) non-point source pollution has a significant effect on bacterial and nutrients levels in runoff water and in water resources; 2) land application of animal waste for soil fertilization makes a significant contribution to water pollution; 3) the use of tilling can significantly reduce the amount of nutrients available in runoff water. PMID:16705834

  16. Analysis on flood generation processes by means of a continuous simulation model

    NASA Astrophysics Data System (ADS)

    Fiorentino, M.; Gioia, A.; Iacobellis, V.; Manfreda, S.

    2006-03-01

    In the present research, we exploited a continuous hydrological simulation to investigate the key variables responsible for flood peak formation. For this purpose, a distributed hydrological model (DREAM) is used in cascade with a rainfall generator (IRP, Iterated Random Pulse) to simulate a large number of extreme events, providing insight into the main controls on flood generation mechanisms. The investigated variables are those used in theoretically derived probability distributions of floods based on the concept of partial contributing area (e.g. Iacobellis and Fiorentino, 2000). The continuous simulation model is used to investigate the hydrological losses occurring during extreme events, the variability of the source area contributing to the flood peak, and its lag time. Results suggest interesting simplifications of the theoretical probability distribution of floods according to the different climatic and geomorphologic environments. The study is applied to two basins located in Southern Italy with different climatic characteristics.

  17. Simulation of SEU Cross-sections using MRED under Conditions of Limited Device Information

    NASA Technical Reports Server (NTRS)

    Lauenstein, J. M.; Reed, R. A.; Weller, R. A.; Mendenhall, M. H.; Warren, K. M.; Pellish, J. A.; Schrimpf, R. D.; Sierawski, B. D.; Massengill, L. W.; Dodd, P. E.; hide

    2007-01-01

    This viewgraph presentation reviews the simulation of single event upset (SEU) cross sections using the Monte Carlo Radiative Energy Deposition (MRED) tool with "best guess" assumptions about the process and geometry, together with direct-ionization, low-energy beam test results. This work also simulates SEU cross sections including angular and high-energy responses and compares the simulated results with beam test data for validation of the model. Using MRED, we produced a reasonably accurate upset response model of a low-critical-charge SRAM without detailed information about the circuit, device geometry, or fabrication process.

  18. Variability of simulants used in recreating stab events.

    PubMed

    Carr, D J; Wainwright, A

    2011-07-15

    Forensic investigators commonly use simulants/backing materials on which to mount fabrics and/or garments when recreating damage due to stab events. Such work may be conducted in support of an investigation to connect a particular knife to a stabbing event by comparing the severance morphology obtained in the laboratory to that observed in the incident. There does not appear to have been a comparison of the effect of simulant type on the morphology of severances in fabrics and simulants, nor of the variability of simulants. This work investigates three simulants (pork, gelatine, expanded polystyrene), two knife blades (carving, bread), and how severances in the simulants and an apparel fabric typically used to manufacture T-shirts (single jersey) were affected by (i) simulant type and (ii) blade type. Severances were formed using a laboratory impact apparatus to ensure a consistent impact velocity, and hence impact energy, independently of the other variables. The impact velocity was chosen so that the force measured was similar to that measured in human performance trials. Force-time and energy-time curves were analysed and severance morphology (y, z directions) investigated. Simulant type and knife type significantly affected the critical forensic measurement of severance length (y direction) in the fabric and 'skin' (Tuftane). The use of EPS resulted in the lowest variability in the data; further, the severances recorded in both the fabric and Tuftane more accurately reflected the dimensions of the impacting knives. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.

  19. The sensitivity of EGRET to gamma ray polarization

    NASA Astrophysics Data System (ADS)

    Mattox, John R.

    1990-05-01

    A Monte Carlo simulation shows that EGRET (Energetic Gamma Ray Experiment Telescope) does not have sufficient sensitivity to detect even 100 percent polarized gamma rays. This is confirmed by analysis of calibration data. A Monte Carlo study shows that the sensitivity of EGRET to polarization peaks around 100 MeV. However, more than 10⁵ gamma-ray events with 100 percent polarization would be required for a 3 sigma significance detection - more than available from calibration, and probably more than will result from a single source during flight. A drift chamber gamma ray telescope under development (Hunter and Cuddapah 1989) will offer better sensitivity to polarization. The lateral position uncertainty will be improved by an order of magnitude. Also, if pair production occurs in the drift chamber gas (xenon at 2 bar) instead of in tantalum foils, the effects of multiple Coulomb scattering will be reduced.

  20. Application of thin-film breakdown counters for characterization of neutron field of the VESUVIO instrument at the ISIS spallation source

    NASA Astrophysics Data System (ADS)

    Smirnov, A. N.; Pietropaolo, A.; Prokofiev, A. V.; Rodionova, E. E.; Frost, C. D.; Ansell, S.; Schooneveld, E. M.; Gorini, G.

    2012-09-01

    The high-energy neutron field of the VESUVIO instrument at the ISIS facility has been characterized using the technique of thin-film breakdown counters (TFBCs). The technique utilizes neutron-induced fission reactions of natU and 209Bi with detection of the fission fragments by the TFBCs. Experimentally determined count rates of the fragments are ≈50% higher than those calculated using the spectral neutron flux simulated with the MCNPX code. This work is part of the project to develop ChipIr, a new dedicated facility for the accelerated testing of electronic components and systems for neutron-induced single event effects at the new Target Station 2 at ISIS. The TFBC technique has been shown to be applicable for on-line monitoring of the neutron flux in the neutron energy range 1-800 MeV at the position of the device under test (DUT).

  1. Room temperature solid-state quantum emitters in the telecom range

    PubMed Central

    Bodrog, Zoltán; Adamo, Giorgio; Gali, Adam

    2018-01-01

    On-demand, single-photon emitters (SPEs) play a key role across a broad range of quantum technologies. In quantum networks and quantum key distribution protocols, where photons are used as flying qubits, telecom wavelength operation is preferred because of the reduced fiber loss. However, despite the tremendous efforts to develop various triggered SPE platforms, a robust source of triggered SPEs operating at room temperature and the telecom wavelength is still missing. We report a triggered, optically stable, room temperature solid-state SPE operating at telecom wavelengths. The emitters exhibit high photon purity (~5% multiphoton events) and a record-high brightness of ~1.5 MHz. The emission is attributed to localized defects in a gallium nitride (GaN) crystal. The high-performance SPEs embedded in a technologically mature semiconductor are promising for on-chip quantum simulators and practical quantum communication technologies. PMID:29670945

  2. The Stochastic Parcel Model: A deterministic parameterization of stochastically entraining convection

    DOE PAGES

    Romps, David M.

    2016-03-01

    Convective entrainment is a process that is poorly represented in existing convective parameterizations. By many estimates, convective entrainment is the leading source of error in global climate models. As a potential remedy, an Eulerian implementation of the Stochastic Parcel Model (SPM) is presented here as a convective parameterization that treats entrainment in a physically realistic and computationally efficient way. Drawing on evidence that convecting clouds comprise air parcels subject to Poisson-process entrainment events, the SPM calculates the deterministic limit of an infinite number of such parcels. For computational efficiency, the SPM groups parcels at each height by their purity, which is a measure of their total entrainment up to that height. This reduces the calculation of convective fluxes to a sequence of matrix multiplications. The SPM is implemented in a single-column model and compared with a large-eddy simulation of deep convection.
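
    The Poisson-process entrainment picture can be illustrated with a small Monte Carlo sketch: an ensemble of parcels whose purity is diluted at random entrainment events approximates, in the mean, the deterministic limit that the SPM computes directly. The entrainment rate, mixing fraction, and level spacing below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def parcel_purities(n_parcels=10000, n_levels=50, dz=100.0, entrain_rate=1e-3, seed=0):
    """Monte Carlo of rising parcels subject to Poisson-process entrainment.

    Each parcel entrains a Poisson-distributed number of times per level
    (mean entrain_rate*dz); each event mixes in a fixed fraction of environmental
    air, reducing the parcel's purity (fraction of original cloud-base air)."""
    rng = np.random.default_rng(seed)
    mix_fraction = 0.2
    purity = np.ones(n_parcels)
    mean_purity = []
    for _ in range(n_levels):
        events = rng.poisson(entrain_rate * dz, size=n_parcels)  # entrainment events at this level
        purity *= (1.0 - mix_fraction) ** events
        mean_purity.append(purity.mean())
    return np.array(mean_purity)

profile = parcel_purities()
print(profile[::10])   # ensemble-mean purity sampled every 1 km
```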

  3. Capturing flood-to-drought transitions in regional climate model simulations

    NASA Astrophysics Data System (ADS)

    Anders, Ivonne; Haslinger, Klaus; Hofstätter, Michael; Salzmann, Manuela; Resch, Gernot

    2017-04-01

    In previous studies, atmospheric cyclones have been investigated in terms of related precipitation extremes in Central Europe. Mediterranean (Vb-like) cyclones are of special relevance, as they are frequently related to high atmospheric moisture fluxes leading to floods and landslides in the Alpine region. Another focus in this area is on droughts, which affect soil moisture as well as surface and sub-surface runoff. Such events develop differently depending on the pre-existing saturation of water in the soil. In a first step we investigated two time periods that encompass a flood event and a subsequent drought on very different time scales: one long-lasting transition (2002/2003) and a rather short one between May and August 2013. In a second step we extended the investigation to the long time period 1950-2016. We focused on high spatial and temporal scales and assessed the currently achievable accuracy in the simulation of the Vb-events on one hand and the following drought events on the other hand. The state-of-the-art regional climate model CCLM is applied in hindcast mode, simulating the single events described above, but also the period from 1948 to 2016 in order to evaluate whether the results from the short runs are valid for the long time period. Besides the conventional forcing of the regional climate model at its lateral boundaries, a spectral nudging technique is applied. The simulations covering the European domain have been run with systematically varied model parameters. The resulting precipitation amounts have been compared to the E-OBS gridded European precipitation data set and a recent high-spatial-resolution precipitation data set for Austria (GPARD-6). For the drought events, the Standardized Precipitation Evapotranspiration Index (SPEI), soil moisture, and runoff have been investigated. Varying the spectral nudging setup helps us to understand the 3D processes during these events, but also to identify model deficiencies. Improving the simulation of such past events also improves the ability to assess a climate change signal in the recent and far future.

  4. Earthquake Monitoring with the MyShake Global Smartphone Seismic Network

    NASA Astrophysics Data System (ADS)

    Inbal, A.; Kong, Q.; Allen, R. M.; Savran, W. H.

    2017-12-01

    Smartphone arrays have the potential to significantly improve seismic monitoring in sparsely instrumented urban areas. This approach benefits from the dense spatial coverage of users, as well as from the communication and computational capabilities built into smartphones, which facilitate the transfer and analysis of large volumes of seismic data. Advantages in data acquisition with smartphones trade off against factors such as the low-quality sensors installed in phones, high noise levels, and strong network heterogeneity, all of which limit effective seismic monitoring. Here we utilize network and array-processing schemes to assess event detectability with the MyShake global smartphone network. We examine the benefits of using this network in either triggered or continuous modes of operation. A global database of ground motions measured on stationary phones triggered by M2-6 events is used to establish detection probabilities. We find that the probability of detecting an M=3 event with a single phone located <10 km from the epicenter exceeds 70%. Due to the sensor's self-noise, smaller magnitude events at short epicentral distances are very difficult to detect. To increase the signal-to-noise ratio, we employ array back-projection techniques on continuous data recorded by thousands of phones. In this class of methods, the array is used as a spatial filter that suppresses signals emitted from shallow noise sources. Filtered traces are stacked to further enhance seismic signals from deep sources. We benchmark our technique against traditional location algorithms using recordings from California, a region with a large MyShake user database. We find that locations derived from back-projection images of M 3 events recorded by >20 nearby phones closely match the regional catalog locations. We use simulated broadband seismic data to examine how location uncertainties vary with user distribution and noise levels. To this end, we have developed an empirical noise model for the metropolitan Los Angeles (LA) area. We find that densities larger than 100 stationary phones/km2 are required to accurately locate M 2 events in the LA basin. Given the projected MyShake user distribution, that condition may be met within the next few years.

  5. Geometry and Pore Pressure Shape the Pattern of the Tectonic Tremors Activity on the Deep San Andreas Fault with Periodic, Period-Multiplying Recurrence Intervals

    NASA Astrophysics Data System (ADS)

    Mele Veedu, D.; Barbot, S.

    2014-12-01

    A never before recorded pattern of periodic, chaotic, and doubled, earthquake recurrence intervals was detected in the sequence of deep tectonic tremors of the Parkfield segment of the San Andreas Fault (Shelly, 2010). These observations may be the most puzzling seismological observations of the last decade: The pattern was regularly oscillating with a period doubling of 3 and 6 days from mid-2003 until it was disrupted by the 2004 Mw 6.0 Parkfield earthquake. But by the end of 2007, the previous pattern resumed. Here, we assume that the complex dynamics of the tremors is caused by slip on a single asperity on the San Andreas Fault with homogeneous friction properties. We developed a three-dimensional model based on the rate-and-state friction law with a single patch and simulated fault slip during all stages of the earthquake cycle using the boundary integral method of Lapusta & Liu (2009). We find that homogeneous penny-shaped asperities cannot induce the observed period doubling, and that the geometry itself of the velocity-weakening asperity is critical in enabling the characteristic behavior of the Parkfield tremors. We also find that the system is sensitive to perturbations in pore pressure, such that the ones induced by the 2004 Parkfield earthquake are sufficient to dramatically alter the dynamics of the tremors for two years, as observed by Shelly (2010). An important finding is that tremor magnitude is amplified more by macroscopic slip duration on the source asperity than by slip amplitude, indicative of a time-dependent process for the breakage of micro-asperities that leads to seismic emissions. Our simulated event duration is in the range of 25 to 150 seconds, closely comparable to the event duration of a typical Parkfield tectonic tremor. Our simulations reproduce the unique observations of the Parkfield tremor activity. This study vividly illustrates the critical role of geometry in shaping the dynamics of fault slip evolution on a seismogenic fault.

  6. Identifying the most hazardous synoptic meteorological conditions for Winter UK PM10 exceedences

    NASA Astrophysics Data System (ADS)

    Webber, Chris; Dacre, Helen; Collins, Bill; Masato, Giacomo

    2016-04-01

    Summary We investigate the relationship between synoptic scale meteorological variability and local scale pollution concentrations within the UK. Synoptic conditions representative of atmospheric blocking highlighted significant increases in UK PM10 concentration ([PM10]), with the probability of exceeding harmful [PM10] limits also increased. Once relationships had been diagnosed, The Met Office Unified Model (UM) was used to replicate these relationships, using idealised source regions of PM10. This helped to determine the PM10 source regions most influential throughout UK PM10 exceedance events and to test whether the model was capable of capturing the relationships between UK PM10 and atmospheric blocking. Finally, a time slice simulation for 2050-2060 helped to answer the question whether PM10 exceedance events are more likely to occur within a changing climate. Introduction Atmospheric blocking events are well understood to lead to conditions, conducive to pollution events within the UK. Literature shows that synoptic conditions with the ability to deflect the Northwest Atlantic storm track from the UK, often lead to the highest UK pollution concentrations. Rossby wave breaking (RWB) has been identified as a mechanism, which results in atmospheric blocking and its relationship with UK [PM10] is explored using metrics designed in Masato, et al., 2013. Climate simulations facilitated by the Met Office UM, enable these relationships between RWB and PM10 to be found within the model. Subsequently the frequency of events that lead to hazardous PM10 concentrations ([PM10]) in a future climate, can be determined, within a climate simulation. An understanding of the impact, meteorology has on UK [PM10] within a changing climate, will help inform policy makers, regarding the importance of limiting PM10 emissions, ensuring safe air quality in the future. Methodology and Results Three Blocking metrics were used to subset RWB into four categories. These RWB categories were all shown to increase UK [PM10] and to increase the probability of exceeding a UK [PM10] threshold, when they occurred within constrained regions. Further analysis highlighted that Omega Block events lead to the greatest probability of exceeding hazardous UK [PM10] limits. These events facilitated the advection of European PM10, while also providing stagnant conditions over the UK, facilitating PM10 accumulation. The Met Office UM was used and nudged to ERA-Interim Reanalysis wind and temperature fields, to replicate the relationships found using observed UK [PM10]. Inert tracers were implemented into the model to replicate UK PM10 source regions throughout Europe. The modelled tracers were seen to correlate well with observed [PM10] and Figure 1 highlights the correlations between a RWB metric and observed (a) and modelled (b) [PM10]. A further free running model simulation highlighted the deficiency of the Met Office UM in capturing RWB frequency, with a reduction over the Northwest Atlantic/ European region. A final time slice simulation was undertaken for the period 2050-2060, using Representative Concentration Pathway 8.5, which attempted to determine the change in frequency of UK PM10 exceedance events, due to changing meteorology, in a future climate. Conclusions RWB has been shown to increase UK [PM10] and to lead to greater probabilities of exceeding a harmful [PM10] threshold. 
Omega block events were determined to be the most hazardous RWB subset, owing to a combination of advection from Europe and stagnation over the UK. Simulations within the Met Office UM were undertaken, and the relationships seen between observed UK [PM10] and RWB were replicated within the model using inert tracers. Finally, time slice simulations were undertaken to determine the change in frequency of UK [PM10] exceedance events within a changing climate. References: Masato, G., Hoskins, B. J., Woollings, T., 2013: Wave-breaking Characteristics of Northern Hemisphere Winter Blocking: A Two-Dimensional Approach. J. Climate, 26, 4535-4549.

  7. Sensitivity of the coastal tsunami simulation to the complexity of the 2011 Tohoku earthquake source model

    NASA Astrophysics Data System (ADS)

    Monnier, Angélique; Loevenbruck, Anne; Gailler, Audrey; Hébert, Hélène

    2016-04-01

    The 11 March 2011 Tohoku-Oki event, both the earthquake and the tsunami, is exceptionally well documented. A wide range of onshore and offshore data has been recorded by seismic, geodetic, ocean-bottom pressure, and sea level sensors. Along with these numerous observations, advances in inversion techniques and computing facilities have led to many source studies. Inversions of rupture parameters such as the slip distribution and rupture history permit estimation of the complex coseismic seafloor deformation. From the numerous published seismic source studies, the most relevant coseismic source models are tested. Comparing the predicted signals generated using both static and kinematic ruptures with the offshore and coastal measurements helps determine which source model should be used to obtain the most consistent coastal tsunami simulations. This work is funded by the TANDEM project, reference ANR-11-RSNR-0023-01 of the French Programme Investissements d'Avenir (PIA 2014-2018).

  8. Adaptive constructive processes and memory accuracy: Consequences of counterfactual simulations in young and older adults

    PubMed Central

    Gerlach, Kathy D.; Dornblaser, David W.; Schacter, Daniel L.

    2013-01-01

    People frequently engage in counterfactual thinking: mental simulations of alternative outcomes to past events. Like simulations of future events, counterfactual simulations serve adaptive functions. However, future simulation can also result in various kinds of distortions and has thus been characterized as an adaptive constructive process. Here we approach counterfactual thinking as such and examine whether it can distort memory for actual events. In Experiments 1a/b, young and older adults imagined themselves experiencing different scenarios. Participants then imagined the same scenario again, engaged in no further simulation of a scenario, or imagined a counterfactual outcome. On a subsequent recognition test, participants were more likely to make false alarms to counterfactual lures than novel scenarios. Older adults were more prone to these memory errors than younger adults. In Experiment 2, younger and older participants selected and performed different actions, then recalled performing some of those actions, imagined performing alternative actions to some of the selected actions, and did not imagine others. Participants, especially older adults, were more likely to falsely remember counterfactual actions than novel actions as previously performed. The findings suggest that counterfactual thinking can cause source confusion based on internally generated misinformation, consistent with its characterization as an adaptive constructive process. PMID:23560477

  9. Adaptive constructive processes and memory accuracy: consequences of counterfactual simulations in young and older adults.

    PubMed

    Gerlach, Kathy D; Dornblaser, David W; Schacter, Daniel L

    2014-01-01

    People frequently engage in counterfactual thinking: mental simulations of alternative outcomes to past events. Like simulations of future events, counterfactual simulations serve adaptive functions. However, future simulation can also result in various kinds of distortions and has thus been characterised as an adaptive constructive process. Here we approach counterfactual thinking as such and examine whether it can distort memory for actual events. In Experiments 1a/b young and older adults imagined themselves experiencing different scenarios. Participants then imagined the same scenario again, engaged in no further simulation of a scenario, or imagined a counterfactual outcome. On a subsequent recognition test participants were more likely to make false alarms to counterfactual lures than novel scenarios. Older adults were more prone to these memory errors than younger adults. In Experiment 2 younger and older participants selected and performed different actions, then recalled performing some of those actions, imagined performing alternative actions to some of the selected actions, and did not imagine others. Participants, especially older adults, were more likely to falsely remember counterfactual actions than novel actions as previously performed. The findings suggest that counterfactual thinking can cause source confusion based on internally generated misinformation, consistent with its characterisation as an adaptive constructive process.

  10. Simulating Chemical-Induced Injury Using Virtual Hepatic Tissues

    EPA Science Inventory

    Chemical-induced liver injury involves a dynamic sequence of events that span multiple levels of biological organization. Current methods for testing the toxicity of a single chemical can cost millions of dollars, take up to two years and sacrifice thousands of animals. It is dif...

  11. Resolving source mechanisms of microseismic swarms induced by solution mining

    NASA Astrophysics Data System (ADS)

    Kinscher, J.; Cesca, S.; Bernard, P.; Contrucci, I.; Mangeney, A.; Piguet, J. P.; Bigarré, P.

    2016-07-01

    In order to improve our understanding of hazardous underground cavities, the development and collapse of a ˜200 m wide salt solution mining cavity was seismically monitored in the Lorraine basin in northeastern France. The microseismic events show a swarm-like behaviour, with clustering sequences lasting from seconds to days, and distinct spatiotemporal migration. Observed microseismic signals are interpreted as the result of detachment and block breakage processes occurring at the cavity roof. Body wave amplitude patterns indicated the presence of relatively stable source mechanisms, either associated with dip-slip and/or tensile faulting. Signal overlaps during swarm activity due to short interevent times, the high-frequency geophone recordings and the limited network station coverage often limit the application of classical source analysis techniques. To overcome these shortcomings, we investigated the source mechanisms through different procedures including modelling of observed and synthetic waveforms and amplitude spectra of some well-located events, as well as modelling of peak-to-peak amplitude ratios for the majority of the detected events. We extended the latter approach to infer the average source mechanism of many swarming events at once, using multiple events recorded at a single three component station. This methodology is applied here for the first time and represents a useful tool for source studies of seismic swarms and seismicity clusters. The results obtained with different methods are consistent and indicate that the source mechanisms for at least 50 per cent of the microseismic events are remarkably stable, with a predominant thrust faulting regime with faults similarly oriented, striking NW-SE and dipping around 35°-55°. This dominance of consistent source mechanisms might be related to the presence of a preferential direction of pre-existing crack or fault structures. As an interesting byproduct, we demonstrate, for the first time directly on seismic data, that the source radiation pattern significantly controls the detection capability of a seismic station and network.

  12. Shortcomings in ground testing, environment simulations, and performance predictions for space applications

    NASA Technical Reports Server (NTRS)

    Stassinopoulos, E. G.; Brucker, G. J.

    1992-01-01

    This paper addresses the issues involved in radiation testing of devices and subsystems to obtain the data that are required to predict the performance and survivability of satellite systems for extended missions in space. The problems associated with space environmental simulations, or the lack thereof, in experiments intended to produce information to describe the degradation and behavior of parts and systems are discussed. Several types of radiation effects in semiconductor components are presented, as for example: ionization dose effects, heavy ion and proton induced Single Event Upsets (SEUs), and Single Event Transient Upsets (SETUs). Examples and illustrations of data relating to these ground testing issues are provided. The primary objective of this presentation is to alert the reader to the shortcomings, pitfalls, variabilities, and uncertainties in acquiring information to logically design electronic subsystems for use in satellites or space stations with long mission lifetimes, and to point out the weaknesses and deficiencies in the methods and procedures by which that information is obtained.

  13. Single-Event Transient Response of Comparator Pre-Amplifiers in a Complementary SiGe Technology

    NASA Astrophysics Data System (ADS)

    Ildefonso, Adrian; Lourenco, Nelson E.; Fleetwood, Zachary E.; Wachter, Mason T.; Tzintzarov, George N.; Cardoso, Adilson S.; Roche, Nicolas J.-H.; Khachatrian, Ani; McMorrow, Dale; Buchner, Stephen P.; Warner, Jeffrey H.; Paki, Pauline; Kaynak, Mehmet; Tillack, Bernd; Cressler, John D.

    2017-01-01

    The single-event transient (SET) response of the pre-amplification stage of two latched comparators designed using either npn or pnp silicon-germanium heterojunction bipolar transistors (SiGe HBTs) is investigated via two-photon absorption (TPA) carrier injection and mixed-mode TCAD simulations. Experimental data and TCAD simulations showed an improved SET response for the pnp comparator circuit. 2-D raster scans revealed that the devices in the pnp circuit exhibit a reduction in sensitive area of up to 80% compared to their npn counterparts. In addition, by sweeping the input voltage, the sensitive operating region with respect to SETs was determined. By establishing a figure-of-merit, relating the transient peaks and input voltage polarities, the pnp device was determined to have a 21.4% improved response with respect to input voltage. This study has shown that using pnp devices is an effective way to mitigate SETs, and could enable further radiation-hardening-by-design techniques.

  14. Itzï (version 17.1): an open-source, distributed GIS model for dynamic flood simulation

    NASA Astrophysics Data System (ADS)

    Courty, Laurent Guillaume; Pedrozo-Acuña, Adrián; Bates, Paul David

    2017-05-01

    Worldwide, floods are acknowledged as one of the most destructive hazards. In human-dominated environments, their negative impacts are ascribed not only to the increase in frequency and intensity of floods but also to a strong feedback between the hydrological cycle and anthropogenic development. In order to advance a more comprehensive understanding of this complex interaction, this paper presents the development of a new open-source tool named Itzï that enables the 2-D numerical modelling of rainfall-runoff processes and surface flows integrated with the open-source geographic information system (GIS) software known as GRASS. Therefore, it takes advantage of the ability given by GIS environments to handle datasets with variations in both temporal and spatial resolutions. Furthermore, the presented numerical tool can handle datasets from different sources with varied spatial resolutions, facilitating the preparation and management of input and forcing data. This ability reduces the preprocessing time usually required by other models. Itzï uses a simplified form of the shallow water equations, the damped partial inertia equation, for the resolution of surface flows, and the Green-Ampt model for the infiltration. The source code is now publicly available online, along with complete documentation. The numerical model is verified against three different test cases: firstly, a comparison with an analytic solution of the shallow water equations is introduced; secondly, a hypothetical flooding event in an urban area is implemented, where results are compared to those from an established model using a similar approach; and lastly, the reproduction of a real inundation event that occurred in the city of Kingston upon Hull, UK, in June 2007, is presented. The numerical approach proved its ability to reproduce the analytic and synthetic test cases. Moreover, simulation results of the real flood event showed its suitability for identifying areas affected by flooding, which were verified against those recorded after the event by local authorities.
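    As a rough illustration of only the infiltration component named above, the following is a minimal, self-contained sketch of a Green-Ampt capacity-limited infiltration update. It is not the Itzï implementation, and all parameter values (saturated conductivity, suction head, moisture deficit, rainfall rate) are illustrative.

      # Minimal Green-Ampt infiltration sketch (illustrative parameters, not Itzï code).
      def green_ampt_capacity(k_s, psi_f, d_theta, f_cum):
          """Infiltration capacity f = K_s * (1 + psi_f * d_theta / F) [m/s]."""
          if f_cum <= 0.0:
              return float("inf")  # at the very start the capacity is effectively unlimited
          return k_s * (1.0 + psi_f * d_theta / f_cum)

      def infiltrate(rain_rate, dt, n_steps, k_s=1e-6, psi_f=0.11, d_theta=0.3):
          """Explicit update of cumulative infiltration F [m] under a constant rain rate [m/s]."""
          f_cum = 1e-6
          for _ in range(n_steps):
              rate = min(rain_rate, green_ampt_capacity(k_s, psi_f, d_theta, f_cum))
              f_cum += rate * dt  # infiltration limited by supply or by capacity
          return f_cum

      # Example: 2 h of 20 mm/h rainfall, 60 s time steps.
      total = infiltrate(rain_rate=20e-3 / 3600.0, dt=60.0, n_steps=120)
      print(f"cumulative infiltration after 2 h: {total * 1000.0:.1f} mm")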

  15. Serial Founder Effects During Range Expansion: A Spatial Analog of Genetic Drift

    PubMed Central

    Slatkin, Montgomery; Excoffier, Laurent

    2012-01-01

    Range expansions cause a series of founder events. We show that, in a one-dimensional habitat, these founder events are the spatial analog of genetic drift in a randomly mating population. The spatial series of allele frequencies created by successive founder events is equivalent to the time series of allele frequencies in a population of effective size ke, the effective number of founders. We derive an expression for ke in a discrete-population model that allows for local population growth and migration among established populations. If there is selection, the net effect is determined approximately by the product of the selection coefficients and the number of generations between successive founding events. We use the model of a single population to compute analytically several quantities for an allele present in the source population: (i) the probability that it survives the series of colonization events, (ii) the probability that it reaches a specified threshold frequency in the last population, and (iii) the mean and variance of the frequencies in each population. We show that the analytic theory provides a good approximation to simulation results. A consequence of our approximation is that the average heterozygosity of neutral alleles decreases by a factor of 1 – 1/(2ke) in each new population. Therefore, the population genetic consequences of surfing can be predicted approximately by the effective number of founders and the effective selection coefficients, even in the presence of migration among populations. We also show that our analytic results are applicable to a model of range expansion in a continuously distributed population. PMID:22367031

  16. Serial founder effects during range expansion: a spatial analog of genetic drift.

    PubMed

    Slatkin, Montgomery; Excoffier, Laurent

    2012-05-01

    Range expansions cause a series of founder events. We show that, in a one-dimensional habitat, these founder events are the spatial analog of genetic drift in a randomly mating population. The spatial series of allele frequencies created by successive founder events is equivalent to the time series of allele frequencies in a population of effective size ke, the effective number of founders. We derive an expression for ke in a discrete-population model that allows for local population growth and migration among established populations. If there is selection, the net effect is determined approximately by the product of the selection coefficients and the number of generations between successive founding events. We use the model of a single population to compute analytically several quantities for an allele present in the source population: (i) the probability that it survives the series of colonization events, (ii) the probability that it reaches a specified threshold frequency in the last population, and (iii) the mean and variance of the frequencies in each population. We show that the analytic theory provides a good approximation to simulation results. A consequence of our approximation is that the average heterozygosity of neutral alleles decreases by a factor of 1-1/(2ke) in each new population. Therefore, the population genetic consequences of surfing can be predicted approximately by the effective number of founders and the effective selection coefficients, even in the presence of migration among populations. We also show that our analytic results are applicable to a model of range expansion in a continuously distributed population.
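    As a quick numerical check of the decay factor quoted in the two records above, here is a minimal Wright-Fisher-style sketch in which each founding event samples 2*k_e gene copies binomially from the previous population, so that expected heterozygosity shrinks by a factor of 1 - 1/(2*k_e) per event. The effective number of founders, initial allele frequency, number of events and replicate count are all illustrative, and migration and selection are omitted.

      # Heterozygosity decay across successive founder events (illustrative sketch).
      import random

      def mean_heterozygosity(p0=0.5, k_e=25, n_events=40, reps=2000):
          """Average 2p(1-p) after each of n_events founder events of k_e diploid founders."""
          het = [0.0] * (n_events + 1)
          for _ in range(reps):
              p = p0
              het[0] += 2.0 * p * (1.0 - p)
              for t in range(1, n_events + 1):
                  # each founding event draws 2*k_e gene copies from the previous deme
                  p = sum(random.random() < p for _ in range(2 * k_e)) / (2.0 * k_e)
                  het[t] += 2.0 * p * (1.0 - p)
          return [h / reps for h in het]

      k_e, n_events = 25, 40
      simulated = mean_heterozygosity(k_e=k_e, n_events=n_events)[-1]
      predicted = 0.5 * (1.0 - 1.0 / (2.0 * k_e)) ** n_events
      print(f"simulated H after {n_events} events: {simulated:.3f}, predicted: {predicted:.3f}")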

  17. Simulation and source identification of X-ray contrast media in the water cycle of Berlin.

    PubMed

    Knodel, J; Geissen, S-U; Broll, J; Dünnbier, U

    2011-11-01

    This article describes the development of a model to simulate the fate of iodinated X-ray contrast media (XRC) in the water cycle of the German capital, Berlin. It also handles data uncertainties concerning the different amounts and sources of XRC input, using district-level source densities for XRC usage by inhabitants, hospitals, and radiologists. In addition, different degradation rates for the behavior of the adsorbable organic iodine (AOI) were investigated in individual water compartments. The model consists of mass balances and includes, in addition to naturally branched bodies of water, the water distribution network between waterways and wastewater treatment plants, which are coupled to natural surface waters at numerous points. Scenarios were calculated according to the data uncertainties and were statistically evaluated to identify the scenario with the highest agreement with the provided measurement data. The simulation of X-ray contrast media in the water cycle of Berlin showed that medical institutions have to be considered as point sources for congested urban areas due to their high levels of X-ray contrast media emission. The calculations identified hospitals, represented by their capacity (number of hospital beds), as the most relevant point sources, while the inhabitants served as important diffuse sources. Deployed for almost inert substances like contrast media, the model can be used for qualitative statements and, therefore, as a decision-support tool. Copyright © 2011 Elsevier Ltd. All rights reserved.
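    To make the mass-balance idea above concrete, here is a minimal sketch in which a district's daily XRC load is built from point sources (hospitals, represented by bed counts) and diffuse sources (inhabitants), and then routed through a chain of water compartments, each removing a fixed fraction. This is not the authors' model; the structure is only a guess at the general idea, and every number below is illustrative.

      # Illustrative district mass balance for a near-inert tracer such as XRC (not the published model).
      def district_load(beds, per_bed_g_per_day, population, per_capita_g_per_day):
          """Daily emission [g/d]: hospitals as point sources plus inhabitants as diffuse sources."""
          return beds * per_bed_g_per_day + population * per_capita_g_per_day

      def route(load_g_per_day, removal_fractions):
          """Pass the load through successive compartments (e.g. WWTP, river reaches)."""
          for frac in removal_fractions:
              load_g_per_day *= (1.0 - frac)
          return load_g_per_day

      emitted = district_load(beds=1200, per_bed_g_per_day=0.5,
                              population=250_000, per_capita_g_per_day=0.002)
      remaining = route(emitted, removal_fractions=[0.10, 0.05, 0.05])
      print(f"emitted: {emitted:.0f} g/d, remaining after routing: {remaining:.0f} g/d")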

  18. Adapting the serial Alpgen parton-interaction generator to simulate LHC collisions on millions of parallel threads

    NASA Astrophysics Data System (ADS)

    Childers, J. T.; Uram, T. D.; LeCompte, T. J.; Papka, M. E.; Benjamin, D. P.

    2017-01-01

    As the LHC moves to higher energies and luminosity, the demand for computing resources increases accordingly and will soon outpace the growth of the Worldwide LHC Computing Grid. To meet this greater demand, Monte Carlo event generation was targeted for adaptation to run on Mira, the supercomputer at the Argonne Leadership Computing Facility. Alpgen is a Monte Carlo event generation application that is used by LHC experiments in the simulation of collisions that take place in the Large Hadron Collider. This paper details the process by which Alpgen was adapted from a single-processor serial application to a large-scale parallel application and the performance that was achieved.

  19. Dissociation of single-strand DNA: single-walled carbon nanotube hybrids by Watson-Crick base-pairing.

    PubMed

    Jung, Seungwon; Cha, Misun; Park, Jiyong; Jeong, Namjo; Kim, Gunn; Park, Changwon; Ihm, Jisoon; Lee, Junghoon

    2010-08-18

    It has been known that single-strand DNA wraps around a single-walled carbon nanotube (SWNT) by pi-stacking. In this paper it is demonstrated that such DNA is dissociated from the SWNT by Watson-Crick base-pairing with a complementary sequence. Measurement of field effect transistor characteristics indicates a shift of the electrical properties as a result of this "unwrapping" event. We further confirm the suggested process through Raman spectroscopy and gel electrophoresis. Experimental results are verified in view of atomistic mechanisms with molecular dynamics simulations and binding energy analyses.

  20. Passenger rail security, planning, and resilience: application of network, plume, and economic simulation models as decision support tools.

    PubMed

    Greenberg, Michael; Lioy, Paul; Ozbas, Birnur; Mantell, Nancy; Isukapalli, Sastry; Lahr, Michael; Altiok, Tayfur; Bober, Joseph; Lacy, Clifton; Lowrie, Karen; Mayer, Henry; Rovito, Jennifer

    2013-11-01

    We built three simulation models that can assist rail transit planners and operators to evaluate high and low probability rail-centered hazard events that could lead to serious consequences for rail-centered networks and their surrounding regions. Our key objective is to provide these models to users who, through planning with these models, can prevent events or more effectively react to them. The first of the three models is an industrial systems simulation tool that closely replicates rail passenger traffic flows between New York Penn Station and Trenton, New Jersey. Second, we built and used a line source plume model to trace chemical plumes released by a slow-moving freight train that could impact rail passengers, as well as people in surrounding areas. Third, we crafted an economic simulation model that estimates the regional economic consequences of a variety of rail-related hazard events through the year 2020. Each model can work independently of the others. However, used together they help provide a coherent story about what could happen and set the stage for planning that should make rail-centered transport systems more resistant and resilient to hazard events. We highlight the limitations and opportunities presented by using these models individually or in sequence. © 2013 Society for Risk Analysis.

  1. Passenger Rail Security, Planning, and Resilience: Application of Network, Plume, and Economic Simulation Models as Decision Support Tools

    PubMed Central

    Greenberg, Michael; Lioy, Paul; Ozbas, Birnur; Mantell, Nancy; Isukapalli, Sastry; Lahr, Michael; Altiok, Tayfur; Bober, Joseph; Lacy, Clifton; Lowrie, Karen; Mayer, Henry; Rovito, Jennifer

    2014-01-01

    We built three simulation models that can assist rail transit planners and operators to evaluate high and low probability rail-centered hazard events that could lead to serious consequences for rail-centered networks and their surrounding regions. Our key objective is to provide these models to users who, through planning with these models, can prevent events or more effectively react to them. The first of the three models is an industrial systems simulation tool that closely replicates rail passenger traffic flows between New York Penn Station and Trenton, New Jersey. Second, we built and used a line source plume model to trace chemical plumes released by a slow-moving freight train that could impact rail passengers, as well as people in surrounding areas. Third, we crafted an economic simulation model that estimates the regional economic consequences of a variety of rail-related hazard events through the year 2020. Each model can work independently of the others. However, used together they help provide a coherent story about what could happen and set the stage for planning that should make rail-centered transport systems more resistant and resilient to hazard events. We highlight the limitations and opportunities presented by using these models individually or in sequence. PMID:23718133

  2. Ionization Electron Signal Processing in Single Phase LArTPCs II. Data/Simulation Comparison and Performance in MicroBooNE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adams, C.; et al.

    The single-phase liquid argon time projection chamber (LArTPC) provides a large amount of detailed information in the form of fine-grained drifted ionization charge from particle traces. To fully utilize this information, the deposited charge must be accurately extracted from the raw digitized waveforms via a robust signal processing chain. Enabled by the ultra-low noise levels associated with cryogenic electronics in the MicroBooNE detector, the precise extraction of ionization charge from the induction wire planes in a single-phase LArTPC is qualitatively demonstrated on MicroBooNE data with event display images, and quantitatively demonstrated via waveform-level and track-level metrics. Improved performance of induction plane calorimetry is demonstrated through the agreement of extracted ionization charge measurements across different wire planes for various event topologies. In addition to the comprehensive waveform-level comparison of data and simulation, a calibration of the cryogenic electronics response is presented and solutions to various MicroBooNE-specific TPC issues are discussed. This work presents an important improvement in LArTPC signal processing, the foundation of reconstruction and therefore physics analyses in MicroBooNE.

  3. Measurement of the single π0 production rate in neutral current neutrino interactions on water

    NASA Astrophysics Data System (ADS)

    Abe, K.; Amey, J.; Andreopoulos, C.; Antonova, M.; Aoki, S.; Ariga, A.; Ashida, Y.; Assylbekov, S.; Autiero, D.; Ban, S.; Barbi, M.; Barker, G. J.; Barr, G.; Barry, C.; Bartet-Friburg, P.; Batkiewicz, M.; Berardi, V.; Berkman, S.; Bhadra, S.; Bienstock, S.; Blondel, A.; Bolognesi, S.; Bordoni, S.; Boyd, S. B.; Brailsford, D.; Bravar, A.; Bronner, C.; Buizza Avanzini, M.; Calland, R. G.; Campbell, T.; Cao, S.; Cartwright, S. L.; Castillo, R.; Catanesi, M. G.; Cervera, A.; Chappell, A.; Checchia, C.; Cherdack, D.; Chikuma, N.; Christodoulou, G.; Clifton, A.; Coleman, J.; Collazuol, G.; Coplowe, D.; Cremonesi, L.; Cudd, A.; Dabrowska, A.; De Rosa, G.; Dealtry, T.; Denner, P. F.; Dennis, S. R.; Densham, C.; Dewhurst, D.; Di Lodovico, F.; Di Luise, S.; Dolan, S.; Drapier, O.; Duffy, K. E.; Dumarchez, J.; Dunkman, M.; Dunne, P.; Dziewiecki, M.; Emery-Schrenk, S.; Ereditato, A.; Feusels, T.; Finch, A. J.; Fiorentini, G. A.; Friend, M.; Fujii, Y.; Fukuda, D.; Fukuda, Y.; Furmanski, A. P.; Galymov, V.; Garcia, A.; Giffin, S. G.; Giganti, C.; Gilje, K.; Gizzarelli, F.; Golan, T.; Gonin, M.; Grant, N.; Hadley, D. R.; Haegel, L.; Haigh, J. T.; Hamilton, P.; Hansen, D.; Harada, J.; Hara, T.; Hartz, M.; Hasegawa, T.; Hastings, N. C.; Hayashino, T.; Hayato, Y.; Helmer, R. L.; Hierholzer, M.; Hillairet, A.; Himmel, A.; Hiraki, T.; Hiramoto, A.; Hirota, S.; Hogan, M.; Holeczek, J.; Hosomi, F.; Huang, K.; Ichikawa, A. K.; Ieki, K.; Ikeda, M.; Imber, J.; Insler, J.; Intonti, R. A.; Irvine, T. J.; Ishida, T.; Ishii, T.; Iwai, E.; Iwamoto, K.; Izmaylov, A.; Jacob, A.; Jamieson, B.; Jiang, M.; Johnson, S.; Jo, J. H.; Jonsson, P.; Jung, C. K.; Kabirnezhad, M.; Kaboth, A. C.; Kajita, T.; Kakuno, H.; Kameda, J.; Karlen, D.; Karpikov, I.; Katori, T.; Kearns, E.; Khabibullin, M.; Khotjantsev, A.; Kielczewska, D.; Kikawa, T.; Kim, H.; Kim, J.; King, S.; Kisiel, J.; Knight, A.; Knox, A.; Kobayashi, T.; Koch, L.; Koga, T.; Koller, P. P.; Konaka, A.; Kondo, K.; Kopylov, A.; Kormos, L. L.; Korzenev, A.; Koshio, Y.; Kowalik, K.; Kropp, W.; Kudenko, Y.; Kurjata, R.; Kutter, T.; Lagoda, J.; Lamont, I.; Lamoureux, M.; Larkin, E.; Lasorak, P.; Laveder, M.; Lawe, M.; Lazos, M.; Licciardi, M.; Lindner, T.; Liptak, Z. J.; Litchfield, R. P.; Li, X.; Longhin, A.; Lopez, J. P.; Lou, T.; Ludovici, L.; Lu, X.; Magaletti, L.; Mahn, K.; Malek, M.; Manly, S.; Maret, L.; Marino, A. D.; Marteau, J.; Martin, J. F.; Martins, P.; Martynenko, S.; Maruyama, T.; Matveev, V.; Mavrokoridis, K.; Ma, W. Y.; Mazzucato, E.; McCarthy, M.; McCauley, N.; McFarland, K. S.; McGrew, C.; Mefodiev, A.; Metelko, C.; Mezzetto, M.; Mijakowski, P.; Minamino, A.; Mineev, O.; Mine, S.; Missert, A.; Miura, M.; Moriyama, S.; Morrison, J.; Mueller, Th. A.; Murphy, S.; Myslik, J.; Nakadaira, T.; Nakahata, M.; Nakamura, K. G.; Nakamura, K.; Nakamura, K. D.; Nakanishi, Y.; Nakayama, S.; Nakaya, T.; Nakayoshi, K.; Nantais, C.; Nielsen, C.; Nirkko, M.; Nishikawa, K.; Nishimura, Y.; Novella, P.; Nowak, J.; O'Keeffe, H. M.; Ohta, R.; Okumura, K.; Okusawa, T.; Oryszczak, W.; Oser, S. M.; Ovsyannikova, T.; Owen, R. A.; Oyama, Y.; Palladino, V.; Palomino, J. L.; Paolone, V.; Patel, N. D.; Paudyal, P.; Pavin, M.; Payne, D.; Perkin, J. D.; Petrov, Y.; Pickard, L.; Pickering, L.; Pinzon Guerra, E. S.; Pistillo, C.; Popov, B.; Posiadala-Zezula, M.; Poutissou, J.-M.; Poutissou, R.; Pritchard, A.; Przewlocki, P.; Quilain, B.; Radermacher, T.; Radicioni, E.; Ratoff, P. N.; Ravonel, M.; Rayner, M. 
A.; Redij, A.; Reinherz-Aronis, E.; Riccio, C.; Rojas, P.; Rondio, E.; Rossi, B.; Roth, S.; Rubbia, A.; Ruggeri, A. C.; Rychter, A.; Sacco, R.; Sakashita, K.; Sánchez, F.; Sato, F.; Scantamburlo, E.; Scholberg, K.; Schwehr, J.; Scott, M.; Seiya, Y.; Sekiguchi, T.; Sekiya, H.; Sgalaberna, D.; Shah, R.; Shaikhiev, A.; Shaker, F.; Shaw, D.; Shiozawa, M.; Shirahige, T.; Short, S.; Smy, M.; Sobczyk, J. T.; Sobel, H.; Sorel, M.; Southwell, L.; Stamoulis, P.; Steinmann, J.; Stewart, T.; Stowell, P.; Suda, Y.; Suvorov, S.; Suzuki, A.; Suzuki, K.; Suzuki, S. Y.; Suzuki, Y.; Tacik, R.; Tada, M.; Takahashi, S.; Takeda, A.; Takeuchi, Y.; Tamura, R.; Tanaka, H. K.; Tanaka, H. A.; Terhorst, D.; Terri, R.; Thakore, T.; Thompson, L. F.; Tobayama, S.; Toki, W.; Tomura, T.; Touramanis, C.; Tsukamoto, T.; Tzanov, M.; Uchida, Y.; Vacheret, A.; Vagins, M.; Vallari, Z.; Vasseur, G.; Vilela, C.; Vladisavljevic, T.; Wachala, T.; Wakamatsu, K.; Walter, C. W.; Wark, D.; Warzycha, W.; Wascko, M. O.; Weber, A.; Wendell, R.; Wilkes, R. J.; Wilking, M. J.; Wilkinson, C.; Wilson, J. R.; Wilson, R. J.; Wret, C.; Yamada, Y.; Yamamoto, K.; Yamamoto, M.; Yanagisawa, C.; Yano, T.; Yen, S.; Yershov, N.; Yokoyama, M.; Yoo, J.; Yoshida, K.; Yuan, T.; Yu, M.; Zalewska, A.; Zalipska, J.; Zambelli, L.; Zaremba, K.; Ziembicki, M.; Zimmerman, E. D.; Zito, M.; Żmuda, J.; T2K Collaboration

    2018-02-01

    The single π0 production rate in neutral current neutrino interactions on water in a neutrino beam with a peak neutrino energy of 0.6 GeV has been measured using the PØD, one of the subdetectors of the T2K near detector. The production rate was measured for data taking periods when the PØD contained water (2.64 × 10^20 protons-on-target) and also periods without water (3.49 × 10^20 protons-on-target). A measurement of the neutral current single π0 production rate on water is made using appropriate subtraction of the production rate with water in from the rate with water out of the target region. The subtraction analysis yields 106 ± 41 ± 69 signal events, where the uncertainties are statistical (stat.) and systematic (sys.), respectively. This is consistent with the prediction of 157 events from the nominal simulation. The measured-to-expected ratio is 0.68 ± 0.26 (stat.) ± 0.44 (sys.) ± 0.12 (flux). The nominal simulation uses a flux-integrated cross section of 7.63 × 10^-39 cm^2 per nucleon with an average neutrino interaction energy of 1.3 GeV.
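    As a simple arithmetic cross-check of the quoted numbers (assuming the ratio is simply the subtracted signal yield divided by the nominal prediction, with the flux uncertainty quoted separately), the following reproduces the central value and the stat./sys. uncertainties on the ratio:

      # Cross-check of the quoted measured-to-expected ratio (assumption: ratio = signal / prediction).
      signal, stat, sys = 106.0, 41.0, 69.0   # water-in minus water-out subtraction result
      predicted = 157.0                        # nominal simulation
      print(f"ratio             = {signal / predicted:.2f}   (quoted 0.68)")
      print(f"stat. uncertainty = {stat / predicted:.2f}   (quoted 0.26)")
      print(f"sys. uncertainty  = {sys / predicted:.2f}   (quoted 0.44)")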

  4. Single Particle Analysis by Combined Chemical Imaging to Study Episodic Air Pollution Events in Vienna

    NASA Astrophysics Data System (ADS)

    Ofner, Johannes; Eitenberger, Elisabeth; Friedbacher, Gernot; Brenner, Florian; Hutter, Herbert; Schauer, Gerhard; Kistler, Magdalena; Greilinger, Marion; Lohninger, Hans; Lendl, Bernhard; Kasper-Giebl, Anne

    2017-04-01

    The aerosol composition of a city like Vienna is characterized by a complex interaction of local emissions and atmospheric input on regional and continental scales. The identification of major aerosol constituents for basic source apportionment and air quality issues requires a high analytical effort. Exceptional episodic air pollution events strongly change the typical aerosol composition of a city like Vienna on a time scale of a few hours to several days. Analyzing the chemistry of particulate matter from these events is often hampered by the sampling time, and the related sample amount, necessary to apply the full range of bulk analytical methods needed for chemical characterization. Additionally, morphological and single-particle features are hardly accessible. Chemical imaging has evolved into a powerful tool for image-based chemical analysis of complex samples. As a complementary technique to bulk analytical methods, chemical imaging offers a new way to study air pollution events by obtaining major aerosol constituents with single-particle features at high temporal resolution and from small sample volumes. The analysis of chemical imaging datasets is assisted by multivariate statistics, with the benefit of image-based chemical structure determination for direct aerosol source apportionment. A novel approach in chemical imaging is combined chemical imaging, or so-called multisensor hyperspectral imaging, involving elemental imaging (electron microscopy-based energy dispersive X-ray imaging), vibrational imaging (Raman micro-spectroscopy) and mass spectrometric imaging (Time-of-Flight Secondary Ion Mass Spectrometry) with subsequent combined multivariate analytics. Combined chemical imaging of precipitated aerosol particles will be demonstrated with the following examples of air pollution events in Vienna: exceptional episodic events such as the transformation of Saharan dust by the impact of the city of Vienna will be discussed and compared to samples obtained at a high alpine background site (Sonnblick Observatory, Saharan dust event of April 2016). Further, chemical imaging of biological aerosol constituents of an autumnal pollen outbreak in Vienna, with background samples from nearby locations from November 2016, will demonstrate the advantages of the chemical imaging approach. Additionally, the chemical fingerprint of an exceptional air pollution event from a local emission source, caused by the demolition of a building in Vienna, will illustrate the need for multisensor imaging, and in particular for the combined approach. The obtained chemical images will be correlated with bulk analytical results. The benefits of the overall methodological approach, combining bulk analytics and combined chemical imaging of exceptional episodic air pollution events, will be discussed.

  5. Seismicity of Central Asia as Observed on Three IMS Stations

    DTIC Science & Technology

    2008-09-01

    and BVAR are all high-quality seismic arrays. Noise levels at the stations are generally acceptable for the period reviewed, except during the...following conditions: (1) a 4.5-Hz intermittent noise source at MKAR, (2) periodic high-frequency bursts on portions of the SONM array, and (3) a...seismic events (including single station events) observable on three central Asian IMS seismic array stations: Makanchi, Kazakhstan (MKAR); Songino

  6. Double point source W-phase inversion: Real-time implementation and automated model selection

    USGS Publications Warehouse

    Nealy, Jennifer; Hayes, Gavin

    2015-01-01

    Rapid and accurate characterization of an earthquake source is an extremely important and ever evolving field of research. Within this field, source inversion of the W-phase has recently been shown to be an effective technique, which can be efficiently implemented in real-time. An extension to the W-phase source inversion is presented in which two point sources are derived to better characterize complex earthquakes. A single source inversion followed by a double point source inversion with centroid locations fixed at the single source solution location can be efficiently run as part of earthquake monitoring network operational procedures. In order to determine the most appropriate solution, i.e., whether an earthquake is most appropriately described by a single source or a double source, an Akaike information criterion (AIC) test is performed. Analyses of all earthquakes of magnitude 7.5 and greater occurring since January 2000 were performed with extended analyses of the September 29, 2009 magnitude 8.1 Samoa earthquake and the April 19, 2014 magnitude 7.5 Papua New Guinea earthquake. The AIC test is shown to be able to accurately select the most appropriate model and the selected W-phase inversion is shown to yield reliable solutions that match published analyses of the same events.
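    The automated model selection described above can be sketched with the common Gaussian least-squares form of the Akaike information criterion, AIC = n ln(RSS/n) + 2k: the double point source is retained only when its improved fit outweighs the penalty for its extra free parameters. The parameter counts and residuals below are illustrative and are not those of the operational W-phase code.

      # Illustrative AIC-based choice between single and double point source fits.
      import math

      def aic_least_squares(residuals, n_params):
          """Gaussian least-squares AIC: n * ln(RSS / n) + 2 * k."""
          n = len(residuals)
          rss = sum(r * r for r in residuals)
          return n * math.log(rss / n) + 2 * n_params

      def select_model(resid_single, resid_double, k_single=10, k_double=20):
          """Prefer the double source only if it lowers the AIC despite its extra parameters."""
          aic_1 = aic_least_squares(resid_single, k_single)
          aic_2 = aic_least_squares(resid_double, k_double)
          return ("double" if aic_2 < aic_1 else "single"), round(aic_1, 1), round(aic_2, 1)

      # Toy residuals: the two-source fit reduces the waveform misfit appreciably.
      resid_1 = [0.8, -1.1, 0.9, -0.7, 1.0, -0.9] * 20
      resid_2 = [0.5, -0.6, 0.5, -0.4, 0.6, -0.5] * 20
      print(select_model(resid_1, resid_2))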

  7. An Optimal Image-Based Method for Identification of Acoustic Emission (AE) Sources in Plate-Like Structures Using a Lead Zirconium Titanate (PZT) Sensor Array.

    PubMed

    Yan, Gang; Zhou, Li

    2018-02-21

    This paper proposes an innovative method for identifying the locations of multiple simultaneous acoustic emission (AE) events in plate-like structures from an image processing perspective. By using a linear lead zirconium titanate (PZT) sensor array to record the AE wave signals, a reverse-time frequency-wavenumber (f-k) migration is employed to produce images displaying the locations of AE sources by back-propagating the AE waves. Lamb wave theory is included in the f-k migration to account for the dispersive property of the AE waves. Since the exact occurrence time of the AE events is usually unknown when recording the AE wave signals, a heuristic artificial bee colony (ABC) algorithm combined with an optimality criterion using minimum Shannon entropy is used to find the image whose identified AE source locations and occurrence time most closely approximate the actual ones. Experimental studies on an aluminum plate with AE events simulated by PZT actuators are performed to validate the applicability and effectiveness of the proposed optimal image-based AE source identification method.

  8. An Optimal Image-Based Method for Identification of Acoustic Emission (AE) Sources in Plate-Like Structures Using a Lead Zirconium Titanate (PZT) Sensor Array

    PubMed Central

    Zhou, Li

    2018-01-01

    This paper proposes an innovative method for identifying the locations of multiple simultaneous acoustic emission (AE) events in plate-like structures from an image processing perspective. By using a linear lead zirconium titanate (PZT) sensor array to record the AE wave signals, a reverse-time frequency-wavenumber (f-k) migration is employed to produce images displaying the locations of AE sources by back-propagating the AE waves. Lamb wave theory is included in the f-k migration to account for the dispersive property of the AE waves. Since the exact occurrence time of the AE events is usually unknown when recording the AE wave signals, a heuristic artificial bee colony (ABC) algorithm combined with an optimality criterion using minimum Shannon entropy is used to find the image whose identified AE source locations and occurrence time most closely approximate the actual ones. Experimental studies on an aluminum plate with AE events simulated by PZT actuators are performed to validate the applicability and effectiveness of the proposed optimal image-based AE source identification method. PMID:29466310
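    The minimum Shannon entropy criterion used in the two records above can be illustrated with a toy example: when the back-propagated wavefield focuses at the true source position and occurrence time, the migrated image concentrates its energy in a few pixels and therefore has lower entropy than a smeared image. The images and candidate occurrence times below are purely illustrative.

      # Toy illustration of picking the occurrence time by minimum image entropy.
      import math

      def shannon_entropy(image):
          """Entropy of normalised absolute pixel values; focused images score low."""
          total = sum(abs(v) for row in image for v in row)
          h = 0.0
          for row in image:
              for v in row:
                  p = abs(v) / total
                  if p > 0.0:
                      h -= p * math.log(p)
          return h

      def best_occurrence_time(images_by_time):
          """Keep the candidate time whose migrated image has minimum entropy."""
          return min(images_by_time, key=lambda t: shannon_entropy(images_by_time[t]))

      focused = [[0.0, 0.0], [0.0, 9.0]]   # energy concentrated at one pixel
      smeared = [[2.0, 2.5], [2.2, 2.3]]   # energy spread over the image
      print(best_occurrence_time({0.10: smeared, 0.12: focused}))  # prints 0.12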

  9. Physical effects of mechanical design parameters on photon sensitivity and spatial resolution performance of a breast-dedicated PET system.

    PubMed

    Spanoudaki, V C; Lau, F W Y; Vandenbroucke, A; Levin, C S

    2010-11-01

    This study aims to address design considerations of a high resolution, high sensitivity positron emission tomography scanner dedicated to breast imaging. The methodology uses a detailed Monte Carlo model of the system structures to obtain a quantitative evaluation of several performance parameters. Special focus was given to the effect of dense mechanical structures designed to provide mechanical robustness and thermal regulation to the minuscule and temperature sensitive detectors. For the energies of interest around the photopeak (450-700 keV energy window), the simulation results predict a 6.5% reduction in the single photon detection efficiency and a 12.5% reduction in the coincidence photon detection efficiency in the case that the mechanical structures are interspersed between the detectors. However for lower energies, a substantial increase in the number of detected events (approximately 14% and 7% for singles at a 100-200 keV energy window and coincidences at a lower energy threshold of 100 keV, respectively) was observed with the presence of these structures due to backscatter. The number of photon events that involve multiple interactions in various crystal elements is also affected by the presence of the structures. For photon events involving multiple interactions among various crystal elements, the coincidence photon sensitivity is reduced by as much as 20% for a point source at the center of the field of view. There is no observable effect on the intrinsic and the reconstructed spatial resolution and spatial resolution uniformity. Mechanical structures can have a considerable effect on system sensitivity, especially for systems processing multi-interaction photon events. This effect, however, does not impact the spatial resolution. Various mechanical structure designs are currently under evaluation in order to achieve optimum trade-off between temperature stability, accurate detector positioning, and minimum influence on system performance.

  10. Physical effects of mechanical design parameters on photon sensitivity and spatial resolution performance of a breast-dedicated PET system

    PubMed Central

    Spanoudaki, V. C.; Lau, F. W. Y.; Vandenbroucke, A.; Levin, C. S.

    2010-01-01

    Purpose: This study aims to address design considerations of a high resolution, high sensitivity positron emission tomography scanner dedicated to breast imaging. Methods: The methodology uses a detailed Monte Carlo model of the system structures to obtain a quantitative evaluation of several performance parameters. Special focus was given to the effect of dense mechanical structures designed to provide mechanical robustness and thermal regulation to the minuscule and temperature sensitive detectors. Results: For the energies of interest around the photopeak (450–700 keV energy window), the simulation results predict a 6.5% reduction in the single photon detection efficiency and a 12.5% reduction in the coincidence photon detection efficiency in the case that the mechanical structures are interspersed between the detectors. However for lower energies, a substantial increase in the number of detected events (approximately 14% and 7% for singles at a 100–200 keV energy window and coincidences at a lower energy threshold of 100 keV, respectively) was observed with the presence of these structures due to backscatter. The number of photon events that involve multiple interactions in various crystal elements is also affected by the presence of the structures. For photon events involving multiple interactions among various crystal elements, the coincidence photon sensitivity is reduced by as much as 20% for a point source at the center of the field of view. There is no observable effect on the intrinsic and the reconstructed spatial resolution and spatial resolution uniformity. Conclusions: Mechanical structures can have a considerable effect on system sensitivity, especially for systems processing multi-interaction photon events. This effect, however, does not impact the spatial resolution. Various mechanical structure designs are currently under evaluation in order to achieve optimum trade-off between temperature stability, accurate detector positioning, and minimum influence on system performance. PMID:21158296

  11. Stress in cynomolgus monkeys (Macaca fascicularis) subjected to long-distance transport and simulated transport housing conditions.

    PubMed

    Fernström, A L; Sutian, W; Royo, F; Westlund, K; Nilsson, T; Carlsson, H-E; Paramastri, Y; Pamungkas, J; Sajuthi, D; Schapiro, S J; Hau, J

    2008-11-01

    The stress associated with transportation of non-human primates used in scientific research is an important but almost unexplored part of laboratory animal husbandry. The procedures and routines concerning transport are important not only for the animals' physical health but also for their mental health. The transport stress in cynomolgus monkeys (Macaca fascicularis) was studied in two experiments. In Experiment 1, 25 adult female cynomolgus monkeys were divided into five groups of five animals each that received different diets during the transport phase of the experiment. All animals were transported in conventional single animal transport cages with no visual or tactile contact with conspecifics. The animals were transported by lorry for 24 h at ambient temperatures ranging between 20 degrees C and 35 degrees C. Urine produced before, during and after transport was collected and analysed for cortisol by enzyme-linked immunosorbent assay (ELISA). All monkeys exhibited a significant increase in cortisol excretion per time unit during the transport and on the first day following transport. Although anecdotal reports had suggested that the diet provided during transport, including fruit and/or a tranquiliser, was likely to influence stress responses, this was not corroborated by the present study. In Experiment 2, behavioural data were collected from 18 cynomolgus macaques before and after transfer from group cages to either single or pair housing, and also before and after a simulated transport, in which the animals were housed in transport cages. The singly housed monkeys were confined to single transport cages and the pair housed monkeys were kept in their pairs in double size cages. Both pair housed and singly housed monkeys showed clear behavioural signs of stress soon after their transfer out of their group cages. However, stress-associated behaviours were more prevalent in singly housed animals than in pair housed animals, and these behaviours persisted for a longer time after the simulated transport housing event than in the pair housed monkeys. Our data confirm that the transport of cynomolgus monkeys is stressful and suggest that it would be beneficial for cynomolgus monkeys to be housed and transported in compatible pairs from the time they leave their group cages at the source country breeding facility until they arrive at their final laboratory destination in the country of use.

  12. An Overview of Grain Growth Theories for Pure Single Phase Systems,

    DTIC Science & Technology

    1986-10-01

    the fundamental causes for these distributions. This Blanc and Mocellin (1979) and Carnal and Mocellin (1981) set out to do. 7.1 Monte-Carlo Simulations...termed event B) (in 2-D) of 3-sided grains. (2) Neighbour-switching (termed event C). Blanc and Mocellin (1979) dealt with 2-D sections through...Kurtz and Carpay (1980a). 7.2 Analytical Method to Obtain fn Carnal and Mocellin (1981) obtained the distribution of grain coordination numbers in

  13. Bolide Airbursts as a Seismic Source for the 2018 Mars InSight Mission

    NASA Astrophysics Data System (ADS)

    Stevanović, J.; Teanby, N. A.; Wookey, J.; Selby, N.; Daubar, I. J.; Vaubaillon, J.; Garcia, R.

    2017-10-01

    In 2018, NASA will launch InSight, a single-station suite of geophysical instruments, designed to characterise the martian interior. We investigate the seismo-acoustic signal generated by a bolide entering the martian atmosphere and exploding in a terminal airburst, and assess this phenomenon as a potential observable for the SEIS seismic payload. Terrestrial analogue data from four recent events are used to identify diagnostic airburst characteristics in both the time and frequency domain. In order to estimate a potential number of detectable events for InSight, we first model the impactor source population from observations made on the Earth, scaled for planetary radius, entry velocity and source density. We go on to calculate a range of potential airbursts from the larger incident impactor population. We estimate there to be ~1000 events of this nature per year on Mars. To then derive a detectable number of airbursts for InSight, we scale this number according to atmospheric attenuation, air-to-ground coupling inefficiencies and by instrument capability for SEIS. We predict between 10 and 200 detectable events per year for InSight.

  14. Modeling Large Scale Circuits Using Massively Parallel Discrete-Event Simulation

    DTIC Science & Technology

    2013-06-01

    exascale levels of performance, the smallest elements of a single processor can greatly affect the entire computer system (e.g. its power consumption...Warp Speed 10.0. 2.0 INTRODUCTION As supercomputer systems approach exascale, the core count will exceed 1024 and number of transistors used in

  15. Planar location of the simulative acoustic source based on fiber optic sensor array

    NASA Astrophysics Data System (ADS)

    Liang, Yi-Jun; Liu, Jun-feng; Zhang, Qiao-ping; Mu, Lin-lin

    2010-06-01

    A fiber optic sensor array composed of four Sagnac fiber optic sensors is proposed to detect and locate a simulative acoustic emission (AE) source. The sensing loops of the Sagnac interferometer (SI) are regarded as point sensors owing to their small size. Based on the derived output light intensity expression of the SI, the optimum working condition of the Sagnac fiber optic sensor is discussed through MATLAB simulation. Four sensors are placed on a steel plate to form the sensor array, and the location algorithms are described. When an impact is generated by an artificial AE source at any position on the plate, the AE signal is detected by the four sensors at different times. With the help of a single-chip microcomputer (SCM), which calculates the position of the AE source and displays it on an LED, an intelligent detection and location system has been implemented.
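    The arrival-time-difference idea underlying the location algorithms mentioned above can be sketched as a small time-difference-of-arrival (TDOA) grid search: for a trial source position, predict the arrival-time differences at the four sensors and keep the position that best matches the observed differences. The sensor layout, plate wave speed and grid spacing below are illustrative, and a real system would use measured rather than synthesized arrival times.

      # Minimal planar TDOA grid-search location with four point sensors (illustrative values).
      import math

      SENSORS = [(0.0, 0.0), (0.5, 0.0), (0.5, 0.5), (0.0, 0.5)]  # sensor positions [m]
      WAVE_SPEED = 3000.0                                          # assumed plate wave speed [m/s]

      def predicted_tdoa(src):
          """Arrival-time differences relative to the first sensor for a trial source position."""
          times = [math.hypot(src[0] - x, src[1] - y) / WAVE_SPEED for x, y in SENSORS]
          return [t - times[0] for t in times]

      def locate(observed_tdoa, step=0.005, extent=0.5):
          """Exhaustive grid search minimising the squared TDOA misfit."""
          best, best_err = None, float("inf")
          n = int(extent / step) + 1
          for i in range(n):
              for j in range(n):
                  trial = (i * step, j * step)
                  err = sum((p - o) ** 2 for p, o in zip(predicted_tdoa(trial), observed_tdoa))
                  if err < best_err:
                      best, best_err = trial, err
          return best

      true_source = (0.31, 0.12)
      print(locate(predicted_tdoa(true_source)))  # recovers approximately (0.31, 0.12)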

  16. Heralded quantum repeater based on the scattering of photons off single emitters using parametric down-conversion source.

    PubMed

    Song, Guo-Zhu; Wu, Fang-Zhou; Zhang, Mei; Yang, Guo-Jian

    2016-06-28

    Quantum repeater is the key element in quantum communication and quantum information processing. Here, we investigate the possibility of achieving a heralded quantum repeater based on the scattering of photons off single emitters in one-dimensional waveguides. We design the compact quantum circuits for nonlocal entanglement generation, entanglement swapping, and entanglement purification, and discuss the feasibility of our protocols with current experimental technology. In our scheme, we use a parametric down-conversion source instead of ideal single-photon sources to realize the heralded quantum repeater. Moreover, our protocols can turn faulty events into the detection of photon polarization, and the fidelity can reach 100% in principle. Our scheme is attractive and scalable, since it can be realized with artificial solid-state quantum systems. With developed experimental technique on controlling emitter-waveguide systems, the repeater may be very useful in long-distance quantum communication.

  17. Heralded quantum repeater based on the scattering of photons off single emitters using parametric down-conversion source

    PubMed Central

    Song, Guo-Zhu; Wu, Fang-Zhou; Zhang, Mei; Yang, Guo-Jian

    2016-01-01

    Quantum repeater is the key element in quantum communication and quantum information processing. Here, we investigate the possibility of achieving a heralded quantum repeater based on the scattering of photons off single emitters in one-dimensional waveguides. We design the compact quantum circuits for nonlocal entanglement generation, entanglement swapping, and entanglement purification, and discuss the feasibility of our protocols with current experimental technology. In our scheme, we use a parametric down-conversion source instead of ideal single-photon sources to realize the heralded quantum repeater. Moreover, our protocols can turn faulty events into the detection of photon polarization, and the fidelity can reach 100% in principle. Our scheme is attractive and scalable, since it can be realized with artificial solid-state quantum systems. With developed experimental technique on controlling emitter-waveguide systems, the repeater may be very useful in long-distance quantum communication. PMID:27350159

  18. United States Marine Corps Motor Transport Mechanic-to-Equipment Ratio

    DTIC Science & Technology

    time motor transport equipment remains in maintenance at the organizational command level. This thesis uses a discrete event simulation model of the...applied to a single experiment that allows for assessment of risk of not achieving the objective. Inter-arrival time, processing time, work schedule

  19. Differentiability of simulated MEG hippocampal, medial temporal and neocortical temporal epileptic spike activity.

    PubMed

    Stephen, Julia M; Ranken, Doug M; Aine, Cheryl J; Weisend, Michael P; Shih, Jerry J

    2005-12-01

    Previous studies have shown that magnetoencephalography (MEG) can measure hippocampal activity, despite the cylindrical shape and deep location in the brain. The current study extended this work by examining the ability to differentiate the hippocampal subfields, parahippocampal cortex, and neocortical temporal sources using simulated interictal epileptic activity. A model of the hippocampus was generated on the MRIs of five subjects. CA1, CA3, and dentate gyrus of the hippocampus were activated as well as entorhinal cortex, presubiculum, and neocortical temporal cortex. In addition, pairs of sources were activated sequentially to emulate various hypotheses of mesial temporal lobe seizure generation. The simulated MEG activity was added to real background brain activity from the five subjects and modeled using a multidipole spatiotemporal modeling technique. The waveforms and source locations/orientations for hippocampal and parahippocampal sources were differentiable from neocortical temporal sources. In addition, hippocampal and parahippocampal sources were differentiated to varying degrees depending on source. The sequential activation of hippocampal and parahippocampal sources was adequately modeled by a single source; however, these sources were not resolvable when they overlapped in time. These results suggest that MEG has the sensitivity to distinguish parahippocampal and hippocampal spike generators in mesial temporal lobe epilepsy.

  20. Source of 1629 Banda Mega-Thrust Earthquake and Tsunami: Implications for Tsunami Hazard Evaluation in Eastern Indonesia

    NASA Astrophysics Data System (ADS)

    Major, J. R.; Liu, Z.; Harris, R. A.; Fisher, T. L.

    2011-12-01

    Using Dutch records of geophysical events in Indonesia over the past 400 years, and tsunami modeling, we identify tsunami sources that have caused severe devastation in the past and are likely to reoccur in the near future. The earthquake history of western Indonesia has received much attention since the 2004 Sumatra earthquakes and subsequent events. However, strain rates along a variety of plate boundary segments are just as high in eastern Indonesia, where the earthquake history has not been investigated. Due to the rapid population growth in this region, it is essential and urgent to evaluate its earthquake and tsunami hazards. Arthur Wichmann's 'Earthquakes of the Indian Archipelago' shows that there were 30 significant earthquakes and 29 tsunamis between 1629 and 1877. One of the largest and best documented is the great earthquake and tsunami affecting the Banda Islands on 1 August 1629. It caused severe damage from a 15 m tsunami that arrived at the Banda Islands about a half hour after the earthquake. The earthquake was also recorded 230 km away in Ambon, but no tsunami is mentioned. This event was followed by at least 9 years of aftershocks. The combination of these observations indicates that the earthquake was most likely a mega-thrust event. We use a numerical simulation of the tsunami to locate the potential sources of the 1629 mega-thrust event and evaluate the tsunami hazard in eastern Indonesia. The numerical simulation was tested to establish the tsunami run-up amplification factor for this region through tsunami simulations of the 1992 Flores Island (Hidayat et al., 1995) and 2006 Java (Kato et al., 2007) earthquake events. The results yield tsunami run-up amplification factors of 1.5 and 3, respectively. However, the Java earthquake is a unique case of slow rupture that was hardly felt. The fault parameters of recent earthquakes in the Banda region are used for the models. The modeling narrows the possibilities of mega-thrust events the size of the one in 1629 to the Seram and Timor Troughs. For the Seram Trough source, a Mw 8.8 event produces run-up heights in the Banda Islands of 15.5 m with an arrival time of 17 minutes. For a Timor Trough earthquake near the Tanimbar Islands, a Mw 9.2 event is needed to produce a 15 m run-up height with an arrival time of 25 minutes. The main problem with the Timor Trough source is that it predicts run-up heights in Ambon of 10 m, which would likely have been recorded. Therefore, we conclude that the most likely source of the 1629 mega-thrust earthquake is the Seram Trough. No large earthquakes have been reported along the Seram Trough for over 200 years, although high rates of strain are measured across it. This study suggests that earthquakes generated along this fault zone could be extremely devastating to eastern Indonesia. We strive to raise awareness among local governments not to underestimate the natural hazard of this region, based on lessons learned from the 2004 Sumatra and 2011 Tohoku tsunamigenic mega-thrust earthquakes.

  1. On the formation of runaway stars BN and x in the Orion Nebula Cluster

    NASA Astrophysics Data System (ADS)

    Farias, J. P.; Tan, J. C.

    2018-05-01

    We explore scenarios for the dynamical ejection of stars BN and x from source I in the Kleinmann-Low nebula of the Orion Nebula Cluster (ONC), which is important because it is the closest region of massive star formation. This ejection would cause source I to become a close binary or a merger product of two stars. We thus consider binary-binary encounters as the mechanism to produce this event. By running a large suite of N-body simulations, we find that it is nearly impossible to match the observations when using the commonly adopted masses for the participants, especially a source I mass of 7 M⊙. The only way to recreate the event is if source I is more massive, that is, 20 M⊙. However, even in this case, the likelihood of reproducing the observed system is low. We discuss the implications of these results for understanding this important star-forming region.

  2. The OGLE view of microlensing towards the Magellanic Clouds - II. OGLE-II Small Magellanic Cloud data

    NASA Astrophysics Data System (ADS)

    Wyrzykowski, Ł.; Kozłowski, S.; Skowron, J.; Belokurov, V.; Smith, M. C.; Udalski, A.; Szymański, M. K.; Kubiak, M.; Pietrzyński, G.; Soszyński, I.; Szewczyk, O.

    2010-09-01

    The primary goal of this paper is to provide evidence that can prove true or false the hypothesis that dark matter in the Galactic halo can clump into stellar-mass compact objects. If such objects exist, they would act as lenses to external sources in the Magellanic Clouds, giving rise to an observable effect of microlensing. We present the results of our search for such events, based on data from the second phase of the OGLE survey (1996-2000) towards the Small Magellanic Cloud (SMC). The data set we used comprises 2.1 million monitored sources distributed over an area of 2.4 deg^2. We found only one microlensing event candidate; however, its poor-quality light curve limited our discussion of the exact distance to the lensing object. Given a single event, taking blending (crowding of stars) into account for the detection-efficiency simulations and deriving the Hubble Space Telescope (HST)-corrected number of monitored stars, the microlensing optical depth is τ = (1.55 +/- 1.55) × 10^-7. This result is consistent with the expected SMC self-lensing signal, with no need to introduce dark matter microlenses. Rejecting the unconvincing event leads to an upper limit on the fraction of dark matter in the form of massive compact halo objects (MACHOs) of f < 20 per cent for deflector masses around 0.4 M⊙ and f < 11 per cent for masses between 0.003 and 0.2 M⊙ (95 per cent confidence limit). Our result indicates that the Milky Way's dark matter is unlikely to be clumpy and to form compact objects in the subsolar-mass range. Based on observations obtained with the 1.3-m Warsaw Telescope at the Las Campanas Observatory of the Carnegie Institution of Washington.

  3. Investigation of 2-stage meta-analysis methods for joint longitudinal and time-to-event data through simulation and real data application.

    PubMed

    Sudell, Maria; Tudur Smith, Catrin; Gueyffier, François; Kolamunnage-Dona, Ruwanthi

    2018-04-15

    Joint modelling of longitudinal and time-to-event data is often preferred over separate longitudinal or time-to-event analyses, as it can account for study dropout, error in longitudinally measured covariates, and correlation between longitudinal and time-to-event outcomes. The joint modelling literature focuses mainly on the analysis of single studies, with no methods currently available for the meta-analysis of joint model estimates from multiple studies. We propose a 2-stage method for meta-analysis of joint model estimates. The method is applied to the INDANA dataset to combine joint model estimates of systolic blood pressure with time to death, time to myocardial infarction, and time to stroke. Results are compared to meta-analyses of separate longitudinal or time-to-event models. A simulation study is conducted to contrast separate versus joint analyses over a range of scenarios. Using the real dataset, similar results were obtained from the separate and joint analyses. However, the simulation study indicated a benefit of using joint rather than separate methods in a meta-analytic setting where association exists between the longitudinal and time-to-event outcomes. Where evidence of such association exists, results from joint models rather than standalone analyses should be pooled in 2-stage meta-analyses. © 2017 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
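
    As an illustration of the second stage, the sketch below pools per-study association estimates (for example, a joint model's association parameter between the longitudinal marker and the hazard) by inverse-variance weighting, with an optional DerSimonian-Laird random-effects adjustment. The function name, the random-effects choice and the example numbers are assumptions for illustration, not the authors' implementation.

    ```python
    import numpy as np

    def pool_estimates(betas, ses, random_effects=True):
        """Inverse-variance pooling of per-study estimates (stage 2).

        betas : per-study joint-model association estimates (stage 1 output)
        ses   : their standard errors
        """
        betas, ses = np.asarray(betas, float), np.asarray(ses, float)
        w = 1.0 / ses**2                       # fixed-effect weights
        beta_fe = np.sum(w * betas) / np.sum(w)
        if random_effects:
            # DerSimonian-Laird estimate of the between-study variance
            q = np.sum(w * (betas - beta_fe) ** 2)
            c = np.sum(w) - np.sum(w**2) / np.sum(w)
            tau2 = max(0.0, (q - (len(betas) - 1)) / c)
            w = 1.0 / (ses**2 + tau2)          # random-effects weights
        beta = np.sum(w * betas) / np.sum(w)
        se = np.sqrt(1.0 / np.sum(w))
        return beta, se

    # Example with three hypothetical studies' association estimates
    print(pool_estimates([0.21, 0.35, 0.28], [0.08, 0.10, 0.06]))
    ```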

  4. Mathematical Constraints on the Use of Transmission Line Models for Simulating Initial Breakdown Pulses in Lightning Discharges

    NASA Astrophysics Data System (ADS)

    da Silva, C. L.; Merrill, R. A.; Pasko, V. P.

    2015-12-01

    A significant portion of in-cloud lightning development is observed as a series of initial breakdown pulses (IBPs) that are characterized by an abrupt change in the electric field at a remote sensor. Recent experimental and theoretical studies have attributed this process to the stepwise elongation of an initial lightning leader inside the thunderstorm [da Silva and Pasko, JGR, 120, 4989-5009, 2015, and references therein]. Attempts to visually observe these events are hampered by the fact that clouds are opaque to optical radiation. For this reason, throughout the last decade a number of researchers have used the so-called transmission line models (also commonly referred to as engineering models), widely employed for return stroke simulations, to simulate the waveshapes of IBPs and also of narrow bipolar events. The transmission line (TL) model approach is to prescribe the source current dynamics in a certain manner to match the measured E-field change waveform, with the purpose of retrieving key information about the source, such as its height, peak current, size, speed of charge motion, etc. Although the TL matching method is not necessarily physics-driven, the estimated source characteristics can give insights into the dominant length- and time-scales, as well as into the energetics of the source. This contributes to a better understanding of the environment where the onset and early stages of lightning development take place. In the present work, we use numerical modeling to constrain the number of source parameters that can be confidently inferred from observed far-field IBP waveforms. We compare different modified TL models (i.e., with different attenuation behaviors) to show that they tend to produce similar waveforms in conditions where the channel is short. We also demonstrate that it is impossible to simultaneously retrieve the speed of source current propagation and the channel length from an observed IBP waveform, in contrast to what has previously been done in the literature. Finally, we demonstrate that the simulated field-to-current conversion factor in IBP sources can vary by more than one order of magnitude, making peak current estimates for intracloud lightning processes a challenging task.

  5. Characterizing directional variations in long-period ground motion amplifications in the Kanto Basin, Japan

    NASA Astrophysics Data System (ADS)

    Mukai, Y.; Furumura, T.; Maeda, T.

    2017-12-01

    In the Kanto Basin (including Tokyo, Japan), long-period (T = 3-10 s) ground motions are strongly developed when large earthquakes occur nearby. The amplitude of the long-period ground motion in the basin varies strongly among earthquakes; it is very large for earthquakes in Niigata (northwest of Kanto), but several times weaker for earthquakes in Tohoku (north of Kanto). In this study, we examined the cause of such azimuth-dependent amplitude variation for the 2004 Niigata Chuetsu (M6.8) and the 2011 Fukushima Hamadori (M7.0) earthquakes based on numerical simulations of seismic wave propagation by the finite-difference method. We first examined the non-isotropic source-radiation effect of these events. By performing numerical simulations for different strike angles of the source faults, significant variation in the amplitude of the long-period ground motions was observed in Tokyo for both events. Among the tested strike angles, the source of the 2004 event (strike = 212 deg.) produced the largest long-period ground motion due to strong radiation of surface waves towards the Kanto Basin, while the 2011 event (strike = 132 deg.) produced the least. The minimum-to-maximum ratio of the amplitudes with respect to strike angle is about 2 and 1.3, respectively. These investigations suggest that the source radiation effect contributes considerably to the variations of the long-period ground motions. We then examined the effect of the 3D structure of the Kanto Basin on the generation of the long-period ground motion. For the 2004 event, we found that the long-period signal first arrives at central Tokyo from the western edge of the Kanto Basin. Later signals containing both Rayleigh and Love waves were then amplified dramatically due to the localized low-velocity structure in the northwestern part of the basin. On the other hand, in the case of the 2011 event, the seismic waves propagating towards the basin were dissipated significantly as they traveled over the ridge structure of the basement in the northern part of the basin, where the seismic wave speed is faster than in the surroundings. Therefore, the large variation of the long-period ground motion among earthquakes arises from the combined effects of source radiation and propagation through the 3D heterogeneous structure of the Kanto Basin.

  6. Computing in the presence of soft bit errors. [caused by single event upset on spacecraft

    NASA Technical Reports Server (NTRS)

    Rasmussen, R. D.

    1984-01-01

    It is shown that single-event upsets (SEUs) due to cosmic rays are a significant source of single-bit errors in spacecraft computers. The physical mechanism of SEU, electron-hole generation by means of linear energy transfer (LET), is discussed with reference to the results of a study of the environmental effects on the computer systems of the Galileo spacecraft. Techniques for making software more tolerant of cosmic ray effects are considered, including: reducing the number of registers used by the software; continuity testing of variables; redundant execution of major procedures for error detection; and encoding state variables to detect single-bit changes. Attention is also given to design modifications which may reduce the cosmic ray exposure of on-board hardware. These modifications include shielding components operating in LEO, removing low-power Schottky parts, and the use of CMOS diodes. The SEU parameters of different electronic components are listed in a table.
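
    Two of the listed software techniques can be sketched in a few lines. The Python below is purely illustrative (flight software of that era was not written in Python) and all names are hypothetical: redundant execution with majority voting, and parity-encoded state variables that make a single-bit change detectable.

    ```python
    def vote(a, b, c):
        """Majority vote over three redundant results."""
        return a if a == b or a == c else b

    def run_redundant(proc, *args):
        # Execute the procedure three times and vote on the result
        # so that a single corrupted execution is outvoted.
        return vote(proc(*args), proc(*args), proc(*args))

    def encode_state(value):
        """Append an even-parity bit so a single bit flip is detectable."""
        parity = bin(value).count("1") & 1
        return (value << 1) | parity

    def check_state(word):
        """Return (value, ok); ok is False if the stored parity no longer matches."""
        value, parity = word >> 1, word & 1
        ok = (bin(value).count("1") & 1) == parity
        return value, ok

    state = encode_state(0b1011)
    corrupted = state ^ (1 << 3)          # simulate a single-event upset
    print(check_state(state), check_state(corrupted))
    ```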

  7. Multi-point laser ignition device

    DOEpatents

    McIntyre, Dustin L.; Woodruff, Steven D.

    2017-01-17

    A multi-point laser device comprising a plurality of optical pumping sources. Each optical pumping source is configured to create pumping excitation energy along a corresponding optical path directed through a high-reflectivity mirror and into substantially different locations within the laser media thereby producing atomic optical emissions at substantially different locations within the laser media and directed along a corresponding optical path of the optical pumping source. An output coupler and one or more output lenses are configured to produce a plurality of lasing events at substantially different times, locations or a combination thereof from the multiple atomic optical emissions produced at substantially different locations within the laser media. The laser media is a single continuous media, preferably grown on a single substrate.

  8. Statistical Analysis of Tsunami Variability

    NASA Astrophysics Data System (ADS)

    Zolezzi, Francesca; Del Giudice, Tania; Traverso, Chiara; Valfrè, Giulio; Poggi, Pamela; Parker, Eric J.

    2010-05-01

    The purpose of this paper was to investigate the statistical variability of seismically generated tsunami impact. The specific goal of the work was to evaluate the variability in tsunami wave run-up due to uncertainty in fault rupture parameters (source effects) and to the effects of local bathymetry at an individual location (site effects). This knowledge is critical to the development of methodologies for probabilistic tsunami hazard assessment. Two types of variability were considered: inter-event and intra-event. Generally, inter-event variability refers to the differences in tsunami run-up at a given location for a number of different earthquake events. The focus of the current study was to evaluate the variability of tsunami run-up at a given point for a given magnitude earthquake. In this case, the variability is expected to arise from lack of knowledge regarding the specific details of the fault rupture "source" parameters. As sufficient field observations are not available to resolve this question, numerical modelling was used to generate run-up data. A scenario magnitude 8 earthquake in the Hellenic Arc was modelled; this is similar to the event thought to have caused the infamous 1303 tsunami. The tsunami wave run-up was computed at 4020 locations along the Egyptian coast between longitudes 28.7° E and 33.8° E. Specific source parameters (e.g. fault rupture length and displacement) were varied, and the effects on wave height were determined. A Monte Carlo approach considering the statistical distribution of the underlying parameters was used to evaluate the variability in wave height at locations along the coast. The results were evaluated in terms of the coefficient of variation of the simulated wave run-up (standard deviation divided by mean value) for each location. The coefficient of variation along the coast was between 0.14 and 3.11, with an average value of 0.67. The variation was higher in areas of irregular coastline. This level of variability is similar to that seen in ground-motion attenuation correlations used for seismic hazard assessment. The second issue was intra-event variability, which refers to the differences in tsunami wave run-up along a section of coast during a single event. Intra-event variability was investigated directly from field observations. The tsunami events used in the statistical evaluation were selected on the basis of the completeness and reliability of the available data. Tsunamis considered for the analysis included the recent and well-surveyed tsunami of Boxing Day 2004 (Great Indian Ocean Tsunami), Java 2006, Okushiri 1993, Kocaeli 1999, Messina 1908, and a case study of several historic events in Hawaii. Basic statistical analysis was performed on the field observations from these tsunamis. For events with very wide survey regions, the run-up heights were grouped in order to maintain a homogeneous distance from the source. Where more than one survey was available for a given event, the original datasets were maintained separately to avoid combining non-homogeneous data. The observed run-up measurements were used to evaluate the minimum, maximum, average, standard deviation and coefficient of variation for each data set. The minimum coefficient of variation was 0.12, measured for the 2004 Boxing Day tsunami at Nias Island (7 data points), while the maximum was 0.98 for the Okushiri 1993 event (93 data points). The average coefficient of variation is of the order of 0.45.
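
    A schematic version of the Monte Carlo procedure described above: sample fault-rupture parameters from assumed distributions, evaluate run-up at each site with a forward model, and report the coefficient of variation per site. The `simulate_runup` function and the parameter distributions are stand-ins for the actual tsunami propagation model and source characterization used in the study.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def simulate_runup(length_km, slip_m, site):
        # Placeholder forward model; the real study used numerical tsunami
        # propagation from a Hellenic Arc scenario source to the Egyptian coast.
        return 0.002 * length_km * slip_m / (1.0 + 0.1 * site)

    n_sims, sites = 1000, np.arange(10)
    runups = np.empty((n_sims, sites.size))
    for i in range(n_sims):
        length = rng.normal(150.0, 30.0)        # fault rupture length (km), assumed
        slip = rng.lognormal(np.log(5.0), 0.4)  # average slip (m), assumed
        runups[i] = [simulate_runup(length, slip, s) for s in sites]

    cov = runups.std(axis=0) / runups.mean(axis=0)   # coefficient of variation per site
    print(cov.round(2))
    ```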

  9. Cylindrical gate all around Schottky barrier MOSFET with insulated shallow extensions at source/drain for removal of ambipolarity: a novel approach

    NASA Astrophysics Data System (ADS)

    Kumar, Manoj; Pratap, Yogesh; Haldar, Subhasis; Gupta, Mridula; Gupta, R. S.

    2017-12-01

    In this paper a TCAD-based simulation of a novel insulated shallow extension (ISE) cylindrical gate all around (CGAA) Schottky barrier (SB) MOSFET is reported. The structure eliminates the detrimental ambipolar behavior (bias-dependent OFF-state leakage current) of the conventional SB-CGAA MOSFET by blocking metal-induced gap states as well as unwanted charge sharing between the source/channel and drain/channel regions. This novel structure offers a low barrier height at the source and a high ON-state current. The I_ON/I_OFF ratio of the ISE-CGAA-SB MOSFET increases by a factor of 1177, and the device offers a steeper subthreshold slope (~60 mV/decade). However, a small reduction in peak cut-off frequency is observed; to further improve the cut-off frequency, a dual-metal-gate architecture has been employed, and a comparative assessment of single metal gate, dual metal gate, single metal gate with ISE, and dual metal gate with ISE is presented. The improved performance of the Schottky barrier CGAA MOSFET through the incorporation of ISE makes it an attractive candidate for CMOS digital circuit design. The numerical simulation is performed using the ATLAS-3D device simulator.

  10. Single-Nanoparticle Photoelectrochemistry at a Nanoparticulate TiO2 -Filmed Ultramicroelectrode.

    PubMed

    Peng, Yue-Yi; Ma, Hui; Ma, Wei; Long, Yi-Tao; Tian, He

    2018-03-26

    An ultrasensitive photoelectrochemical method for achieving real-time detection of single-nanoparticle collision events is presented. Using a micrometer-thick nanoparticulate TiO2-filmed Au ultramicroelectrode (TiO2@Au UME), a sub-millisecond photocurrent transient was observed for an individual N719-tagged TiO2 (N719@TiO2) nanoparticle, arising from the instantaneous collision process. Owing to a trap-limited electron diffusion process as the rate-limiting step, a random three-dimensional diffusion model was developed to simulate electron transport dynamics in the TiO2 film. The combination of theoretical simulation and high-resolution photocurrent measurement allows electron-transfer information for a single N719@TiO2 nanoparticle to be quantified with single-molecule accuracy, and the electron diffusivity and electron-collection efficiency of the TiO2@Au UME to be estimated. This method provides a test bed for studies of photoinduced electron transfer at the single-nanoparticle level. © 2018 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  11. Improved phase arrival estimate and location for local earthquakes in South Korea

    NASA Astrophysics Data System (ADS)

    Morton, E. A.; Rowe, C. A.; Begnaud, M. L.

    2012-12-01

    The Korean Institute of Geoscience and Mineral Resources (KIGAM) and the Korean Meteorological Agency (KMA) regularly report local (distance < ~1200 km) seismicity recorded with their networks; we obtain preliminary event location estimates as well as waveform data, but no phase arrivals are reported, so the data are not immediately useful for earthquake location. Our goal is to identify seismic events that are sufficiently well located to provide accurate seismic travel-time information for events within the KIGAM and KMA networks that are also recorded by some regional stations. Toward that end, we are using a combination of manual phase identification and arrival-time picking, with waveform cross-correlation, to cluster events that have occurred in close proximity to one another, which allows improved phase identification by comparing the highly correlating waveforms. We cross-correlate the known events with one another at 5 seismic stations and cluster events that correlate above a correlation coefficient threshold of 0.7, which reveals only a few clusters containing a few events each. The small number of repeating events suggests that the online catalogs have had mining and quarry blasts removed before publication, as these can contribute significantly to repeating seismic sources in relatively aseismic regions such as South Korea. The dispersed source locations in our catalog, however, are ideal for seismic velocity modeling by providing superior sampling through the dense seismic station arrangement, which produces favorable event-to-station ray path coverage. Following careful manual phase picking on 104 events chosen to provide adequate ray coverage, we re-locate the events to obtain improved source coordinates. The re-located events are used with Thurber's Simul2000 pseudo-bending local tomography code to estimate the crustal structure of the Korean Peninsula, which is an important contribution to ongoing calibration for events of interest in the region.
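
    The cross-correlation clustering step can be sketched as follows: compute the peak normalized cross-correlation for each pair of event waveforms at a common station and group events whose coefficient exceeds 0.7. The greedy single-linkage grouping used here is an assumption for illustration, not necessarily the authors' exact clustering rule.

    ```python
    import numpy as np

    def max_norm_xcorr(a, b):
        """Peak of the normalized cross-correlation between two equal-length waveforms."""
        a = (a - a.mean()) / (a.std() * len(a))
        b = (b - b.mean()) / b.std()
        return np.correlate(a, b, mode="full").max()

    def cluster(waveforms, threshold=0.7):
        """Greedy single-linkage grouping of events by waveform similarity."""
        clusters = []
        for i, w in enumerate(waveforms):
            for c in clusters:
                # join an existing cluster if w correlates highly with any member
                if any(max_norm_xcorr(w, waveforms[j]) >= threshold for j in c):
                    c.append(i)
                    break
            else:
                clusters.append([i])
        return clusters
    ```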

  12. Approach to identifying pollutant source and matching flow field

    NASA Astrophysics Data System (ADS)

    Liping, Pang; Yu, Zhang; Hongquan, Qu; Tao, Hu; Wei, Wang

    2013-07-01

    Accidental pollution events often threaten people's health and lives, and it is necessary to identify a pollutant source rapidly so that prompt actions can be taken to prevent the spread of pollution. This identification is, however, a difficult inverse problem. This paper carries out some studies on this issue. An approach using single-sensor information with noise was developed to identify a sudden, continuous-emission trace pollutant source in a steady velocity field. The approach first compares the characteristic distance of the measured concentration sequence to multiple hypothetical measured concentration sequences at the sensor position, which are obtained from multiple hypotheses over three source parameters. Source identification is then achieved by globally searching for the optimal values, with the maximum location probability as the objective function. Because this global search is computationally expensive, a local fine-mesh source search based on prior coarse-mesh location probabilities is further used to improve the efficiency of identification. The studies show that the flow field has a very important influence on source identification. Therefore, we also discuss the impact on identification of non-matching flow fields with estimation deviations. Based on this analysis, a method for matching an accurate flow field is presented to improve the accuracy of identification. In order to verify the practical application of the above methods, an experimental system simulating a sudden pollution process in a steady flow field was set up, and experiments were conducted with a known diffusion coefficient. The studies showed that the three parameters (position, emission strength and initial emission time) of the pollutant source in the experiment can be estimated by using the flow-field matching and source identification methods together.
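
    A schematic version of the coarse-grid stage of such a search: hypothesize the three source parameters (position, emission strength, initial emission time), generate the corresponding predicted concentration sequence at the sensor with a forward model, and keep the hypothesis closest to the measurement. The forward model and the plain Euclidean distance below are illustrative stand-ins for the paper's characteristic distance and location-probability formulation.

    ```python
    import numpy as np

    def predicted_sequence(pos, strength, t0, times):
        # Stand-in forward model for the sensor response to a continuous source;
        # the paper uses transport in the measured (or matched) flow field.
        lag = np.clip(times - t0, 0.0, None)
        return strength * (1.0 - np.exp(-lag / (1.0 + pos)))

    def locate(measured, times, positions, strengths, start_times):
        """Coarse grid search over the three source parameters."""
        best, best_dist = None, np.inf
        for p in positions:
            for q in strengths:
                for t0 in start_times:
                    d = np.linalg.norm(measured - predicted_sequence(p, q, t0, times))
                    if d < best_dist:
                        best, best_dist = (p, q, t0), d
        # a finer local grid around `best` would then refine the estimate
        return best
    ```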

  13. Search for electroweak single top quark production with cdf in proton - anti-proton collisions at √s = 1.96-TeV

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Walter, Thorsten

    2005-06-17

    In this thesis two searches for electroweak single top quark production with the CDF experiment are presented: a cut-based search and an iterated discriminant analysis. Both searches find no significant evidence for electroweak single top production using a data set corresponding to an integrated luminosity of 162 pb⁻¹ collected with CDF. Therefore, limits on s- and t-channel single top production are determined using a likelihood technique. For the cut-based search, a likelihood function based on lepton charge times pseudorapidity of the non-bottom jet was used if exactly one bottom jet was identified in the event. In the case of two identified bottom jets, a likelihood function based on the total number of observed events was used. The systematic uncertainties have been treated in a Bayesian approach; all sources of systematic uncertainty have been integrated out. An improved signal modeling using the MadEvent Monte Carlo program matched to NLO calculations has been used. The obtained limits for the s- and t-channel single top production cross sections are 13.6 pb and 10.1 pb, respectively. To date, these are the most stringent limits published for the s- and t-channel single top quark production modes.

  14. PROGRESS TOWARDS NEXT GENERATION, WAVEFORM BASED THREE-DIMENSIONAL MODELS AND METRICS TO IMPROVE NUCLEAR EXPLOSION MONITORING IN THE MIDDLE EAST

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Savage, B; Peter, D; Covellone, B

    2009-07-02

    Efforts to update current wave speed models of the Middle East require a thoroughly tested database of sources and recordings. Recordings of seismic waves traversing the region from Tibet to the Red Sea will be the principal metric in guiding improvements to the current wave speed model. Precise characterizations of the earthquakes, specifically depths and faulting mechanisms, are essential to avoid mapping source errors into the refined wave speed model. Errors associated with the source are manifested in amplitude and phase changes. Source depths and paths near nodal planes are particularly error prone, as small changes may severely affect the resulting wavefield. Once sources are quantified, regions requiring refinement will be highlighted using adjoint tomography methods based on spectral element simulations [Komatitsch and Tromp (1999)]. An initial database of 250 regional Middle Eastern events from 1990-2007 was inverted for depth and focal mechanism using teleseismic arrivals [Kikuchi and Kanamori (1982)] and regional surface and body waves [Zhao and Helmberger (1994)]. From this initial database, we reinterpreted a large, well-recorded subset of 201 events through a direct comparison between data and synthetics based upon a centroid moment tensor inversion [Liu et al. (2004)]. Evaluation was done using both a 1D reference model [Dziewonski and Anderson (1981)] at periods greater than 80 seconds and a 3D model [Kustowski et al. (2008)] at periods of 25 seconds and longer. The final source reinterpretations will be within the 3D model, as this is the starting point for the adjoint tomography. Transitioning from a 1D to a 3D wave speed model shows dramatic improvements when comparisons are done at shorter periods (25 s). Synthetics from the 1D model were created through mode summation, while those from the 3D simulations were created using the spectral element method. To further assess errors in source depth and focal mechanism, comparisons between the three methods were made. These comparisons help to identify problematic stations and sources which may bias the final solution. Estimates of standard errors were generated for each event's source depth and focal mechanism to identify poorly constrained events. A final, well-characterized set of sources and stations will then be used to iteratively improve the wave speed model of the Middle East. After a few iterations of the adjoint inversion process, the sources will be reexamined and relocated to further reduce the mapping of source errors into structural features. Finally, efforts continue on developing the infrastructure required to 'quickly' generate event kernels at the n-th iteration and invert for a new, (n+1)-th, wave speed model of the Middle East. While development of the infrastructure proceeds, initial tests using a limited number of events show that the 3D model, while vastly improved compared to the 1D model, still requires substantial modifications. Employing our new, full source set and iterating the adjoint inversions at successively shorter periods will lead to significant changes and refined wave speed structures of the Middle East.

  15. Nonadiabatic Dynamics in Single-Electron Tunneling Devices with Time-Dependent Density-Functional Theory

    NASA Astrophysics Data System (ADS)

    Dittmann, Niklas; Splettstoesser, Janine; Helbig, Nicole

    2018-04-01

    We simulate the dynamics of a single-electron source, modeled as a quantum dot with on-site Coulomb interaction and tunnel coupling to an adjacent lead in time-dependent density-functional theory. Based on this system, we develop a time-nonlocal exchange-correlation potential by exploiting analogies with quantum-transport theory. The time nonlocality manifests itself in a dynamical potential step. We explicitly link the time evolution of the dynamical step to physical relaxation timescales of the electron dynamics. Finally, we discuss prospects for simulations of larger mesoscopic systems.

  16. Nonadiabatic Dynamics in Single-Electron Tunneling Devices with Time-Dependent Density-Functional Theory.

    PubMed

    Dittmann, Niklas; Splettstoesser, Janine; Helbig, Nicole

    2018-04-13

    We simulate the dynamics of a single-electron source, modeled as a quantum dot with on-site Coulomb interaction and tunnel coupling to an adjacent lead in time-dependent density-functional theory. Based on this system, we develop a time-nonlocal exchange-correlation potential by exploiting analogies with quantum-transport theory. The time nonlocality manifests itself in a dynamical potential step. We explicitly link the time evolution of the dynamical step to physical relaxation timescales of the electron dynamics. Finally, we discuss prospects for simulations of larger mesoscopic systems.

  17. A full-angle Monte-Carlo scattering technique including cumulative and single-event Rutherford scattering in plasmas

    NASA Astrophysics Data System (ADS)

    Higginson, Drew P.

    2017-11-01

    We describe and justify a full-angle scattering (FAS) method to faithfully reproduce the accumulated differential angular Rutherford scattering probability distribution function (pdf) of particles in a plasma. The FAS method splits the scattering events into two regions. At small angles it is described by cumulative scattering events resulting, via the central limit theorem, in a Gaussian-like pdf; at larger angles it is described by single-event scatters and retains a pdf that follows the form of the Rutherford differential cross-section. The FAS method is verified using discrete Monte-Carlo scattering simulations run at small timesteps to include each individual scattering event. We identify the FAS regime of interest as where the ratio of the temporal/spatial scale of interest to the slowing-down time/length is from 10⁻³ to 0.3-0.7; the upper limits correspond to Coulomb logarithms of 20 and 2, respectively. Two test problems, high-velocity interpenetrating plasma flows and keV-temperature ion equilibration, are used to highlight systems where including FAS is important to capture the relevant physics.
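
    A toy illustration of the two regimes (not the FAS implementation itself): per timestep, a cumulative small-angle deflection is drawn from a Gaussian, and with small probability a large-angle deflection is drawn directly from the Rutherford form above a cutoff angle by inverse-CDF sampling. The Gaussian width, tail probability and cutoff below are arbitrary illustrative values.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def sample_large_angle(theta_min):
        """Draw a single-event polar angle from the Rutherford form
        p(theta) ~ sin(theta)/sin^4(theta/2) on [theta_min, pi] by inverse CDF."""
        u_min = np.sin(theta_min / 2.0) ** 2          # substitution u = sin^2(theta/2)
        xi = rng.random()
        u = 1.0 / (1.0 / u_min - xi * (1.0 / u_min - 1.0))
        return 2.0 * np.arcsin(np.sqrt(u))

    def scatter_step(sigma_small, p_large, theta_min):
        """One timestep: a Gaussian cumulative deflection plus, with small
        probability, a hard single-event deflection from the Rutherford tail."""
        theta = abs(rng.normal(0.0, sigma_small))     # many unresolved small scatters
        if rng.random() < p_large:
            theta = sample_large_angle(theta_min)     # rare hard collision
        return theta

    angles = [scatter_step(sigma_small=0.02, p_large=1e-3, theta_min=0.1)
              for _ in range(100000)]
    print(np.percentile(angles, [50, 99, 99.9]))
    ```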

  18. Investigation of runoff generation from anthropogenic sources with dissolved xenobiotics

    NASA Astrophysics Data System (ADS)

    Krein, A.; Pailler, J.; Guignard, C.; Iffly, J.; Pfister, L.; Hoffmann, L.

    2009-04-01

    In the experimental Mess basin (35 km², Luxembourg), dissolved xenobiotics in surface water are used to study the influence of anthropogenic sources, such as separated sewer systems, on runoff generation. Emerging contaminants like pharmaceuticals are of growing interest because of their use in large quantities in human and veterinary medicine. The amounts reaching surface waters depend on rainfall patterns, hydraulic conditions, consumption, metabolism, degradation, and disposal. The behaviour of endocrine disruptors, including pharmaceuticals, in the aquatic environment is widely unknown. The twelve molecules analyzed belong to three families: the estrogens, the antibiotics (sulfonamides, tetracyclines), and the painkillers (ibuprofen, diclofenac). Xenobiotics can be used as potential environmental tracers for untreated sewage. Our results show that the concentrations are highly variable during flood events. The highest concentrations are reached in the first-flush period, mainly during the rising limb of the flood hydrographs. As a result of the kinematic wave effect, the concentration peak occurs in some cases a few hours after the discharge maximum. In floodwater (eleven floods, 66 samples) the highest concentrations were measured for ibuprofen (µg/l range), estrone, and diclofenac (both in the ng/l range). From the tetracycline group, essentially tetracycline itself is of relevance, while the sulfonamides are mainly represented by sulfamethoxazole (all in the ng/l range). In the Mess River, the pharmaceutical fluxes during flood events proved to be influenced by hydrological conditions. Different pharmaceuticals showed their concentration peaks at different times of a flood event. An example is the estrone peak that, during summer flash floods, often occurred one to two hours prior to the largest concentrations of the painkillers. This suggests more sources than the sole storm drainage through the spillway of the single sewage water treatment plant, different transport velocities for individual compounds, or the existence of substance-separating buffer storage in the stream network. Under conditions of low-intensity rainfall events and a few days of antecedent dry weather, acute peaks of pollution are discharged into the receiving waters. The influence of housing areas, main roads and sewer systems is obvious; these are characterized by rapid source depletion. Precipitation events of very small intensity and amount often appear as single-peak storm events, which result predominantly from the sealed surfaces of the area. More accurate assessment of pollutant loads entering urban receiving water bodies is needed to improve urban storm water management and meet water quality regulations.

  19. Classification of single-trial auditory events using dry-wireless EEG during real and motion simulated flight.

    PubMed

    Callan, Daniel E; Durantin, Gautier; Terzibas, Cengiz

    2015-01-01

    Application of neuro-augmentation technology based on dry-wireless EEG may be considerably beneficial for aviation and space operations because of the inherent dangers involved. In this study we evaluate classification performance of perceptual events using a dry-wireless EEG system during motion-platform-based flight simulation and actual flight in an open-cockpit biplane, to determine whether the system can be used in the presence of considerable environmental and physiological artifacts. A passive task involving 200 random auditory presentations of a chirp sound was used for evaluation. The advantage of this auditory task is that it does not interfere with the perceptual motor processes involved with piloting the plane. Classification was based on identifying the presentation of a chirp sound vs. silent periods. The use of independent component analysis (ICA) and Kalman filtering to enhance classification performance, by extracting brain activity related to the auditory event from other non-task-related brain activity and artifacts, was also assessed. The results of permutation testing revealed that single-trial classification of the presence or absence of an auditory event was significantly above chance for all conditions on a novel test set. The best performance was achieved with both ICA and Kalman filtering relative to no processing: Platform Off (83.4% vs. 78.3%), Platform On (73.1% vs. 71.6%), Biplane Engine Off (81.1% vs. 77.4%), and Biplane Engine On (79.2% vs. 66.1%). This experiment demonstrates that dry-wireless EEG can be used in environments with considerable vibration, wind, acoustic noise, and physiological artifacts and achieve the good single-trial classification performance that is necessary for future successful application of neuro-augmentation technology based on brain-machine interfaces.

  20. Interpretation of Aura satellite observations of CO and aerosol index related to the December 2006 Australia fires

    NASA Astrophysics Data System (ADS)

    Luo, M.; Boxe, C.; Jiang, J.; Nassar, R.; Livesey, N.

    2009-11-01

    Enhanced carbon monoxide (CO) in the upper troposphere (UT) is shown by collocated Tropospheric Emission Spectrometer (TES) and Microwave Limb Sounder (MLS) measurements near and downwind from the known wildfire region of SE Australia from 12-19 December 2006. Enhanced UV aerosol index (AI) values derived from Ozone Monitoring Instrument (OMI) measurements correlate with these high CO concentrations. HYSPLIT model back trajectories trace selected air parcels to the SE Australia fire region as their initial location, where TES observes enhanced CO in the upper and lower troposphere. At the same time, the trajectories show a lack of vertical advection along their tracks. TES retrieved CO vertical profiles at higher and lower southern latitudes are examined together with the averaging kernels and show that TES CO retrievals are most sensitive at approximately 300-400 hPa. The enhanced CO observed by TES in the upper (215 hPa) and lower (681 hPa) troposphere is, therefore, influenced by mid-tropospheric CO. GEOS-Chem model simulations with an 8-day emission inventory as the wildfire source over Australia are sampled at the TES/MLS observation times and locations. These simulations only show CO enhancements in the lower troposphere near and downwind from the wildfire region of SE Australia, with drastic underestimates of UT CO. Although CloudSat along-track ice-water content curtains are examined to see whether possible vertical convection events can explain the high UT CO values, the sparse collocated Aura CO and CloudSat along-track ice-water content measurements for this single event preclude any conclusive correlation. Vertical convection that lofts fire-induced CO (most notably pyro-cumulonimbus, or pyroCb, events) may explain the discrepancy between these simulations and the TES/MLS observations of enhanced CO in the UT. Future GEOS-Chem simulations are needed to validate this conjecture, as the pyroCb mechanism is currently not incorporated in GEOS-Chem.

  1. Identifiability and identification of trace continuous pollutant source.

    PubMed

    Qu, Hongquan; Liu, Shouwen; Pang, Liping; Hu, Tao

    2014-01-01

    Accidental pollution events often threaten people's health and lives, and rapid identification of the pollutant source is necessary so that prompt remedial actions can be taken. In this paper, a trace continuous pollutant source identification method is developed to identify a sudden continuous-emission pollutant source in an enclosed space. The location probability model is set up first, and the identification is then realized by searching for the global optimum of the location probability. In order to discuss the identifiability performance of the presented method, the concept of a synergy degree of velocity fields is introduced to quantitatively analyze the impact of the velocity field on identification performance. Based on this concept, several simulation cases were conducted, and the application conditions of the method were obtained from these simulation studies. In order to verify the presented method, we designed an experiment and identified an unknown source appearing in the experimental space. The result showed that the method can identify a sudden trace continuous source when the studied situation satisfies the application conditions.

  2. Modeling surface backgrounds from radon progeny plate-out

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perumpilly, G.; Guiseppe, V. E.; Snyder, N.

    2013-08-08

    The next generation of low-background detectors operating deep underground aims for unprecedentedly low levels of radioactive backgrounds. The surface deposition and subsequent implantation of radon progeny in detector materials will be a source of energetic background events. We investigate Monte Carlo and model-based simulations to understand the surface implantation profile of radon progeny. Depending on the material and the region of interest of a rare event search, these partial energy depositions can be problematic. Motivated by the use of Ge crystals for the detection of neutrinoless double-beta decay, we wish to understand the detector response to surface backgrounds from radon progeny. We look at the simulation of surface decays using a validated implantation distribution based on nuclear recoils and a realistic surface texture. Results of the simulations and measured α spectra are presented.

  3. The distributed production system of the SuperB project: description and results

    NASA Astrophysics Data System (ADS)

    Brown, D.; Corvo, M.; Di Simone, A.; Fella, A.; Luppi, E.; Paoloni, E.; Stroili, R.; Tomassetti, L.

    2011-12-01

    The SuperB experiment needs large samples of Monte Carlo simulated events in order to finalize the detector design and to estimate the data analysis performance. The requirements are beyond the capabilities of a single computing farm, so a distributed production model capable of exploiting the existing HEP worldwide distributed computing infrastructure is needed. In this paper we describe the set of tools that have been developed to manage the production of the required simulated events. The production of events follows three main phases: distribution of input data files to the remote site Storage Elements (SE); job submission, via the SuperB GANGA interface, to all available remote sites; and transfer of output files to the CNAF repository. The job workflow includes procedures for consistency checking, monitoring, data handling and bookkeeping. A replication mechanism allows storing the job output on the local site SE. Results from the 2010 official productions are reported.

  4. Systems Operation Studies for Automated Guideway Transit Systems : System Availability Model User's Manual

    DOT National Transportation Integrated Search

    1981-01-01

    The System Availability Model (SAM) is a system-level model which provides measures of vehicle and passenger availability. The SAM operates in conjunction with the AGT discrete Event Simulation Model (DESM). The DESM output is the normal source of th...

  5. Exercises in Persuasion.

    ERIC Educational Resources Information Center

    Schenck-Hamlin, William J.; And Others

    The 35 exercises presented in this paper have been designed to simulate real-life experiences involving the process of persuasion and to enhance understanding of the persuasive process. Among the aspects of the persuasive process dealt with are the identification of persuasive events, emotive language, language intensity, source credibility,…

  6. Adaptive Information Dissemination Control to Provide Diffdelay for the Internet of Things.

    PubMed

    Liu, Xiao; Liu, Anfeng; Huang, Changqin

    2017-01-12

    Applications running on the Internet of Things, such as the Wireless Sensor and Actuator Networks (WSANs) platform, generally have different quality of service (QoS) requirements. For urgent events, it is crucial that information be reported to the actuator quickly, and the communication cost is a secondary factor. However, for interesting events, communication cost, network lifetime and time all become important factors. In most situations, these different requirements cannot be satisfied simultaneously. In this paper, an adaptive communication control based on a differentiated delay (ACCDS) scheme is proposed to resolve this conflict. In an ACCDS, source nodes of events adaptively send varying numbers of searching actuators routings (SARs) based on the degree of sensitivity to delay while maintaining the network lifetime. For a delay-sensitive event, the source node sends a large number of SARs to actuators to identify and inform the actuators in an extremely short time; thus, action can be taken quickly but at higher communication cost. For delay-insensitive events, the source node sends fewer SARs to reduce communication costs and improve network lifetime. Therefore, an ACCDS can meet the QoS requirements of different events using a differentiated delay framework. Theoretical analysis and simulation results indicate that an ACCDS provides delay- and communication-cost-differentiated services; an ACCDS scheme can reduce the network delay by 11.111%-53.684% for a delay-sensitive event, reduce the communication costs by 5%-22.308% for interesting events, and reduce the network lifetime by about 28.713%.

  7. Adaptive Information Dissemination Control to Provide Diffdelay for the Internet of Things

    PubMed Central

    Liu, Xiao; Liu, Anfeng; Huang, Changqin

    2017-01-01

    Applications running on the Internet of Things, such as the Wireless Sensor and Actuator Networks (WSANs) platform, generally have different quality of service (QoS) requirements. For urgent events, it is crucial that information be reported to the actuator quickly, and the communication cost is a secondary factor. However, for interesting events, communication cost, network lifetime and time all become important factors. In most situations, these different requirements cannot be satisfied simultaneously. In this paper, an adaptive communication control based on a differentiated delay (ACCDS) scheme is proposed to resolve this conflict. In an ACCDS, source nodes of events adaptively send varying numbers of searching actuators routings (SARs) based on the degree of sensitivity to delay while maintaining the network lifetime. For a delay-sensitive event, the source node sends a large number of SARs to actuators to identify and inform the actuators in an extremely short time; thus, action can be taken quickly but at higher communication cost. For delay-insensitive events, the source node sends fewer SARs to reduce communication costs and improve network lifetime. Therefore, an ACCDS can meet the QoS requirements of different events using a differentiated delay framework. Theoretical analysis and simulation results indicate that an ACCDS provides delay- and communication-cost-differentiated services; an ACCDS scheme can reduce the network delay by 11.111%–53.684% for a delay-sensitive event, reduce the communication costs by 5%–22.308% for interesting events, and reduce the network lifetime by about 28.713%. PMID:28085097

  8. Building a Database for a Quantitative Model

    NASA Technical Reports Server (NTRS)

    Kahn, C. Joseph; Kleinhammer, Roger

    2014-01-01

    A database can greatly benefit a quantitative analysis. The defining characteristic of a quantitative risk, or reliability, model is the use of failure estimate data. Models can easily contain a thousand Basic Events, relying on hundreds of individual data sources. Obviously, entering so much data by hand will eventually lead to errors. Less obviously, entering data this way does not help link the Basic Events to their data sources. The best way to organize large amounts of data on a computer is with a database. But a model does not require a large, enterprise-level database with dedicated developers and administrators; a database built in Excel can be quite sufficient. A simple spreadsheet database can link every Basic Event to the individual data source selected for it, as sketched below. This database can also contain the manipulations appropriate for how the data are used in the model. These manipulations include stressing factors based on use and maintenance cycles, dormancy, unique failure modes, the modeling of multiple items as a single "Super component" Basic Event, and Bayesian updating based on flight and testing experience. A simple, unique metadata field in both the model and the database provides a link from any Basic Event in the model to its data source and all relevant calculations. The credibility of the entire model often rests on the credibility and traceability of the data.
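
    A minimal sketch of the linking idea: a unique metadata key ties each Basic Event in the model to its data-source record and to the manipulations applied before use. The dictionary below stands in for the spreadsheet, and the citation, rates and factors are hypothetical.

    ```python
    # Each record links one Basic Event (via a unique metadata key) to its
    # data source and the manipulations applied before use in the model.
    data_sources = {
        "BE-PUMP-01": {
            "source": "reliability handbook, centrifugal pump entry (hypothetical)",
            "base_rate": 1.2e-5,        # failures per hour from the source
            "stress_factor": 2.0,       # duty-cycle / environment adjustment
            "dormancy_factor": 0.1,     # applied when the item is not operating
        },
    }

    def basic_event_rate(key, dormant=False):
        """Return the failure rate actually used in the model for a Basic Event."""
        rec = data_sources[key]
        factor = rec["dormancy_factor"] if dormant else rec["stress_factor"]
        return rec["base_rate"] * factor

    print(basic_event_rate("BE-PUMP-01"), basic_event_rate("BE-PUMP-01", dormant=True))
    ```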

  9. Detection and characterization of debris avalanche and pyroclastic flow dynamics from the simulation of the seismic signal they generate: application to Montserrat, Lesser Antilles

    NASA Astrophysics Data System (ADS)

    Zhao, J.; Mangeney, A.; Moretti, L.; Stutzmann, E.; Calder, E. S.; Smith, P. J.; Capdeville, Y.; Le Friant, A.; Cole, P.; Luckett, R.; Robertson, R.

    2011-12-01

    Gravitational instabilities such as debris avalanches or pyroclastic flows represent one of the major natural hazards for populations living in mountainous or volcanic areas. Detection and understanding of the dynamics of these events are crucial for risk assessment. Furthermore, during an eruption, a series of explosions and gravitational flows can occur, making it difficult to retrieve the characteristics of the individual gravitational events such as their volume, velocity, etc. In this context, the seismic signal generated by these events provides a unique tool to extract information on the history of the eruptive process and to validate gravitational flow models. We analyze here a series of events including explosions, a debris avalanche and pyroclastic flows occurring in Montserrat in December 1997. The seismic signal is composed of six main pulses. The characteristics of the seismic signals generated by pyroclastic flows (amplitude, emergent onset, frequency spectrum, etc.) are described and linked to the volume of the individual events estimated from past field surveys. As a first step, we simulate the waveform of each event by assuming that the generation process reduces to a simple force applied at the surface of the topography. Going further, we perform a detailed numerical simulation of the Boxing Day debris avalanche and of the following pyroclastic flow using a landslide model able to take into account the 3D topography. The stress field generated by the gravitational flows on the topography is then applied as a surface boundary condition in a wave propagation model, making it possible to simulate the seismic signal generated by the avalanche and pyroclastic flow. Comparison between the simulated signal and the seismic signal recorded at the Puerto Rico seismic station, located 450 km away from the source, shows that this method allows us to reproduce the low-frequency seismic signal and to constrain the volume and frictional behavior of the individual events. As a result, simulation of seismic signals generated by gravitational flows provides insight into the history of eruptive sequences and into the characteristics of the individual events.

  10. Utilizing Machine Learning for Analysis of Tiara for Texas

    NASA Astrophysics Data System (ADS)

    van Slycke, Jacqueline; Christian, Greg

    2017-09-01

    The Tiara for Texas detector at Texas A&M University consists of a target chamber housing an array of silicon detectors, surrounded by four high-purity germanium clovers that generate voltage pulses proportional to detected gamma-ray energies. While some gamma rays are fully absorbed in a single detector, contributing to the photopeak, others undergo Compton scattering between detectors. This process is thoroughly simulated in GEANT4. Machine learning with scikit-learn allows the reconstruction of scattered photons back to the original energy of the incident gamma ray. In a given simulation, a defined number of rays are emitted from the source. Each ray is marked as an event and its path is tracked. The events' paths are used in scikit-learn to train an algorithm that recognizes which events should be summed to reconstruct the full gamma-ray energy; additional events are used to test the algorithm. The predictions are not exact, but they were analyzed to further understand the discrepancies and increase the effectiveness of the simulation. The results from this research project compare various machine learning techniques to determine which methods should be expanded on in the future. National Science Foundation Grant PHY-1659847 and United States Department of Energy Grant DE-FG02-93ER40773.
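
    A minimal sketch of the add-back idea with scikit-learn, under stated assumptions: per-pair features built from the simulated hit information are used to train a classifier that decides whether two clover hits belong to the same incident gamma ray, and linked hit energies are then summed. The feature set, the toy labels and the random-forest choice are illustrative, not the project's actual pipeline.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    # Each row: features of a pair of clover hits from the simulation
    # (e.g. hit energies, angular separation, time difference); label = 1 if
    # both hits came from the same incident gamma ray. Synthetic stand-ins here.
    rng = np.random.default_rng(0)
    X = rng.random((5000, 4))
    y = (X[:, 2] < 0.3).astype(int)        # toy labelling rule for illustration

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
    print("pair-linking accuracy:", clf.score(X_te, y_te))

    def addback(hits, pair_features, clf):
        """Sum hit energies whenever the classifier links a pair of hits."""
        total = list(hits)
        for (i, j), feats in pair_features.items():
            if clf.predict([feats])[0] == 1:
                total[i] += hits[j]
                total[j] = 0.0
        return [e for e in total if e > 0.0]
    ```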

  11. Limits on the Efficiency of Event-Based Algorithms for Monte Carlo Neutron Transport

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Romano, Paul K.; Siegel, Andrew R.

    The traditional form of parallelism in Monte Carlo particle transport simulations, wherein each individual particle history is considered a unit of work, does not lend itself well to data-level parallelism. Event-based algorithms, which were originally used for simulations on vector processors, may offer a path toward better utilizing data-level parallelism in modern computer architectures. In this study, a simple model is developed for estimating the efficiency of the event-based particle transport algorithm under two sets of assumptions. Data collected from simulations of four reactor problems using OpenMC were then used in conjunction with the models to calculate the speedup due to vectorization as a function of two parameters: the size of the particle bank and the vector width. When each event type is assumed to have constant execution time, the achievable speedup is directly related to the particle bank size. We observed that the bank size generally needs to be at least 20 times greater than the vector size in order to achieve a vector efficiency greater than 90%. When the execution times for events are allowed to vary, however, the vector speedup is also limited by differences in execution time for events being carried out in a single event-iteration. For some problems, this implies that vector efficiencies over 50% may not be attainable. While there are many factors impacting the performance of an event-based algorithm that are not captured by our model, it nevertheless provides insights into factors that may be limiting in a real implementation.
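
    A toy version of the constant-execution-time case: each particle in the bank needs a random number of events, every iteration the surviving particles are processed in whole vector passes, and partially filled vectors waste lanes. This reproduces the qualitative dependence of vector efficiency on bank size and vector width; the geometric event-count distribution is an assumption for illustration, not the model used in the study.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def vector_efficiency(bank_size, vector_width, mean_events=50):
        """Fraction of vector lanes doing useful work over a bank of histories."""
        remaining = rng.geometric(1.0 / mean_events, size=bank_size)  # events per particle
        useful = remaining.sum()
        lanes = 0
        while (alive := int((remaining > 0).sum())) > 0:
            passes = -(-alive // vector_width)        # ceil(alive / vector_width)
            lanes += passes * vector_width            # lanes occupied, filled or not
            remaining = np.maximum(remaining - 1, 0)  # every live particle advances one event
        return useful / lanes

    for bank in (8, 64, 160, 320, 1280):
        print(bank, round(vector_efficiency(bank, vector_width=16), 3))
    ```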

  12. Simulation of Runoff Concentration on Arable Fields and the Impact of Adapted Tillage Practises

    NASA Astrophysics Data System (ADS)

    Winter, F.; Disse, M.

    2012-04-01

    Conservation tillage can reduce runoff on arable fields. Due to crop residues remaining on the fields, a seasonally constant ground cover is achieved. This additional soil cover not only decreases the drying of the topsoil but also reduces the mechanical impact of raindrops and the soil crust that may result. Further implications of the mulch layer can be observed during heavy precipitation events with surface runoff: the natural roughness of the ground surface is increased and thus the flow velocity is decreased, resulting in an enhanced ability of runoff to infiltrate into the soil (so-called runon infiltration). The hydrological model system WaSiM-ETH hitherto simulates runoff concentration by a flow-time grid in the catchment, which is derived from topographical features of the catchment during preprocessing. The retention of both surface runoff and interflow is modelled by a single reservoir in every discrete flow-time zone until the outlet of a subcatchment is reached. For a more detailed analysis of the flow paths in catchments of the lower mesoscale (< 1 km²), the model was extended by a kinematic wave approach for the surface runoff concentration. This allows small-scale variations in runoff generation and their temporal distribution to be simulated in detail, so that adapted tillage systems can be assessed. On individual fields of the Scheyern research farm north-west of Munich it can be shown how different crops and tillage practices influence runoff generation and concentration during single heavy precipitation events. From the simulation of individual events in agricultural areas of the lower mesoscale, hydrologically susceptible areas can be identified and the positive impact of adapted agricultural management on runoff generation and concentration can be quantified.
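
    A one-dimensional illustration of the kinematic-wave idea behind the surface-runoff extension (not WaSiM-ETH code): flow depth on a plane is updated with an explicit upwind scheme, discharge follows Manning's relation, and a constant infiltration rate removes water along the flow path, mimicking runon infiltration. Grid, roughness and rates are arbitrary illustrative values.

    ```python
    import numpy as np

    def kinematic_wave(n_manning=0.1, slope=0.05, dx=1.0, dt=0.5,
                       rain=1e-5, infil=4e-6, nx=100, nt=2000):
        """Explicit upwind kinematic-wave sheet flow on a plane:
        dh/dt + dq/dx = rain - infiltration,  q = h**(5/3) * sqrt(S) / n."""
        h = np.zeros(nx)                      # flow depth (m) along the slope
        q = np.zeros(nx)
        for _ in range(nt):
            q = h ** (5.0 / 3.0) * np.sqrt(slope) / n_manning
            # upwind spatial derivative; no inflow at the upslope boundary
            dqdx = np.diff(np.concatenate(([0.0], q))) / dx
            h = np.maximum(h + dt * (rain - infil - dqdx), 0.0)
        return h, q

    h, q = kinematic_wave()
    print("outlet discharge per unit width (m^2/s):", q[-1])
    ```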

  13. Assessing manure management strategies through small-plot research and whole-farm modeling

    USGS Publications Warehouse

    Garcia, A.M.; Veith, T.L.; Kleinman, P.J.A.; Rotz, C.A.; Saporito, L.S.

    2008-01-01

    Plot-scale experimentation can provide valuable insight into the effects of manure management practices on phosphorus (P) runoff, but whole-farm evaluation is needed for a complete assessment of potential trade-offs. Artificially applied rainfall experiments on small field plots, together with event-based and long-term simulation modeling, were used to compare P loss in runoff for two dairy manure application methods (surface application with and without incorporation by tillage) on contrasting Pennsylvania soils previously under no-till management. Results of single-event rainfall experiments indicated that average dissolved reactive P losses in runoff from manured plots decreased by up to 90% with manure incorporation, while total P losses did not change significantly. Longer-term whole-farm simulation modeling indicated that average dissolved reactive P losses would decrease by 8% with manure incorporation while total P losses would increase by 77% due to greater erosion from fields previously under no-till. Differences between the two methods of inference point to the need for caution in extrapolating research findings. Single-event rainfall experiments conducted shortly after manure application simulate incidental transfers of dissolved P in manure to runoff, resulting in greater losses of dissolved reactive P. However, the transfer of dissolved P in applied manure diminishes with time. Over the annual time frame simulated by whole-farm modeling, erosion processes become more important to runoff P losses. Results of this study highlight the need to consider the potential for increased erosion and total P losses caused by soil disturbance during incorporation. This study emphasizes the ability of modeling to estimate management practice effectiveness at larger scales when experimental data are not available.

  14. Could the Hokusai Impact Have Delivered Mercury's Water Ice?

    NASA Astrophysics Data System (ADS)

    Ernst, C. M.; Chabot, N. L.; Barnouin, O. S.

    2018-05-01

    Hokusai is the best candidate source crater for Mercury's water-ice inventory if it was primarily delivered by a single impact event. The Hokusai impact could account for the inventory of water ice on Mercury for impact velocities <30 km/s.

  15. Hadronic energy resolution of a highly granular scintillator-steel hadron calorimeter using software compensation techniques

    NASA Astrophysics Data System (ADS)

    Adloff, C.; Blaha, J.; Blaising, J.-J.; Drancourt, C.; Espargilière, A.; Gaglione, R.; Geffroy, N.; Karyotakis, Y.; Prast, J.; Vouters, G.; Francis, K.; Repond, J.; Smith, J.; Xia, L.; Baldolemar, E.; Li, J.; Park, S. T.; Sosebee, M.; White, A. P.; Yu, J.; Buanes, T.; Eigen, G.; Mikami, Y.; Watson, N. K.; Goto, T.; Mavromanolakis, G.; Thomson, M. A.; Ward, D. R.; Yan, W.; Benchekroun, D.; Hoummada, A.; Khoulaki, Y.; Benyamna, M.; Cârloganu, C.; Fehr, F.; Gay, P.; Manen, S.; Royer, L.; Blazey, G. C.; Dyshkant, A.; Lima, J. G. R.; Zutshi, V.; Hostachy, J.-Y.; Morin, L.; Cornett, U.; David, D.; Falley, G.; Gadow, K.; Göttlicher, P.; Günter, C.; Hermberg, B.; Karstensen, S.; Krivan, F.; Lucaci-Timoce, A.-I.; Lu, S.; Lutz, B.; Morozov, S.; Morgunov, V.; Reinecke, M.; Sefkow, F.; Smirnov, P.; Terwort, M.; Vargas-Trevino, A.; Feege, N.; Garutti, E.; Marchesini, I.; Ramilli, M.; Eckert, P.; Harion, T.; Kaplan, A.; Schultz-Coulon, H.-Ch; Shen, W.; Stamen, R.; Tadday, A.; Bilki, B.; Norbeck, E.; Onel, Y.; Wilson, G. W.; Kawagoe, K.; Dauncey, P. D.; Magnan, A.-M.; Wing, M.; Salvatore, F.; Calvo Alamillo, E.; Fouz, M.-C.; Puerta-Pelayo, J.; Balagura, V.; Bobchenko, B.; Chadeeva, M.; Danilov, M.; Epifantsev, A.; Markin, O.; Mizuk, R.; Novikov, E.; Rusinov, V.; Tarkovsky, E.; Kirikova, N.; Kozlov, V.; Smirnov, P.; Soloviev, Y.; Buzhan, P.; Dolgoshein, B.; Ilyin, A.; Kantserov, V.; Kaplin, V.; Karakash, A.; Popova, E.; Smirnov, S.; Kiesling, C.; Pfau, S.; Seidel, K.; Simon, F.; Soldner, C.; Szalay, M.; Tesar, M.; Weuste, L.; Bonis, J.; Bouquet, B.; Callier, S.; Cornebise, P.; Doublet, Ph; Dulucq, F.; Faucci Giannelli, M.; Fleury, J.; Li, H.; Martin-Chassard, G.; Richard, F.; de la Taille, Ch; Pöschl, R.; Raux, L.; Seguin-Moreau, N.; Wicek, F.; Anduze, M.; Boudry, V.; Brient, J.-C.; Jeans, D.; Mora de Freitas, P.; Musat, G.; Reinhard, M.; Ruan, M.; Videau, H.; Bulanek, B.; Zacek, J.; Cvach, J.; Gallus, P.; Havranek, M.; Janata, M.; Kvasnicka, J.; Lednicky, D.; Marcisovsky, M.; Polak, I.; Popule, J.; Tomasek, L.; Tomasek, M.; Ruzicka, P.; Sicho, P.; Smolik, J.; Vrba, V.; Zalesak, J.; Belhorma, B.; Ghazlane, H.; Takeshita, T.; Uozumi, S.; Sauer, J.; Weber, S.; Zeitnitz, C.

    2012-09-01

    The energy resolution of a highly granular 1 m³ analogue scintillator-steel hadronic calorimeter is studied using charged pions with energies from 10 GeV to 80 GeV at the CERN SPS. The energy resolution for single hadrons is determined to be approximately 58%/√E/GeV. This resolution is improved to approximately 45%/√E/GeV with software compensation techniques. These techniques take advantage of the event-by-event information about the substructure of hadronic showers which is provided by the imaging capabilities of the calorimeter. The energy reconstruction is improved either with corrections based on the local energy density or by applying a single correction factor to the event energy sum derived from a global measure of the shower energy density. The application of the compensation algorithms to geant4 simulations yields resolution improvements comparable to those observed for real data.
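
    The local software-compensation idea described above can be illustrated with a short sketch: each calorimeter hit is reweighted according to its energy density, suppressing the response to dense (electromagnetic-like) sub-showers relative to sparse (hadronic-like) ones before summing the event energy. The density bin edges and weights below are hypothetical placeholders; in the actual analysis they are derived from simulation and test-beam data and parameterized against the shower energy.

    import numpy as np

    def local_software_compensation(hit_energies, cell_volume_cm3=1.0,
                                    rho_edges=(2.0, 5.0, 10.0),      # hypothetical density bin edges
                                    weights=(1.3, 1.1, 0.9, 0.8)):   # hypothetical per-bin weights
        """Reweight each hit by its energy density before summing the event energy.
        Dense hits get smaller weights, sparse hits get larger ones, which flattens
        the difference in response to electromagnetic and hadronic shower components."""
        e = np.asarray(hit_energies, dtype=float)
        rho = e / cell_volume_cm3            # energy density per cell
        idx = np.digitize(rho, rho_edges)    # density bin index for each hit
        return float(np.sum(np.asarray(weights)[idx] * e))

    # Example: a toy event with a few dense and many sparse hits (GeV).
    print(local_software_compensation([0.8, 0.6, 12.0, 7.5, 0.3, 0.2]))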

  16. Genetic consequences of sequential founder events by an island-colonizing bird.

    PubMed

    Clegg, Sonya M; Degnan, Sandie M; Kikkawa, Jiro; Moritz, Craig; Estoup, Arnaud; Owens, Ian P F

    2002-06-11

    The importance of founder events in promoting evolutionary changes on islands has been a subject of long-running controversy. Resolution of this debate has been hindered by a lack of empirical evidence from naturally founded island populations. Here we undertake a genetic analysis of a series of historically documented, natural colonization events by the silvereye species-complex (Zosterops lateralis), a group used to illustrate the process of island colonization in the original founder effect model. Our results indicate that single founder events do not affect levels of heterozygosity or allelic diversity, nor do they result in immediate genetic differentiation between populations. Instead, four to five successive founder events are required before indices of diversity and divergence approach that seen in evolutionarily old forms. A Bayesian analysis based on computer simulation allows inferences to be made on the number of effective founders and indicates that founder effects are weak because island populations are established from relatively large flocks. Indeed, statistical support for a founder event model was not significantly higher than for a gradual-drift model for all recently colonized islands. Taken together, these results suggest that single colonization events in this species complex are rarely accompanied by severe founder effects, and multiple founder events and/or long-term genetic drift have been of greater consequence for neutral genetic diversity.
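
    The qualitative result above, that a single founder event barely erodes diversity when the founding flock is large while several sequential events do, can be reproduced with a minimal allele-sampling sketch. This is not the paper's Bayesian inference; the founder flock size of 50 birds and the ten-allele locus are hypothetical illustration values.

    import numpy as np

    rng = np.random.default_rng(1)

    def expected_heterozygosity(allele_freqs):
        return 1.0 - np.sum(np.asarray(allele_freqs) ** 2)

    def founder_event(allele_freqs, n_founders, rng):
        """Sample 2*n_founders gene copies from the source population and
        return the allele frequencies in the newly founded population."""
        counts = rng.multinomial(2 * n_founders, allele_freqs)
        return counts / counts.sum()

    # Source population with 10 equally common alleles at one locus.
    freqs = np.full(10, 0.1)
    print("source H =", round(expected_heterozygosity(freqs), 3))

    # Chain of sequential colonizations, each founded by a flock of 50 birds.
    for step in range(1, 6):
        freqs = founder_event(freqs, n_founders=50, rng=rng)
        freqs = freqs[freqs > 0]        # drop alleles lost in the bottleneck
        print(f"after founder event {step}: H = "
              f"{expected_heterozygosity(freqs):.3f}, alleles = {freqs.size}")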

  17. Atomistic simulation of shocks in single crystal and polycrystalline Ta

    NASA Astrophysics Data System (ADS)

    Bringa, E. M.; Higginbotham, A.; Park, N.; Tang, Y.; Suggit, M.; Mogni, G.; Ruestes, C. J.; Hawreliak, J.; Erhart, P.; Meyers, M. A.; Wark, J. S.

    2011-06-01

    Non-equilibrium molecular dynamics (MD) simulations of shocks in Ta single crystals and polycrystals were carried out using up to 360 million atoms. Several EAM and FS type potentials were tested up to 150 GPa, with varying success in reproducing the Hugoniot and the behavior of elastic constants under pressure. Phonon modes were studied to exclude possible plasticity nucleation by soft-phonon modes, as observed in MD simulations of Cu crystals. The effect of loading rise time on the resulting microstructure was studied for ramps up to 0.2 ns long. Dislocation activity was not observed in single crystals, unless there were defects acting as dislocation sources above a certain pressure. E.M.B. was funded by CONICET, Agencia Nacional de Ciencia y Tecnología (PICT2008-1325), and a Royal Society International Joint Project award.

  18. Computer Simulation for Calculating the Second-Order Correlation Function of Classical and Quantum Light

    ERIC Educational Resources Information Center

    Facao, M.; Lopes, A.; Silva, A. L.; Silva, P.

    2011-01-01

    We propose an undergraduate numerical project for simulating the results of the second-order correlation function as obtained by an intensity interference experiment for two kinds of light, namely bunched light with Gaussian or Lorentzian power density spectrum and antibunched light obtained from single-photon sources. While the algorithm for…
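
    A sketch of the kind of computation such a project involves is shown below: the normalized second-order correlation is estimated directly from a simulated intensity record. Here the bunched (chaotic) light is modeled as a complex Gaussian field with a Lorentzian spectrum via an AR(1) process; this is an illustrative assumption, and the antibunched single-photon case, which requires a photon-emission model, is not shown.

    import numpy as np

    rng = np.random.default_rng(0)

    # Chaotic (bunched) light: intensity of a complex Gaussian field with a
    # Lorentzian spectrum, generated by an Ornstein-Uhlenbeck (AR(1)) process.
    n, dt, tau_c = 100_000, 1.0, 20.0          # samples, time step, coherence time
    a = np.exp(-dt / tau_c)
    noise = (rng.normal(size=n) + 1j * rng.normal(size=n)) * np.sqrt(1 - a**2)
    field = np.empty(n, dtype=complex)
    field[0] = rng.normal() + 1j * rng.normal()
    for i in range(1, n):
        field[i] = a * field[i - 1] + noise[i]
    intensity = np.abs(field) ** 2

    def g2(intensity, max_lag):
        """Normalized second-order correlation g2(tau) = <I(t) I(t+tau)> / <I>^2."""
        mean_sq = intensity.mean() ** 2
        return np.array([np.mean(intensity[:-lag or None] * intensity[lag:]) / mean_sq
                         for lag in range(max_lag)])

    corr = g2(intensity, max_lag=100)
    print("g2(0) ≈", round(corr[0], 2))   # ~2 for chaotic light, 1 for coherent, <1 for antibunched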

  19. Atmospheric simulator and calibration system for remote sensing radiometers

    NASA Technical Reports Server (NTRS)

    Holland, J. A.

    1983-01-01

    A system for calibrating the MAPS (measurement of air pollution from satellites) instruments was developed. The design of the system provides a capability for simulating a broad range of radiant energy source temperatures and a broad range of atmospheric pressures, temperatures, and pollutant concentrations for a single slab atmosphere. The system design and the system operation are described.

  20. Multiplicity counting from fission detector signals with time delay effects

    NASA Astrophysics Data System (ADS)

    Nagy, L.; Pázsit, I.; Pál, L.

    2018-03-01

    In recent work, we have developed the theory of using the first three auto- and joint central moments of the currents of up to three fission chambers to extract the singles, doubles and triples count rates of traditional multiplicity counting (Pázsit and Pál, 2016; Pázsit et al., 2016). The objective is to elaborate a method for determining the fissile mass, neutron multiplication, and (α, n) neutron emission rate of an unknown assembly of fissile material from the statistics of the fission chamber signals, analogous to the traditional multiplicity counting methods with detectors in the pulse mode. Such a method would be an alternative to He-3 detector systems, which would be free from the dead time problems that would be encountered in high counting rate applications, for example the assay of spent nuclear fuel. A significant restriction of our previous work was that all neutrons born in a source event (spontaneous fission) were assumed to be detected simultaneously, which is not fulfilled in reality. In the present work, this restriction is eliminated, by assuming an independent, identically distributed random time delay for all neutrons arising from one source event. Expressions are derived for the same auto- and joint central moments of the detector current(s) as in the previous case, expressed with the singles, doubles, and triples (S, D and T) count rates. It is shown that if the time-dispersion of neutron detections is of the same order of magnitude as the detector pulse width, as they typically are in measurements of fast neutrons, the multiplicity rates can still be extracted from the moments of the detector current, although with more involved calibration factors. The presented formulae, and hence also the performance of the proposed method, are tested by both analytical models of the time delay as well as with numerical simulations. Methods are suggested also for the modification of the method for large time delay effects (for thermalised neutrons).

  1. Transition model for ricin-aptamer interactions with multiple pathways and energy barriers

    NASA Astrophysics Data System (ADS)

    Wang, Bin; Xu, Bingqian

    2014-02-01

    We develop a transition model to interpret single-molecule ricin-aptamer interactions with multiple unbinding pathways and energy barriers measured by atomic force microscopy dynamic force spectroscopy. Molecular simulations establish the relationship between binding conformations and the corresponding unbinding pathways. Each unbinding pathway follows a Bell-Evans multiple-barrier model. Markov-type transition matrices are developed to analyze the redistribution of unbinding events among the pathways under different loading rates. Our study provides detailed information about complex behaviors in ricin-aptamer unbinding events.
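
    For reference, the single-barrier Bell-Evans relation that each unbinding pathway is assumed to follow predicts a most probable rupture force that grows linearly with the logarithm of the loading rate; a different slope and intercept for each pathway is what allows pathways to be distinguished in dynamic force spectroscopy. The parameter values below are hypothetical, and the paper's Markov-type transition matrices between pathways are not reproduced here.

    import numpy as np

    KBT = 4.11e-21   # thermal energy at ~298 K, in joules

    def bell_evans_force(loading_rate, k_off, x_beta):
        """Most probable unbinding force for a single barrier (Bell-Evans model):
        F* = (kT / x_beta) * ln(r * x_beta / (k_off * kT)), with r in N/s, x_beta in m."""
        return (KBT / x_beta) * np.log(loading_rate * x_beta / (k_off * KBT))

    # Hypothetical two-pathway example: each pathway has its own k_off and x_beta,
    # so F* versus log(loading rate) falls on two different straight lines.
    rates = np.logspace(2, 5, 4) * 1e-12       # 100 pN/s to 100 nN/s, in N/s
    for k_off, x_beta, label in [(0.5, 0.5e-9, "pathway A"), (1.0, 0.2e-9, "pathway B")]:
        forces_pN = bell_evans_force(rates, k_off, x_beta) * 1e12
        print(label, np.round(forces_pN, 1), "pN")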

  2. Reconstructed Image Spatial Resolution of Multiple Coincidences Compton Imager

    NASA Astrophysics Data System (ADS)

    Andreyev, Andriy; Sitek, Arkadiusz; Celler, Anna

    2010-02-01

    We study the multiple coincidences Compton imager (MCCI) which is based on a simultaneous acquisition of several photons emitted in cascade from a single nuclear decay. Theoretically, this technique should provide a major improvement in localization of a single radioactive source as compared to a standard Compton camera. In this work, we investigated the performance and limitations of MCCI using Monte Carlo computer simulations. Spatial resolutions of the reconstructed point source have been studied as a function of the MCCI parameters, including geometrical dimensions and detector characteristics such as materials, energy and spatial resolutions.

  3. Towards real-time regional earthquake simulation I: real-time moment tensor monitoring (RMT) for regional events in Taiwan

    NASA Astrophysics Data System (ADS)

    Lee, Shiann-Jong; Liang, Wen-Tzong; Cheng, Hui-Wen; Tu, Feng-Shan; Ma, Kuo-Fong; Tsuruoka, Hiroshi; Kawakatsu, Hitoshi; Huang, Bor-Shouh; Liu, Chun-Chi

    2014-01-01

    We have developed a real-time moment tensor monitoring system (RMT) which takes advantage of a grid-based moment tensor inversion technique and real-time broad-band seismic recordings to automatically monitor earthquake activities in the vicinity of Taiwan. The centroid moment tensor (CMT) inversion technique and a grid search scheme are applied to obtain earthquake source parameters, including the event origin time, hypocentral location, moment magnitude and focal mechanism. All of these source parameters can be determined simultaneously within 117 s after the occurrence of an earthquake. The monitoring area involves the entire Taiwan Island and the offshore region, which covers the area of 119.3°E to 123.0°E and 21.0°N to 26.0°N, with a depth from 6 to 136 km. A 3-D grid system is implemented in the monitoring area with a uniform horizontal interval of 0.1° and a vertical interval of 10 km. The inversion procedure is based on a 1-D Green's function database calculated by the frequency-wavenumber (fk) method. We compare our results with the Central Weather Bureau (CWB) catalogue data for earthquakes that occurred between 2010 and 2012. The average differences in event origin time and hypocentral location are less than 2 s and 10 km, respectively. The focal mechanisms determined by RMT are also comparable with the Broadband Array in Taiwan for Seismology (BATS) CMT solutions. These results indicate that the RMT system is feasible and efficient for monitoring local seismic activity. In addition, the time needed to obtain all the point source parameters is reduced substantially compared to routine earthquake reports. By connecting RMT with a real-time online earthquake simulation (ROS) system, all the source parameters will be forwarded to the ROS to make the real-time earthquake simulation feasible. The RMT has operated offline (2010-2011) and online (from January 2012 to the present) at the Institute of Earth Sciences (IES), Academia Sinica (http://rmt.earth.sinica.edu.tw). The long-term goal of this system is to provide real-time source information for rapid seismic hazard assessment during large earthquakes.
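
    The grid-search core of such a system can be sketched as a linear least-squares inversion repeated over candidate source nodes, keeping the node with the largest variance reduction. The Green's functions below are random placeholders standing in for the fk-computed database, and the origin-time search, filtering, and station weighting of the operational system are omitted; this is a toy illustration, not the RMT implementation.

    import numpy as np

    def grid_search_cmt(data, greens, grid_points):
        """Minimal grid-search moment tensor inversion sketch.
        data        : (n_samples,) stacked observed waveforms
        greens      : dict mapping grid-point index -> (n_samples, 6) matrix of
                      Green's functions for the 6 independent moment tensor elements
        grid_points : list of (lon, lat, depth) trial sources
        Returns the best-fitting grid point, its moment tensor and variance reduction."""
        best = (None, None, -np.inf)
        for i, point in enumerate(grid_points):
            G = greens[i]                                   # precomputed for this node
            m, *_ = np.linalg.lstsq(G, data, rcond=None)    # least-squares moment tensor
            resid = data - G @ m
            vr = 1.0 - np.sum(resid**2) / np.sum(data**2)   # variance reduction
            if vr > best[2]:
                best = (point, m, vr)
        return best

    # Synthetic check with made-up Green's functions (illustration only).
    rng = np.random.default_rng(3)
    grid = [(121.0 + 0.1 * k, 23.5, 10.0 * (k + 1)) for k in range(5)]
    greens = {i: rng.normal(size=(600, 6)) for i in range(len(grid))}
    true_m = np.array([1.0, -0.5, -0.5, 0.3, 0.0, 0.2])
    obs = greens[2] @ true_m + 0.05 * rng.normal(size=600)
    point, m_est, vr = grid_search_cmt(obs, greens, grid)
    print(point, np.round(m_est, 2), round(vr, 3))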

  4. Nitrogen Accumulation and Partitioning in High Arctic Tundra from Extreme Atmospheric N Deposition Events

    NASA Astrophysics Data System (ADS)

    Phoenix, G. K.; Osborn, A.; Blaud, A.; Press, M. C.; Choudhary, S.

    2013-12-01

    Arctic ecosystems are threatened by pollution from extreme atmospheric nitrogen (N) deposition events. These events result from the long-range transport of reactive N from pollution sources at lower latitudes and can deposit up to 80% of the annual N deposition in just a few days. To date, the fate and impacts of these extreme pollutant events have remained unknown. Using a field simulation study, we undertook the first assessment of the fate of acutely deposited N on arctic tundra. Extreme N deposition events were simulated on field plots at Ny-Ålesund, Svalbard (79°N) at rates of 0, 0.04, 0.4 and 1.2 g N m-2 yr-1 applied as NH4NO3 solution over 4 days, with 15N tracers used in the second year to quantify the fate of the deposited N in the plant, soil, microbial and leachate pools. Separate applications of 15NO3- and 15NH4+ were also made to determine the importance of N form in the fate of N. Recovery of the 15N tracer at the end of the first growing season approached 100% of the 15N applied irrespective of treatment level, demonstrating the considerable capacity of High Arctic tundra to capture pollutant N from extreme deposition events. Most incorporation of the 15N was found in bryophytes, followed by the dominant vascular plant (Salix polaris) and the microbial biomass of the soil organic layer. Total recovery remained high in the second growing season (average of 90%), indicating highly conservative N retention. Between the two N forms, recovery of 15NO3- and 15NH4+ was equal in the non-vascular plants, whereas in the vascular plants (particularly Salix polaris) recovery of 15NO3- was four times higher than of 15NH4+. Overall, these findings show that High Arctic tundra has considerable capacity to capture and retain the pollutant N deposited in acute extreme deposition events. Given that they can represent much of the annual N deposition, extreme deposition events may be more important than increased chronic N deposition as a pollution source. Furthermore, current extreme N deposition events, and the predicted future increase in such events, may represent an important source of eutrophication to 'pristine' arctic tundra.

  5. Single-trial event-related potential extraction through one-unit ICA-with-reference

    NASA Astrophysics Data System (ADS)

    Lih Lee, Wee; Tan, Tele; Falkmer, Torbjörn; Leung, Yee Hong

    2016-12-01

    Objective. In recent years, ICA has been one of the more popular methods for extracting event-related potential (ERP) at the single-trial level. It is a blind source separation technique that allows the extraction of an ERP without making strong assumptions on the temporal and spatial characteristics of an ERP. However, the problem with traditional ICA is that the extraction is not direct and is time-consuming due to the need for source selection processing. In this paper, the application of a one-unit ICA-with-Reference (ICA-R), a constrained ICA method, is proposed. Approach. In cases where the time-region of the desired ERP is known a priori, this time information is utilized to generate a reference signal, which is then used for guiding the one-unit ICA-R to extract the source signal of the desired ERP directly. Main results. Our results showed that, as compared to traditional ICA, ICA-R is a more effective method for analysing ERP because it avoids manual source selection and it requires less computation, thus resulting in faster ERP extraction. Significance. In addition to that, since the method is automated, it reduces the risks of any subjective bias in the ERP analysis. It is also a potential tool for extracting the ERP in online applications.

  6. Single-trial event-related potential extraction through one-unit ICA-with-reference.

    PubMed

    Lee, Wee Lih; Tan, Tele; Falkmer, Torbjörn; Leung, Yee Hong

    2016-12-01

    In recent years, ICA has been one of the more popular methods for extracting event-related potential (ERP) at the single-trial level. It is a blind source separation technique that allows the extraction of an ERP without making strong assumptions on the temporal and spatial characteristics of an ERP. However, the problem with traditional ICA is that the extraction is not direct and is time-consuming due to the need for source selection processing. In this paper, the application of a one-unit ICA-with-Reference (ICA-R), a constrained ICA method, is proposed. In cases where the time-region of the desired ERP is known a priori, this time information is utilized to generate a reference signal, which is then used for guiding the one-unit ICA-R to extract the source signal of the desired ERP directly. Our results showed that, as compared to traditional ICA, ICA-R is a more effective method for analysing ERP because it avoids manual source selection and it requires less computation, thus resulting in faster ERP extraction. In addition to that, since the method is automated, it reduces the risks of any subjective bias in the ERP analysis. It is also a potential tool for extracting the ERP in online applications.

  7. Validation of ground-motion simulations for historical events using SDoF systems

    USGS Publications Warehouse

    Galasso, C.; Zareian, F.; Iervolino, I.; Graves, R.W.

    2012-01-01

    The study presented in this paper is among the first in a series of studies toward the engineering validation of the hybrid broadband ground‐motion simulation methodology by Graves and Pitarka (2010). This paper provides a statistical comparison between seismic demands of single degree of freedom (SDoF) systems subjected to past events using simulations and actual recordings. A number of SDoF systems are selected considering the following: (1) 16 oscillation periods between 0.1 and 6 s; (2) elastic case and four nonlinearity levels, from mildly inelastic to severely inelastic systems; and (3) two hysteretic behaviors, in particular, nondegrading–nonevolutionary and degrading–evolutionary. Demand spectra are derived in terms of peak and cyclic response, as well as their statistics for four historical earthquakes: 1979 Mw 6.5 Imperial Valley, 1989 Mw 6.8 Loma Prieta, 1992 Mw 7.2 Landers, and 1994 Mw 6.7 Northridge.
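
    The elastic part of such a comparison reduces to integrating a damped SDoF oscillator over each ground-motion record and recording the peak response, for example with the Newmark average-acceleration scheme. The sketch below uses a synthetic white-noise record purely for illustration; the degrading, inelastic systems in the study require a hysteretic force model that is not shown here.

    import numpy as np

    def sdof_peak_response(accel, dt, period, damping=0.05):
        """Peak displacement of a linear-elastic SDoF oscillator subjected to a
        ground acceleration history, via the Newmark average-acceleration method."""
        omega = 2.0 * np.pi / period
        m, c, k = 1.0, 2.0 * damping * omega, omega**2
        beta, gamma = 0.25, 0.5
        u = v = 0.0
        a = -accel[0]                       # from m*a + c*v + k*u = -m*ag with u = v = 0
        keff = k + gamma / (beta * dt) * c + m / (beta * dt**2)
        umax = 0.0
        for ag in accel[1:]:
            p = (-m * ag
                 + m * (u / (beta * dt**2) + v / (beta * dt) + a * (0.5 / beta - 1.0))
                 + c * (gamma * u / (beta * dt) + v * (gamma / beta - 1.0)
                        + a * dt * (gamma / (2.0 * beta) - 1.0)))
            u_new = p / keff
            v_new = (gamma / (beta * dt) * (u_new - u)
                     + v * (1.0 - gamma / beta)
                     + a * dt * (1.0 - gamma / (2.0 * beta)))
            a_new = ((u_new - u) / (beta * dt**2) - v / (beta * dt)
                     - a * (0.5 / beta - 1.0))
            u, v, a = u_new, v_new, a_new
            umax = max(umax, abs(u))
        return umax

    # Example: 1 s oscillator under a synthetic 20 s record sampled at 100 Hz.
    rng = np.random.default_rng(0)
    ag = 0.5 * rng.normal(size=2000)        # m/s^2, illustrative only
    print("peak displacement [m]:", round(sdof_peak_response(ag, 0.01, 1.0), 4))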

  8. Physical Processes and Applications of the Monte Carlo Radiative Energy Deposition (MRED) Code

    NASA Astrophysics Data System (ADS)

    Reed, Robert A.; Weller, Robert A.; Mendenhall, Marcus H.; Fleetwood, Daniel M.; Warren, Kevin M.; Sierawski, Brian D.; King, Michael P.; Schrimpf, Ronald D.; Auden, Elizabeth C.

    2015-08-01

    MRED is a Python-language scriptable computer application that simulates radiation transport. It is the computational engine for the on-line tool CRÈME-MC. MRED is based on c++ code from Geant4 with additional Fortran components to simulate electron transport and nuclear reactions with high precision. We provide a detailed description of the structure of MRED and the implementation of the simulation of physical processes used to simulate radiation effects in electronic devices and circuits. Extensive discussion and references are provided that illustrate the validation of models used to implement specific simulations of relevant physical processes. Several applications of MRED are summarized that demonstrate its ability to predict and describe basic physical phenomena associated with irradiation of electronic circuits and devices. These include effects from single particle radiation (including both direct ionization and indirect ionization effects), dose enhancement effects, and displacement damage effects. MRED simulations have also helped to identify new single event upset mechanisms not previously observed by experiment, but since confirmed, including upsets due to muons and energetic electrons.

  9. Simulation of a Single-Element Lean-Direct Injection Combustor Using a Polyhedral Mesh Derived from Hanging-Node Elements

    NASA Technical Reports Server (NTRS)

    Wey, Thomas; Liu, Nan-Suey

    2013-01-01

    This paper summarizes the procedures of generating a polyhedral mesh derived from hanging-node elements as well as presents sample results from its application to the numerical solution of a single element lean direct injection (LDI) combustor using an open-source version of the National Combustion Code (NCC).

  10. Comparison of data transformation procedures to enhance topographical accuracy in time-series analysis of the human EEG.

    PubMed

    Hauk, O; Keil, A; Elbert, T; Müller, M M

    2002-01-30

    We describe a methodology to apply current source density (CSD) and minimum norm (MN) estimation as pre-processing tools for time-series analysis of single trial EEG data. The performance of these methods is compared for the case of wavelet time-frequency analysis of simulated gamma-band activity. A reasonable comparison of CSD and MN on the single trial level requires regularization such that the corresponding transformed data sets have similar signal-to-noise ratios (SNRs). For region-of-interest approaches, it should be possible to optimize the SNR for single estimates rather than for the whole distributed solution. An effective implementation of the MN method is described. Simulated data sets were created by modulating the strengths of a radial and a tangential test dipole with wavelets in the frequency range of the gamma band, superimposed with simulated spatially uncorrelated noise. The MN and CSD transformed data sets as well as the average reference (AR) representation were subjected to wavelet frequency-domain analysis, and power spectra were mapped for relevant frequency bands. For both CSD and MN, the influence of noise can be sufficiently suppressed by regularization to yield meaningful information, but only MN represents both radial and tangential dipole sources appropriately as single peaks. Therefore, when relating wavelet power spectrum topographies to their neuronal generators, MN should be preferred.
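
    The wavelet time-frequency step applied to each (CSD-, MN-, or AR-transformed) single trial can be sketched as convolution with complex Morlet wavelets followed by squaring. The trial below is a synthetic 40 Hz burst in noise rather than transformed EEG, and the wavelet width of seven cycles is an assumed, typical choice.

    import numpy as np

    def morlet_power(signal, fs, freqs, n_cycles=7.0):
        """Time-frequency power of a single trial via convolution with complex
        Morlet wavelets (one row of the output per frequency)."""
        power = np.empty((freqs.size, signal.size))
        for i, f in enumerate(freqs):
            sigma_t = n_cycles / (2.0 * np.pi * f)
            t = np.arange(-3.5 * sigma_t, 3.5 * sigma_t, 1.0 / fs)
            wavelet = np.exp(2j * np.pi * f * t) * np.exp(-t**2 / (2.0 * sigma_t**2))
            wavelet /= np.sqrt(np.sum(np.abs(wavelet) ** 2))       # unit energy
            analytic = np.convolve(signal, wavelet, mode="same")
            power[i] = np.abs(analytic) ** 2
        return power

    # Example: 40 Hz gamma burst in noise, 500 Hz sampling (illustrative).
    rng = np.random.default_rng(2)
    fs = 500.0
    t = np.arange(0, 2.0, 1.0 / fs)
    burst = 2.0 * np.sin(2 * np.pi * 40 * t) * np.exp(-((t - 1.0) / 0.1) ** 2)
    trial = rng.normal(size=t.size) + burst
    p = morlet_power(trial, fs, np.arange(30.0, 51.0, 2.0))
    print(p.shape)   # (number of frequencies, number of time points)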

  11. Multiple Sensing Application on Wireless Sensor Network Simulation using NS3

    NASA Astrophysics Data System (ADS)

    Kurniawan, I. F.; Bisma, R.

    2018-01-01

    Hardware advances make it possible to install several sensor devices on a single monitoring node, enabling users to acquire multiple types of data simultaneously. Constructing a multiple-sensing application in NS3 is a challenging task, since a number of aspects such as wireless communication, packet transmission patterns, and the energy model must be taken into account. Although numerous types of monitoring data are available, this study considers only two: periodic and event-based data. Periodic sensing generates monitoring data at a configured interval, while event-based sensing transmits data when a predetermined condition is met. This study attempts to cover these aspects in NS3. Several simulations are performed with different numbers of nodes and an arbitrary communication scheme.
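
    The two traffic types compared in the study can be illustrated with a plain-Python toy: a periodic sensor reports on a fixed interval, while an event-based sensor reports only when a threshold condition is met. This is not an ns-3 script; ns-3 additionally models the wireless channel, packet formats, and the energy model mentioned above, and the interval, threshold, and reading distribution used here are arbitrary.

    import heapq
    import random

    def simulate(duration=60.0, period=5.0, threshold=30.0, seed=1):
        """Toy comparison of periodic versus event-based reporting from one node."""
        rng = random.Random(seed)
        events, t = [], 0.0
        # Periodic sensor: one report every `period` seconds.
        while t < duration:
            heapq.heappush(events, (t, "periodic", None))
            t += period
        # Event-based sensor: sample once per second, transmit only above threshold.
        for second in range(int(duration)):
            reading = rng.gauss(25.0, 5.0)
            if reading > threshold:
                heapq.heappush(events, (float(second), "event-based", round(reading, 1)))
        sent = [heapq.heappop(events) for _ in range(len(events))]
        for when, kind, value in sent:
            print(f"t={when:5.1f}s  {kind:11s}  payload={value}")
        print("packets sent:", len(sent))

    simulate()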

  12. Adapting the serial Alpgen parton-interaction generator to simulate LHC collisions on millions of parallel threads

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Childers, J. T.; Uram, T. D.; LeCompte, T. J.

    As the LHC moves to higher energies and luminosity, the demand for computing resources increases accordingly and will soon outpace the growth of the Worldwide LHC Computing Grid. To meet this greater demand, event generation Monte Carlo was targeted for adaptation to run on Mira, the supercomputer at the Argonne Leadership Computing Facility. Alpgen is a Monte Carlo event generation application that is used by LHC experiments in the simulation of collisions that take place in the Large Hadron Collider. This paper details the process by which Alpgen was adapted from a single-processor serial application to a large-scale parallel application and the performance that was achieved.

  13. Adapting the serial Alpgen parton-interaction generator to simulate LHC collisions on millions of parallel threads

    DOE PAGES

    Childers, J. T.; Uram, T. D.; LeCompte, T. J.; ...

    2016-09-29

    As the LHC moves to higher energies and luminosity, the demand for computing resources increases accordingly and will soon outpace the growth of the Worldwide LHC Computing Grid. To meet this greater demand, event generation Monte Carlo was targeted for adaptation to run on Mira, the supercomputer at the Argonne Leadership Computing Facility. Alpgen is a Monte Carlo event generation application that is used by LHC experiments in the simulation of collisions that take place in the Large Hadron Collider. This paper details the process by which Alpgen was adapted from a single-processor serial application to a large-scale parallel application and the performance that was achieved.

  14. Adapting the serial Alpgen parton-interaction generator to simulate LHC collisions on millions of parallel threads

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Childers, J. T.; Uram, T. D.; LeCompte, T. J.

    As the LHC moves to higher energies and luminosity, the demand for computing resources increases accordingly and will soon outpace the growth of the Worldwide LHC Computing Grid. To meet this greater demand, event generation Monte Carlo was targeted for adaptation to run on Mira, the supercomputer at the Argonne Leadership Computing Facility. Alpgen is a Monte Carlo event generation application that is used by LHC experiments in the simulation of collisions that take place in the Large Hadron Collider. This paper details the process by which Alpgen was adapted from a single-processor serial application to a large-scale parallel application and the performance that was achieved.

  15. Testing the anisotropy of the universe using the simulated gravitational wave events from advanced LIGO and Virgo

    NASA Astrophysics Data System (ADS)

    Lin, Hai-Nan; Li, Jin; Li, Xin

    2018-05-01

    The detection of gravitational waves (GWs) provides a powerful tool to constrain the cosmological parameters. In this paper, we investigate the possibility of using GWs as standard sirens in testing the anisotropy of the universe. We consider the GW signals produced by the coalescence of binary black hole systems and simulate hundreds of GW events from the Advanced Laser Interferometer Gravitational-wave Observatory (LIGO) and Virgo. It is found that the anisotropy of the universe can be tightly constrained if the redshift of the GW source is precisely known. The anisotropic amplitude can be constrained with an accuracy comparable to the Union2.1 compilation of type-Ia supernovae if ≳ 400 GW events are observed. As for the preferred direction, ≳ 800 GW events are needed in order to achieve the accuracy of Union2.1. With 800 GW events, the probability of pseudo anisotropic signals with an amplitude comparable to Union2.1 is negligible. These results show that GWs can provide a complementary tool to supernovae in testing the anisotropy of the universe.

  16. Yield and depth Estimation of Selected NTS Nuclear and SPE Chemical Explosions Using Source Equalization by modeling Local and Regional Seismograms (Invited)

    NASA Astrophysics Data System (ADS)

    Saikia, C. K.; Roman-nieves, J. I.; Woods, M. T.

    2013-12-01

    Source parameters of nuclear and chemical explosions are often estimated by matching either the corner frequency and spectral level of a single event or the spectral ratio when spectra from two events are available with known source parameters for one. In this study, we propose an alternative method in which waveforms from two or more events can be simultaneously equalized by setting the differential of the processed seismograms at one station from any two individual events to zero. The method involves convolving the equivalent Mueller-Murphy displacement source time function (MMDSTF) of one event with the seismogram of the second event and vice-versa, and then computing their difference seismogram. MMDSTF is computed at the elastic radius including both near and far-field terms. For this method to yield accurate source parameters, an inherent assumption is that the Green's functions for any paired events from the source to a receiver are the same. In the frequency limit of the seismic data, this is a reasonable assumption and is concluded based on the comparison of Green's functions computed for flat-earth models at various source depths ranging from 100 m to 1 km. Frequency domain analysis of the initial P wave is, however, sensitive to the depth phase interaction, and if tracked meticulously can help estimate the event depth. We applied this method to the local waveforms recorded from the three SPE shots and precisely determined their yields. These high-frequency seismograms exhibit significant lateral path effects in spectrogram analysis and 3D numerical computations, but the source equalization technique is independent of any variation as long as their instrument characteristics are well preserved. We are currently estimating the uncertainty in the derived source parameters assuming the yields of the SPE shots as unknown. We also collected regional waveforms from 95 NTS explosions at regional stations ALQ, ANMO, CMB, COR, JAS, LON, PAS, PFO and RSSD. We are currently employing a station-based analysis using the equalization technique to estimate depth and yields of many relative to those of the announced explosions; and to develop their relationship with the Mw and Mo for the NTS explosions.

  17. Spiral computed tomography phase-space source model in the BEAMnrc/EGSnrc Monte Carlo system: implementation and validation.

    PubMed

    Kim, Sangroh; Yoshizumi, Terry T; Yin, Fang-Fang; Chetty, Indrin J

    2013-04-21

    Currently, the BEAMnrc/EGSnrc Monte Carlo (MC) system does not provide a spiral CT source model for the simulation of spiral CT scanning. We developed and validated a spiral CT phase-space source model in the BEAMnrc/EGSnrc system. The spiral phase-space source model was implemented in the DOSXYZnrc user code of the BEAMnrc/EGSnrc system by analyzing the geometry of spiral CT scan-scan range, initial angle, rotational direction, pitch, slice thickness, etc. Table movement was simulated by changing the coordinates of the isocenter as a function of beam angles. Some parameters such as pitch, slice thickness and translation per rotation were also incorporated into the model to make the new phase-space source model, designed specifically for spiral CT scan simulations. The source model was hard-coded by modifying the 'ISource = 8: Phase-Space Source Incident from Multiple Directions' in the srcxyznrc.mortran and dosxyznrc.mortran files in the DOSXYZnrc user code. In order to verify the implementation, spiral CT scans were simulated in a CT dose index phantom using the validated x-ray tube model of a commercial CT simulator for both the original multi-direction source (ISOURCE = 8) and the new phase-space source model in the DOSXYZnrc system. Then the acquired 2D and 3D dose distributions were analyzed with respect to the input parameters for various pitch values. In addition, surface-dose profiles were also measured for a patient CT scan protocol using radiochromic film and were compared with the MC simulations. The new phase-space source model was found to simulate the spiral CT scanning in a single simulation run accurately. It also produced the equivalent dose distribution of the ISOURCE = 8 model for the same CT scan parameters. The MC-simulated surface profiles were well matched to the film measurement overall within 10%. The new spiral CT phase-space source model was implemented in the BEAMnrc/EGSnrc system. This work will be beneficial in estimating the spiral CT scan dose in the BEAMnrc/EGSnrc system.

  18. Spiral computed tomography phase-space source model in the BEAMnrc/EGSnrc Monte Carlo system: implementation and validation

    NASA Astrophysics Data System (ADS)

    Kim, Sangroh; Yoshizumi, Terry T.; Yin, Fang-Fang; Chetty, Indrin J.

    2013-04-01

    Currently, the BEAMnrc/EGSnrc Monte Carlo (MC) system does not provide a spiral CT source model for the simulation of spiral CT scanning. We developed and validated a spiral CT phase-space source model in the BEAMnrc/EGSnrc system. The spiral phase-space source model was implemented in the DOSXYZnrc user code of the BEAMnrc/EGSnrc system by analyzing the geometry of spiral CT scan—scan range, initial angle, rotational direction, pitch, slice thickness, etc. Table movement was simulated by changing the coordinates of the isocenter as a function of beam angles. Some parameters such as pitch, slice thickness and translation per rotation were also incorporated into the model to make the new phase-space source model, designed specifically for spiral CT scan simulations. The source model was hard-coded by modifying the ‘ISource = 8: Phase-Space Source Incident from Multiple Directions’ in the srcxyznrc.mortran and dosxyznrc.mortran files in the DOSXYZnrc user code. In order to verify the implementation, spiral CT scans were simulated in a CT dose index phantom using the validated x-ray tube model of a commercial CT simulator for both the original multi-direction source (ISOURCE = 8) and the new phase-space source model in the DOSXYZnrc system. Then the acquired 2D and 3D dose distributions were analyzed with respect to the input parameters for various pitch values. In addition, surface-dose profiles were also measured for a patient CT scan protocol using radiochromic film and were compared with the MC simulations. The new phase-space source model was found to simulate the spiral CT scanning in a single simulation run accurately. It also produced the equivalent dose distribution of the ISOURCE = 8 model for the same CT scan parameters. The MC-simulated surface profiles were well matched to the film measurement overall within 10%. The new spiral CT phase-space source model was implemented in the BEAMnrc/EGSnrc system. This work will be beneficial in estimating the spiral CT scan dose in the BEAMnrc/EGSnrc system.

  19. Integrating Low-Cost MEMS Accelerometer Mini-Arrays (MAMA) in Earthquake Early Warning Systems

    NASA Astrophysics Data System (ADS)

    Nof, R. N.; Chung, A. I.; Rademacher, H.; Allen, R. M.

    2016-12-01

    Current operational Earthquake Early Warning Systems (EEWS) acquire data with networks of single seismic stations, and compute source parameters assuming earthquakes to be point sources. For large events, the point-source assumption leads to an underestimation of magnitude, and the use of single stations leads to large uncertainties in the locations of events outside the network. We propose the use of mini-arrays to improve EEWS. Mini-arrays have the potential to: (a) estimate reliable hypocentral locations by beam forming (FK-analysis) techniques; (b) characterize the rupture dimensions and account for finite-source effects, leading to more reliable estimates for large magnitudes. Previously, the high price of multiple seismometers made creating arrays cost-prohibitive. However, we propose setting up mini-arrays of a new seismometer based on a low-cost (<$150), high-performance MEMS accelerometer around conventional seismic stations. The expected benefits of such an approach include decreasing alert-times, improving real-time shaking predictions and mitigating false alarms. We use low-resolution 14-bit Quake Catcher Network (QCN) data collected during the Rapid Aftershock Mobilization Program (RAMP) in Christchurch, NZ following the M7.1 Darfield earthquake in September 2010. As the QCN network was so dense, we were able to use a small sub-array of up to ten sensors spread over a maximum area of 1.7 × 2.2 km² to demonstrate our approach and to solve for the BAZ of two events (Mw4.7 and Mw5.1) with less than ±10° error. We will also present the new 24-bit device details, benchmarks, and real-time measurements.

  20. Characterization of Methane Emission Sources Using Genetic Algorithms and Atmospheric Transport Modeling

    NASA Astrophysics Data System (ADS)

    Cao, Y.; Cervone, G.; Barkley, Z.; Lauvaux, T.; Deng, A.; Miles, N.; Richardson, S.

    2016-12-01

    Fugitive methane emission rates for the Marcellus shale area are estimated using a genetic algorithm that finds optimal weights to minimize the error between simulated and observed concentrations. The overall goal is to understand the relative contribution of methane due to shale gas extraction. Methane sensors were installed on four towers located in northeastern Pennsylvania, which have measured atmospheric concentrations since May 2015. Inverse Lagrangian dispersion model runs are performed from each of these tower locations for each hour of 2015. Simulated methane concentrations at each of the four towers are computed by multiplying the resulting footprints from the atmospheric simulations by thousands of emission sources grouped into 11 classes. The emission sources were identified using GIS techniques, and include conventional and unconventional wells, different types of compressor stations, pipelines, landfills, farming and wetlands. Initial estimates for each source are calculated based on emission factors from the EPA and a few regional studies. A genetic algorithm is then used to identify optimal emission rates for the 11 classes of methane emissions and to explore extreme events and spatial and temporal structures in the emissions associated with natural gas activities.
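
    A stripped-down version of the optimization step might look like the sketch below: a footprint matrix maps the 11 class-level emission rates to concentration enhancements at the towers, and a simple genetic algorithm searches for rates that minimize the mismatch with observations. The footprints and "observations" here are synthetic stand-ins; in the study they come from the inverse Lagrangian dispersion runs and the tower measurements, and the GA settings below are arbitrary.

    import numpy as np

    rng = np.random.default_rng(7)

    # Hypothetical setup: footprints (observations x 11 source classes) times the
    # per-class emission rates give simulated concentration enhancements.
    n_obs, n_classes = 500, 11
    footprints = rng.uniform(0.0, 1.0, size=(n_obs, n_classes))
    true_rates = rng.uniform(0.5, 3.0, size=n_classes)
    observed = footprints @ true_rates + rng.normal(0.0, 0.5, size=n_obs)

    def rmse(rates):
        return np.sqrt(np.mean((footprints @ rates - observed) ** 2))

    def genetic_algorithm(pop_size=60, generations=200, mut_sigma=0.1):
        pop = rng.uniform(0.0, 5.0, size=(pop_size, n_classes))
        for _ in range(generations):
            fitness = np.array([rmse(ind) for ind in pop])
            parents = pop[np.argsort(fitness)[: pop_size // 2]]     # truncation selection
            children = []
            while len(children) < pop_size - parents.shape[0]:
                a, b = parents[rng.integers(parents.shape[0], size=2)]
                cut = rng.integers(1, n_classes)                     # one-point crossover
                child = np.concatenate([a[:cut], b[cut:]])
                child += rng.normal(0.0, mut_sigma, size=n_classes)  # mutation
                children.append(np.clip(child, 0.0, None))           # rates stay >= 0
            pop = np.vstack([parents, children])
        return pop[np.argmin([rmse(ind) for ind in pop])]

    best = genetic_algorithm()
    print("recovered / true rate ratio:", np.round(best / true_rates, 2))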

  1. Investigating middle-atmospheric gravity waves associated with a sprite-producing mesoscale convective event

    NASA Astrophysics Data System (ADS)

    Vollmer, D. R.; McHarg, M. G.; Harley, J.; Haaland, R. K.; Stenbaek-Nielsen, H.

    2016-12-01

    On 23 July 2014, a mesoscale convective event over western Nebraska produced a large number of sprites. One frame per second images obtained from a low-noise Andor Scientific CMOS camera showed regularly-spaced horizontal striations in the airglow both before and during several of the sprite events, suggesting the presence of vertically-propagating gravity waves in the middle atmosphere. Previous work hypothesized that the gravity waves were produced by the thunderstorm itself. We compare our observations with previous work, and present numerical simulations conducted to determine source, structure, and propagation of atmospheric gravity waves.

  2. Optimal Run Strategies in Monte Carlo Iterated Fission Source Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Romano, Paul K.; Lund, Amanda L.; Siegel, Andrew R.

    2017-06-19

    The method of successive generations used in Monte Carlo simulations of nuclear reactor models is known to suffer from intergenerational correlation between the spatial locations of fission sites. One consequence of the spatial correlation is that the convergence rate of the variance of the mean for a tally becomes worse than O(N^-1). In this work, we consider how the true variance can be minimized given a total amount of work available as a function of the number of source particles per generation, the number of active/discarded generations, and the number of independent simulations. We demonstrate through both analysis and simulation that under certain conditions the solution time for highly correlated reactor problems may be significantly reduced either by running an ensemble of multiple independent simulations or simply by increasing the generation size to the extent that it is practical. However, if too many simulations or too large a generation size is used, the large fraction of source particles discarded can result in an increase in variance. We also show that there is a strong incentive to reduce the number of generations discarded through some source convergence acceleration technique. Furthermore, we discuss the efficient execution of large simulations on a parallel computer; we argue that several practical considerations favor using an ensemble of independent simulations over a single simulation with very large generation size.
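
    The trade-off analyzed above can be illustrated with a toy model in which generation-wise tallies follow an AR(1) chain (a crude stand-in for intergenerational source correlation) starting from an unconverged value: splitting a fixed generation budget into more independent runs decorrelates the samples, but each run pays the discarded-generation cost again. The correlation coefficient, budget, and discard length below are arbitrary illustration values, not reactor physics.

    import numpy as np

    rng = np.random.default_rng(11)

    def run(n_gen, n_discard, rho=0.95, start=5.0):
        """One simulation: generation tallies as an AR(1) chain with correlation rho,
        starting from an unconverged source value (true mean is 0)."""
        x, vals = start, []
        for g in range(n_gen):
            x = rho * x + np.sqrt(1.0 - rho**2) * rng.normal()
            if g >= n_discard:
                vals.append(x)
        return np.mean(vals)

    def mse_of_mean(n_runs, n_gen, n_discard, trials=300):
        means = [np.mean([run(n_gen, n_discard) for _ in range(n_runs)])
                 for _ in range(trials)]
        return np.mean(np.square(means))          # true mean is 0

    total_generations, discard = 1200, 50
    for n_runs in (1, 4, 12):
        err = mse_of_mean(n_runs, total_generations // n_runs, discard)
        print(f"{n_runs:2d} run(s) x {total_generations // n_runs} generations: "
              f"MSE of mean = {err:.3f}")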

  3. Modeling atmospheric effects - an assessment of the problems

    Treesearch

    Douglas G. Fox

    1976-01-01

    Our ability to simulate atmospheric processes that affect the life cycle of pollution is reviewed. The transport process is considered on three scales (a) the near-source or single-plume dispersion problem, (b) the multiple-source dispersion problem, and (c) the long-range transport. Modeling the first of these is shown to be well within the capability of generally...

  4. Evaluation of Precipitation Simulated by Seven SCMs against the ARM Observations at the SGP Site

    NASA Technical Reports Server (NTRS)

    Song, Hua; Lin, Wuyin; Lin, Yanluan; Wolf, Audrey B.; Neggers, Roel; Donner, Leo J.; Del Genio, Anthony D.; Liu, Yangang

    2013-01-01

    This study evaluates the performances of seven single-column models (SCMs) by comparing simulated surface precipitation with observations at the Atmospheric Radiation Measurement Program Southern Great Plains (SGP) site from January 1999 to December 2001. Results show that although most SCMs can reproduce the observed precipitation reasonably well, there are significant and interesting differences in their details. In the cold season, the model-observation differences in the frequency and mean intensity of rain events tend to compensate each other for most SCMs. In the warm season, most SCMs produce more rain events in daytime than in nighttime, whereas the observations have more rain events in nighttime. The mean intensities of rain events in these SCMs are much stronger in daytime, but weaker in nighttime, than the observations. The higher frequency of rain events during warm-season daytime in most SCMs is related to the fact that most SCMs produce a spurious precipitation peak around the regime of weak vertical motions but rich in moisture content. The models also show distinct biases between nighttime and daytime in simulating significant rain events. In nighttime, all the SCMs have a lower frequency of moderate-to-strong rain events than the observations for both seasons. In daytime, most SCMs have a higher frequency of moderate-to-strong rain events than the observations, especially in the warm season. Further analysis reveals distinct meteorological backgrounds for large underestimation and overestimation events. The former occur in the strong ascending regimes with negative low-level horizontal heat and moisture advection, whereas the latter occur in the weak or moderate ascending regimes with positive low-level horizontal heat and moisture advection.

  5. Simulation of the fate of Boscalid and its transformation product 4-Chlorobenzoic acid in a vineyard-terraces catchment

    NASA Astrophysics Data System (ADS)

    Vollert, Dieter; Gassmann, Matthias; Olsson, Oliver; Kümmerer, Klaus

    2017-04-01

    In viniculture, fungicides are commonly applied to the foliage of the plant, resulting in high concentrations in runoff water. The fungicide Boscalid occurred frequently and in high concentrations in runoff water in the Loechernbach catchment, a 180 ha vineyard catchment in south-west Germany, during rainfall-runoff events in 2016. The catchment is characterized by a typical terraced structure connected by a dense road network. Wash-off of drift deposits from the roads is expected to be a major pathway for pesticides. The main objective of this study was the provision of a catchment model to simulate the transport and transformation processes of Boscalid. Based on this model, source areas of Boscalid residue pollution and its export pathways will be identified, providing urgently needed information for the development of water pollution control strategies. The distributed, process-based, reactive transport catchment model ZIN-AgriTra was used for the evaluation of the pesticide mobilization and the export processes. The hydrological model was successfully calibrated for a 6-month high-resolution time series of discharge data. Pesticide modelling was calibrated for single rainfall events after Boscalid application. Additionally, the transformation product 4-Chlorobenzoic acid has been simulated using literature substance parameters, in order to gain information about anticipated environmental concentrations. The pathways for the discharge of Boscalid were characterized and the roads were confirmed as the major pathway for pesticide discharge in the catchment. The main Boscalid loss occurred during the first flush after a storm event, with concentrations up to 10 µg/l. The results show that storage on surfaces without sorption contributes significantly to the export of pesticides through the first flush. The mobilization process therefore involves a combination of sorptive (e.g., in the soil) and non-sorptive (e.g., on the road surface) storages. Furthermore, measurements and simulation results show that there are background pesticide concentrations, an order of magnitude lower than the first flush concentration, for the whole simulation period. Additionally, almost half of the applied Boscalid still remains as residue in the soil at the end of the simulated 6-month period, because of slow degradation rates of Boscalid. The transformation product 4-Chlorobenzoic acid was simulated to have concentrations in the range of 0.1 µg/l. The model assumes that subsurface flow is the major loss pathway for this substance. In conclusion, the catchment model introduced here is an applicable tool to simulate the individual processes of the Boscalid fate in the vineyard catchment. It was confirmed that roads receiving pesticide drift are the major loss areas of Boscalid in the Loechernbach catchment.

  6. Direct splash dispersal prevails over indirect and subsequent spread during rains in Colletotrichum gloeosporioides infecting yams.

    PubMed

    Penet, Laurent; Guyader, Sébastien; Pétro, Dalila; Salles, Michèle; Bussière, François

    2014-01-01

    Plant pathogens have evolved many dispersal mechanisms, using biotic or abiotic vectors or a combination of the two. Rain splash dispersal is known from a variety of fungi, and can be an efficient driver of crop epidemics, with infectious strains propagating rapidly among often genetically homogenous neighboring plants. Splashing is nevertheless a local dispersal process and spores taking the droplet ride seldom move farther than a few decimeters. In this study, we assessed rain splash dispersal of conidia of the yam anthracnose agent, Colletotrichum gloeosporioides, in an experimental setting using a rain simulator, with emphasis on the impact of soil contamination (i.e., effect of re-splashing events). Spores dispersed up to 50 cm from yam leaf inoculum sources, though with an exponential decrease with increasing distance. While few spores were dispersed via re-splash from spore-contaminated soil, the proportion deposited via this mechanism increased with increasing distance from the initial source. We found no soil contamination carryover from previous rains, suggesting that contamination via re-splashing from contaminated soils mainly occurred within single rains. We conclude that most dispersal occurs from direct splashing, with a weaker contribution of indirect dispersal via re-splash.

  7. Direct Splash Dispersal Prevails over Indirect and Subsequent Spread during Rains in Colletotrichum gloeosporioides Infecting Yams

    PubMed Central

    Penet, Laurent; Guyader, Sébastien; Pétro, Dalila; Salles, Michèle; Bussière, François

    2014-01-01

    Plant pathogens have evolved many dispersal mechanisms, using biotic or abiotic vectors or a combination of the two. Rain splash dispersal is known from a variety of fungi, and can be an efficient driver of crop epidemics, with infectious strains propagating rapidly among often genetically homogenous neighboring plants. Splashing is nevertheless a local dispersal process and spores taking the droplet ride seldom move farther than a few decimeters. In this study, we assessed rain splash dispersal of conidia of the yam anthracnose agent, Colletotrichum gloeosporioides, in an experimental setting using a rain simulator, with emphasis on the impact of soil contamination (i.e., effect of re-splashing events). Spores dispersed up to 50 cm from yam leaf inoculum sources, though with an exponential decrease with increasing distance. While few spores were dispersed via re-splash from spore-contaminated soil, the proportion deposited via this mechanism increased with increasing distance from the initial source. We found no soil contamination carryover from previous rains, suggesting that contamination via re-splashing from contaminated soils mainly occurred within single rains. We conclude that most dispersal occurs from direct splashing, with a weaker contribution of indirect dispersal via re-splash. PMID:25532124

  8. Decoy-state quantum key distribution with more than three types of photon intensity pulses

    NASA Astrophysics Data System (ADS)

    Chau, H. F.

    2018-04-01

    The decoy-state method closes source security loopholes in quantum key distribution (QKD) using a laser source. In this method, accurate estimates of the detection rates of vacuum and single-photon events plus the error rate of single-photon events are needed to give a good enough lower bound of the secret key rate. Nonetheless, the current estimation method for these detection and error rates, which uses three types of photon intensities, is accurate up to about 1 % relative error. Here I report an experimentally feasible way that greatly improves these estimates and hence increases the one-way key rate of the BB84 QKD protocol with unbiased bases selection by at least 20% on average in realistic settings. The major tricks are the use of more than three types of photon intensities plus the fact that estimating bounds of the above detection and error rates is numerically stable, although these bounds are related to the inversion of a high condition number matrix.

  9. ATLAS EventIndex monitoring system using the Kibana analytics and visualization platform

    NASA Astrophysics Data System (ADS)

    Barberis, D.; Cárdenas Zárate, S. E.; Favareto, A.; Fernandez Casani, A.; Gallas, E. J.; Garcia Montoro, C.; Gonzalez de la Hoz, S.; Hrivnac, J.; Malon, D.; Prokoshin, F.; Salt, J.; Sanchez, J.; Toebbicke, R.; Yuan, R.; ATLAS Collaboration

    2016-10-01

    The ATLAS EventIndex is a data catalogue system that stores event-related metadata for all (real and simulated) ATLAS events, on all processing stages. As it consists of different components that depend on other applications (such as distributed storage, and different sources of information) we need to monitor the conditions of many heterogeneous subsystems, to make sure everything is working correctly. This paper describes how we gather information about the EventIndex components and related subsystems: the Producer-Consumer architecture for data collection, health parameters from the servers that run EventIndex components, EventIndex web interface status, and the Hadoop infrastructure that stores EventIndex data. This information is collected, processed, and then displayed using CERN service monitoring software based on the Kibana analytic and visualization package, provided by CERN IT Department. EventIndex monitoring is used both by the EventIndex team and ATLAS Distributed Computing shifts crew.

  10. Long period seismic source characterization at Popocatépetl volcano, Mexico

    USGS Publications Warehouse

    Arciniega-Ceballos, Alejandra; Dawson, Phillip; Chouet, Bernard A.

    2012-01-01

    The seismicity of Popocatépetl is dominated by long-period and very-long period signals associated with hydrothermal processes and magmatic degassing. We model the source mechanism of repetitive long-period signals in the 0.4–2 s band from a 15-station broadband network by stacking long-period events with similar waveforms to improve the signal-to-noise ratio. The data are well fitted by a point source located within the summit crater ~250 m below the crater floor and ~200 m from the inferred magma conduit. The inferred source includes a volumetric component that can be modeled as resonance of a horizontal steam-filled crack and a vertical single force component. The long-period events are thought to be related to the interaction between the magmatic system and a perched hydrothermal system. Repetitive injection of fluid into the horizontal fracture and subsequent sudden discharge when a critical pressure threshold is met provides a non-destructive source process.

  11. Relative sea-level data from southwest Scotland constrain meltwater-driven sea-level jumps prior to the 8.2 kyr BP event

    NASA Astrophysics Data System (ADS)

    Lawrence, Thomas; Long, Antony J.; Gehrels, W. Roland; Jackson, Luke P.; Smith, David E.

    2016-11-01

    The most significant climate cooling of the Holocene is centred on 8.2 kyr BP (the '8.2 event'). Its cause is widely attributed to an abrupt slowdown of the Atlantic Meridional Overturning Circulation (AMOC) associated with the sudden drainage of Laurentide proglacial Lakes Agassiz and Ojibway, but model simulations have difficulty reproducing the event with a single-pulse scenario of freshwater input. Several lines of evidence point to multiple episodes of freshwater release from the decaying Laurentide Ice Sheet (LIS) between ∼8900 and ∼8200 cal yr BP, yet the precise number, timing and magnitude of these events - critical constraints for AMOC simulations - are far from resolved. Here we present a high-resolution relative sea level (RSL) record for the period 8800 to 7800 cal yr BP developed from estuarine and salt-marsh deposits in SW Scotland. We find that RSL rose abruptly in three steps by 0.35 m, 0.7 m and 0.4 m (mean) at 8760-8640, 8595-8465, 8323-8218 cal yr BP respectively. The timing of these RSL steps correlate closely with short-lived events expressed in North Atlantic proxy climate and oceanographic records, providing evidence of at least three distinct episodes of enhanced meltwater discharge from the decaying LIS prior to the 8.2 event. Our observations can be used to test the fidelity of both climate and ice-sheet models in simulating abrupt change during the early Holocene.

  12. The source mechanisms of low frequency events in volcanoes - a comparison of synthetic and real seismic data on Soufriere Hills Volcano, Montserrat

    NASA Astrophysics Data System (ADS)

    Karl, S.; Neuberg, J. W.

    2012-04-01

    Low frequency seismic signals are one class of volcano-seismic events that have been observed at many volcanoes around the world, and are thought to be associated with resonating fluid-filled conduits or fluid movements. Amongst others, Neuberg et al. (2006) proposed a conceptual model for the trigger of low frequency events at Montserrat involving the brittle failure of magma in the glass transition in response to high shear stresses during the upwards movement of magma in the volcanic edifice. For this study, synthetic seismograms were generated following the proposed concept of Neuberg et al. (2006) by using an extended source modelled as an octagonal arrangement of double couples approximating a circular ringfault. For comparison, synthetic seismograms were generated using single forces only. For both scenarios, synthetic seismograms were generated using a seismic station distribution as encountered on Soufriere Hills Volcano, Montserrat. To gain a better quantitative understanding of the driving forces of low frequency events, inversions for the physical source mechanisms have become increasingly common. Therefore, we perform moment tensor inversions (Dreger, 2003) using the synthetic data as well as a chosen set of seismograms recorded on Soufriere Hills Volcano. The inversions are carried out under the (deliberately incorrect) assumption of an underlying point source rather than an extended source as the trigger mechanism of the low frequency seismic events. We will discuss differences between inversion results, and how to interpret the moment tensor components (double couple, isotropic, or CLVD), which were based on a point source, in terms of an extended source.

  13. Development of total maximum daily loads for bacteria impaired watershed using the comprehensive hydrology and water quality simulation model.

    PubMed

    Kim, Sang M; Brannan, Kevin M; Zeckoski, Rebecca W; Benham, Brian L

    2014-01-01

    The objective of this study was to develop bacteria total maximum daily loads (TMDLs) for the Hardware River watershed in the Commonwealth of Virginia, USA. The TMDL program is an integrated watershed management approach required by the Clean Water Act. The TMDLs were developed to meet Virginia's water quality standard for bacteria at the time, which stated that the calendar-month geometric mean concentration of Escherichia coli should not exceed 126 cfu/100 mL, and that no single sample should exceed a concentration of 235 cfu/100 mL. The bacteria impairment TMDLs were developed using the Hydrological Simulation Program-FORTRAN (HSPF). The hydrology and water quality components of HSPF were calibrated and validated using data from the Hardware River watershed to ensure that the model adequately simulated runoff and bacteria concentrations. The calibrated and validated HSPF model was used to estimate the contributions from the various bacteria sources in the Hardware River watershed to the in-stream concentration. Bacteria loads were estimated through an extensive source characterization process. Simulation results for existing conditions indicated that the majority of the bacteria came from livestock and wildlife direct deposits and pervious lands. Different source reduction scenarios were evaluated to identify scenarios that meet both the geometric mean and single sample maximum E. coli criteria with zero violations. The resulting scenarios required extreme and impractical reductions from livestock and wildlife sources. Results from studies similar to this across Virginia partially contributed to a reconsideration of the standard's applicability to TMDL development.
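
    The two water-quality criteria that the reduction scenarios had to satisfy are simple to state in code; a minimal sketch of the per-month compliance check (with hypothetical sample values) might be:

    import numpy as np

    GEOMEAN_LIMIT = 126.0        # cfu/100 mL, calendar-month geometric mean criterion
    SINGLE_SAMPLE_LIMIT = 235.0  # cfu/100 mL, single sample maximum criterion

    def check_month(ecoli_samples):
        """Check one calendar month of simulated E. coli concentrations against the
        (then-applicable) Virginia water quality criteria used in the TMDL."""
        samples = np.asarray(ecoli_samples, dtype=float)
        geomean = np.exp(np.mean(np.log(samples)))
        violations = {
            "geometric_mean_exceeded": bool(geomean > GEOMEAN_LIMIT),
            "single_sample_exceedances": int(np.sum(samples > SINGLE_SAMPLE_LIMIT)),
        }
        return geomean, violations

    gm, v = check_month([80, 150, 40, 300, 95])   # hypothetical daily values
    print(f"geometric mean = {gm:.0f} cfu/100 mL, violations = {v}")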

  14. Imaging an Event Horizon: Mitigation of Source Variability of Sagittarius A*

    NASA Astrophysics Data System (ADS)

    Lu, Ru-Sen; Roelofs, Freek; Fish, Vincent L.; Shiokawa, Hotaka; Doeleman, Sheperd S.; Gammie, Charles F.; Falcke, Heino; Krichbaum, Thomas P.; Zensus, J. Anton

    2016-02-01

    The black hole in the center of the Galaxy, associated with the compact source Sagittarius A* (Sgr A*), is predicted to cast a shadow upon the emission of the surrounding plasma flow, which encodes the influence of general relativity (GR) in the strong-field regime. The Event Horizon Telescope (EHT) is a Very Long Baseline Interferometry (VLBI) network with a goal of imaging nearby supermassive black holes (in particular Sgr A* and M87) with angular resolution sufficient to observe strong gravity effects near the event horizon. General relativistic magnetohydrodynamic (GRMHD) simulations show that radio emission from Sgr A* exhibits variability on timescales of minutes, much shorter than the duration of a typical VLBI imaging experiment, which usually takes several hours. A changing source structure during the observations, however, violates one of the basic assumptions needed for aperture synthesis in radio interferometry imaging to work. By simulating realistic EHT observations of a model movie of Sgr A*, we demonstrate that an image of the average quiescent emission, featuring the characteristic black hole shadow and photon ring predicted by GR, can nonetheless be obtained by observing over multiple days and subsequent processing of the visibilities (scaling, averaging, and smoothing) before imaging. Moreover, it is shown that this procedure can be combined with an existing method to mitigate the effects of interstellar scattering. Taken together, these techniques allow the black hole shadow in the Galactic center to be recovered on the reconstructed image.
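
    The visibility processing mentioned above (scaling and averaging over multiple days) can be illustrated with a toy sketch; the synthetic visibilities, the crude flux normalisation, and the omission of the smoothing step are assumptions for illustration, not the actual EHT pipeline.

      import numpy as np

      # Toy visibilities: n_days observations of the same n_base (u,v) points,
      # each day differing because of rapid source variability (hypothetical data).
      rng = np.random.default_rng(0)
      n_days, n_base = 5, 200
      quiescent = np.exp(1j * rng.uniform(0, 2 * np.pi, n_base))     # "true" average structure
      vis = quiescent + 0.3 * (rng.standard_normal((n_days, n_base))
                               + 1j * rng.standard_normal((n_days, n_base)))

      # (1) scale each day to a common total flux, (2) average over days.
      flux = np.abs(vis).mean(axis=1, keepdims=True)    # crude per-day flux proxy
      vis_scaled = vis / flux * flux.mean()
      vis_avg = vis_scaled.mean(axis=0)                 # multi-day average visibility

      print(np.abs(vis_avg - quiescent).mean())         # closer to the quiescent structure than any single day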

  15. The Perfect Storm of Information: Combining Traditional and Non-Traditional Data Sources for Public Health Situational Awareness During Hurricane Response

    PubMed Central

    Bennett, Kelly J.; Olsen, Jennifer M.; Harris, Sara; Mekaru, Sumiko; Livinski, Alicia A.; Brownstein, John S.

    2013-01-01

    Background: Hurricane Isaac made landfall in southeastern Louisiana in late August 2012, resulting in extensive storm surge and inland flooding. As the lead federal agency responsible for medical and public health response and recovery coordination, the Department of Health and Human Services (HHS) must have situational awareness to prepare for and address state and local requests for assistance following hurricanes. Both traditional and non-traditional data have been used to improve situational awareness in fields like disease surveillance and seismology. This study investigated whether non-traditional data (i.e., tweets and news reports) fill a void in traditional data reporting during hurricane response, as well as whether non-traditional data improve the timeliness for reporting identified HHS Essential Elements of Information (EEI). Methods: HHS EEIs provided the information collection guidance, and when the information indicated there was a potential public health threat, an event was identified and categorized within the larger scope of overall Hurricane Isaac situational awareness. Tweets, news reports, press releases, and federal situation reports during Hurricane Isaac response were analyzed for information about EEIs. Data that pertained to the same EEI were linked together and given a unique event identification number to enable more detailed analysis of source content. Reports of sixteen unique events were examined for types of data sources reporting on the event and timeliness of the reports. Results: Of these sixteen unique events identified, six were reported by only a single data source, four were reported by two data sources, four were reported by three data sources, and two were reported by four or more data sources. For five of the events where news tweets were one of multiple sources of information about an event, the tweet occurred prior to the news report, press release, local government/emergency management tweet, and federal situation report. In all circumstances where citizens were reporting along with other sources, the citizen tweet was the earliest notification of the event. Conclusion: Critical information is being shared by citizens, news organizations, and local government representatives. To have situational awareness for providing timely, life-saving public health and medical response following a hurricane, this study shows that non-traditional data sources should augment traditional data sources and can fill some of the gaps in traditional reporting. During a hurricane response where early event detection can save lives and reduce morbidity, tweets can provide a source of information for early warning. In times of limited budgets, investing technical and personnel resources to efficiently and effectively gather, curate, and analyze non-traditional data for improved situational awareness can yield a high return on investment. PMID:24459610

  16. The perfect storm of information: combining traditional and non-traditional data sources for public health situational awareness during hurricane response.

    PubMed

    Bennett, Kelly J; Olsen, Jennifer M; Harris, Sara; Mekaru, Sumiko; Livinski, Alicia A; Brownstein, John S

    2013-12-16

    Hurricane Isaac made landfall in southeastern Louisiana in late August 2012, resulting in extensive storm surge and inland flooding. As the lead federal agency responsible for medical and public health response and recovery coordination, the Department of Health and Human Services (HHS) must have situational awareness to prepare for and address state and local requests for assistance following hurricanes. Both traditional and non-traditional data have been used to improve situational awareness in fields like disease surveillance and seismology. This study investigated whether non-traditional data (i.e., tweets and news reports) fill a void in traditional data reporting during hurricane response, as well as whether non-traditional data improve the timeliness for reporting identified HHS Essential Elements of Information (EEI). HHS EEIs provided the information collection guidance, and when the information indicated there was a potential public health threat, an event was identified and categorized within the larger scope of overall Hurricane Isaac situational awareness. Tweets, news reports, press releases, and federal situation reports during Hurricane Isaac response were analyzed for information about EEIs. Data that pertained to the same EEI were linked together and given a unique event identification number to enable more detailed analysis of source content. Reports of sixteen unique events were examined for types of data sources reporting on the event and timeliness of the reports. Of these sixteen unique events identified, six were reported by only a single data source, four were reported by two data sources, four were reported by three data sources, and two were reported by four or more data sources. For five of the events where news tweets were one of multiple sources of information about an event, the tweet occurred prior to the news report, press release, local government/emergency management tweet, and federal situation report. In all circumstances where citizens were reporting along with other sources, the citizen tweet was the earliest notification of the event. Critical information is being shared by citizens, news organizations, and local government representatives. To have situational awareness for providing timely, life-saving public health and medical response following a hurricane, this study shows that non-traditional data sources should augment traditional data sources and can fill some of the gaps in traditional reporting. During a hurricane response where early event detection can save lives and reduce morbidity, tweets can provide a source of information for early warning. In times of limited budgets, investing technical and personnel resources to efficiently and effectively gather, curate, and analyze non-traditional data for improved situational awareness can yield a high return on investment.

  17. Study of atmospheric dynamics and pollution in the coastal area of English Channel using clustering technique

    NASA Astrophysics Data System (ADS)

    Sokolov, Anton; Dmitriev, Egor; Delbarre, Hervé; Augustin, Patrick; Gengembre, Cyril; Fourmenten, Marc

    2016-04-01

    The problem of atmospheric contamination by principal air pollutants was considered in the industrialized coastal region of the English Channel around Dunkirk, which is influenced by northern European metropolitan areas. MESO-NH nested models were used for the simulation of the local atmospheric dynamics and the online calculation of Lagrangian backward trajectories with 15-minute temporal resolution and a horizontal resolution down to 500 m. The one-month mesoscale numerical simulation was coupled with local pollution measurements of volatile organic compounds, particulate matter, ozone, sulphur dioxide and nitrogen oxides. Principal atmospheric pathways were determined by a clustering technique applied to the simulated backward trajectories. Six clusters were obtained that describe the local atmospheric dynamics: four correspond to winds blowing through the English Channel, one to flow coming from the south, and the largest cluster to low wind speeds. This last cluster includes mostly sea breeze events. The analysis of meteorological data and pollution measurements allows the principal atmospheric pathways to be related to local air contamination events. It was shown that contamination events are mostly connected with a channelling of pollution from local sources and low-turbulence states of the local atmosphere.
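
    To illustrate the clustering step, the sketch below groups synthetic back-trajectories with k-means after flattening each trajectory's hourly positions into a feature vector; the synthetic trajectories, the six-cluster choice, and the scikit-learn call are assumptions, not the authors' exact implementation.

      import numpy as np
      from sklearn.cluster import KMeans

      rng = np.random.default_rng(1)

      # Synthetic 48 h back-trajectories: 500 trajectories x 48 hourly (x, y) points.
      n_traj, n_steps = 500, 48
      headings = rng.uniform(0, 2 * np.pi, n_traj)                 # hypothetical mean transport direction
      steps = np.stack([np.cos(headings), np.sin(headings)], axis=1)
      traj = np.cumsum(np.repeat(steps[:, None, :], n_steps, axis=1)
                       + 0.3 * rng.standard_normal((n_traj, n_steps, 2)), axis=1)

      X = traj.reshape(n_traj, -1)                                 # flatten to feature vectors
      labels = KMeans(n_clusters=6, n_init=10, random_state=0).fit_predict(X)
      print(np.bincount(labels))                                   # trajectories per atmospheric pathway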

  18. Experimental evidence for a new single-event upset (SEU) mode in a CMOS SRAM obtained from model verification

    NASA Technical Reports Server (NTRS)

    Zoutendyk, J. A.; Smith, L. S.; Soli, G. A.; Lo, R. Y.

    1987-01-01

    Modeling of SEU has been done in a CMOS static RAM containing 1-micron-channel-length transistors fabricated from a p-well epilayer process using both circuit-simulation and numerical-simulation techniques. The modeling results have been experimentally verified with the aid of heavy-ion beams obtained from a three-stage tandem van de Graaff accelerator. Experimental evidence for a novel SEU mode in an ON n-channel device is presented.

  19. Association between heavy precipitation events and waterborne outbreaks in four Nordic countries, 1992-2012.

    PubMed

    Guzman Herrador, Bernardo; de Blasio, Birgitte Freiesleben; Carlander, Anneli; Ethelberg, Steen; Hygen, Hans Olav; Kuusi, Markku; Lund, Vidar; Löfdahl, Margareta; MacDonald, Emily; Martinez-Urtaza, Jaime; Nichols, Gordon; Schönning, Caroline; Sudre, Bertrand; Trönnberg, Linda; Vold, Line; Semenza, Jan C; Nygård, Karin

    2016-12-01

    We conducted a matched case-control study to examine the association between heavy precipitation events and waterborne outbreaks (WBOs) by linking epidemiological registries and meteorological data between 1992 and 2012 in four Nordic countries. Heavy precipitation events were defined by above average (exceedance) daily rainfall during the preceding weeks using local references. We performed conditional logistic regression using the four previous years as the controls. Among WBOs with known onset date (n = 89), exceedance rainfall on two or more days was associated with occurrence of outbreak, OR = 3.06 (95% CI 1.38-6.78), compared to zero exceedance days. Stratified analyses revealed a significant association with single household water supplies, ground water as source and for outbreaks occurring during spring and summer. These findings were reproduced in analyses including all WBOs with known outbreak month (n = 186). The vulnerability of single households to WBOs associated with heavy precipitation events should be communicated to homeowners and implemented into future policy planning to reduce the risk of waterborne illness.

  20. Monte Carlo simulation of the resolution volume for the SEQUOIA spectrometer

    NASA Astrophysics Data System (ADS)

    Granroth, G. E.; Hahn, S. E.

    2015-01-01

    Monte Carlo ray tracing simulations of direct geometry spectrometers have been particularly useful in instrument design and characterization. However, these tools can also be useful for experiment planning and analysis. To this end, the McStas Monte Carlo ray tracing model of SEQUOIA, the fine resolution Fermi chopper spectrometer at the Spallation Neutron Source (SNS) of Oak Ridge National Laboratory (ORNL), has been modified to include the time of flight resolution sample and detector components. With these components, the resolution ellipsoid can be calculated for any detector pixel and energy bin of the instrument. The simulation is split into two pieces. First, the incident beamline up to the sample is simulated for 1 × 10^11 neutron packets (4 days on 30 cores). This provides a virtual source for the backend that includes the resolution sample and monitor components. Next, a series of detector and energy pixels are computed in parallel. It takes on the order of 30 s to calculate a single resolution ellipsoid on a single core. Python scripts have been written to transform the ellipsoid into the space of an oriented single crystal, and to characterize the ellipsoid in various ways. Though this tool is under development as a planning tool, we have successfully used it to provide the resolution function for convolution with theoretical models. Specifically, theoretical calculations of the spin waves in YFeO3 were compared to measurements taken on SEQUOIA. Though the overall features of the spectra can be explained while neglecting resolution effects, the variation in intensity of the modes is well described once the resolution is included. As this was a single sharp mode, the simulated half intensity value of the resolution ellipsoid was used to provide the resolution width. A description of the simulation, its use, and paths forward for this technique will be discussed.
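
    The final convolution step described above can be sketched as a one-dimensional Gaussian broadening in energy of a single sharp mode; the energy grid, mode position, and resolution width below are placeholder values rather than SEQUOIA output.

      import numpy as np

      # Energy grid (meV) and a sharp theoretical mode at E0 (hypothetical spin-wave energy).
      E = np.linspace(0.0, 80.0, 801)
      E0, intensity = 40.0, 1.0
      model = np.zeros_like(E)
      model[np.argmin(np.abs(E - E0))] = intensity          # delta-like single sharp mode

      # Gaussian resolution width taken, as described above, from the half-intensity
      # width of the simulated ellipsoid (placeholder value).
      fwhm = 2.5                                            # meV
      sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
      kernel = np.exp(-0.5 * ((E - E.mean()) / sigma) ** 2)
      kernel /= kernel.sum()

      convolved = np.convolve(model, kernel, mode="same")   # resolution-broadened spectrum
      print(E[np.argmax(convolved)], convolved.max())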

  1. Optical 3-Way Handshake (O3WHS) Protocol Simulation in OMNeT++

    DTIC Science & Technology

    2017-06-01

    ...popular program called OMNeT++ for that purpose. It is an open-source discrete event simulator tool written in C++ language. It has been chiefly...

  2. Quantum Logic with Cavity Photons From Single Atoms.

    PubMed

    Holleczek, Annemarie; Barter, Oliver; Rubenok, Allison; Dilley, Jerome; Nisbet-Jones, Peter B R; Langfahl-Klabes, Gunnar; Marshall, Graham D; Sparrow, Chris; O'Brien, Jeremy L; Poulios, Konstantinos; Kuhn, Axel; Matthews, Jonathan C F

    2016-07-08

    We demonstrate quantum logic using narrow linewidth photons that are produced with an a priori nonprobabilistic scheme from a single 87Rb atom strongly coupled to a high-finesse cavity. We use a controlled-NOT gate integrated into a photonic chip to entangle these photons, and we observe nonclassical correlations between photon detection events separated by periods exceeding the travel time across the chip by 3 orders of magnitude. This enables quantum technology that will use the properties of both narrow-band single photon sources and integrated quantum photonics.

  3. Thermodynamic properties of a high pressure subcritical UF6/He gas volume (irradiated by an external source)

    NASA Technical Reports Server (NTRS)

    Sterritt, D. E.; Lalos, G. T.; Schneider, R. T.

    1976-01-01

    A computer simulation study concerning a compressed fissioning UF6 gas is presented. The compression is to be achieved by a ballistic piston compressor. Data on UF6 obtained with this compressor were incorporated in the simulation study. As a neutron source to create the fission events in the compressed gas, a fast burst reactor was considered. The conclusion is that it takes a neutron flux in excess of 10^15 n/(cm^2·s) to produce measurable increases in pressure and temperature, while a flux in excess of 10^19 n/(cm^2·s) would probably damage the compressor.

  4. Thermodynamic properties of a high pressure subcritical UF6/He gas volume (irradiated by an external source)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sterritt, D.E.; Lalos, G.T.; Schneider, R.T.

    1976-12-01

    A computer simulation study concerning a compressed fissioning UF6 gas is presented. The compression is to be achieved by a ballistic piston compressor. Data on UF6 obtained with this compressor were incorporated in the simulation study. As a neutron source to create the fission events in the compressed gas, a fast burst reactor was considered. The conclusion is that it takes a neutron flux in excess of 10^15 n/(cm^2·s) to produce measurable increases in pressure and temperature, while a flux in excess of 10^19 n/(cm^2·s) would probably damage the compressor.

  5. The Shergottite Age Paradox and the Relative Probabilities of Ejecting Martian Meteorites of Differing Ages

    NASA Technical Reports Server (NTRS)

    Borg, L. E.; Shih, C.-Y.; Nyquist, L. E.

    1998-01-01

    The apparent paradox that the majority of impacts yielding Martian meteorites appear to have taken place on only a few percent of the Martian surface can be resolved if all the shergottites were ejected in a single event rather than in multiple events as expected from variations in their cosmic ray exposure and crystallization ages. If the shergottite-ejection event is assigned to one of three craters in the vicinity of Olympus Mons that were previously identified as candidate source craters for the SNC (Shergottites, Nakhlites, Chassigny) meteorites, and the nakhlite event to another candidate crater in the vicinity of Ceraunius Tholus, the implied ages of the surrounding terranes agree well with crater density ages. Even for high cratering rates (minimum ages), the likely origin of the shergottites is in the Tharsis region, and the paradox of too many meteorites from too little terrane remains for multiple shergottite-ejection events. However, for high cratering rates it is possible to consider sources for the nakhlites which are away from the Tharsis region. The meteorite-yielding impacts may have been widely dispersed with sources of the young SNC meteorites in the northern plains, and the source of the ancient orthopyroxenite, ALH84001, in the ancient southern uplands. Oblique-impact craters can be identified with the sources of the nakhlites and the orthopyroxenite, respectively, in the nominal cratering rate model, and with the shergottites and orthopyroxenite, respectively, in the high cratering rate model. Thus, oblique impacts deserve renewed attention as an ejection mechanism for Martian meteorites.

  6. Source mechanisms of a collapsing solution mine cavity

    NASA Astrophysics Data System (ADS)

    Lennart Kinscher, Jannes; Cesca, Simone; Bernard, Pascal; Contrucci, Isabelle; Mangeney, Anne; Piguet, Jack Pierre; Bigarre, Pascal

    2016-04-01

    The development and collapse of a ~200 m wide salt solution mining cavity was seismically monitored in the Lorraine basin in northeastern France. Seismic monitoring and other geophysical in situ measurements were part of a large multi-parameter research project funded by the research "group for the impact and safety of underground works" (GISOS), whose database is being integrated into the EPOS platform (European Plate Observing System). The recorded microseismic events (~ 50,000 in total) show a swarm-like behaviour, with clustering sequences lasting from seconds to days, and distinct spatiotemporal migration. The majority of swarming signals are likely related to detachment and block breakage processes occurring at the cavity roof. Body wave amplitude patterns indicate the presence of relatively stable source mechanisms, either associated with dip-slip and/or tensile faulting. However, short inter-event times, the high frequency geophone recordings, and the limited network station coverage often limit the application of classical source analysis techniques. In order to deal with these shortcomings, we examined the source mechanisms through different procedures, including modelling of observed and synthetic waveforms and amplitude spectra of some well located events, as well as modelling of peak-to-peak amplitude ratios for most of the detected events. The latter approach was used to infer the average source mechanism of many swarming events at once by using a single three component station. To our knowledge this approach is applied here for the first time and represents a useful tool for source studies of seismic swarms and seismicity clusters. The results of the different methods are consistent and show that at least 50 % of the microseismic events have remarkably stable source mechanisms, associated with similarly oriented thrust faults, striking NW-SE and dipping around 35-55°. Consistent source mechanisms are probably related to the presence of a preferential direction of pre-existing fault structures. As an interesting by-product, we demonstrate, for the first time directly on seismic data, that the source radiation pattern significantly controls the detection capability of a seismic station and network.

  7. Performance Evaluation of 18F Radioluminescence Microscopy Using Computational Simulation

    PubMed Central

    Wang, Qian; Sengupta, Debanti; Kim, Tae Jin; Pratx, Guillem

    2017-01-01

    Purpose: Radioluminescence microscopy can visualize the distribution of beta-emitting radiotracers in live single cells with high resolution. Here, we perform a computational simulation of 18F positron imaging using this modality to better understand how radioluminescence signals are formed and to assist in optimizing the experimental setup and image processing. Methods: First, the transport of charged particles through the cell and scintillator and the resulting scintillation is modeled using the GEANT4 Monte-Carlo simulation. Then, the propagation of the scintillation light through the microscope is modeled by a convolution with a depth-dependent point-spread function, which models the microscope response. Finally, the physical measurement of the scintillation light using an electron-multiplying charge-coupled device (EMCCD) camera is modeled using a stochastic numerical photosensor model, which accounts for various sources of noise. The simulated output of the EMCCD camera is further processed using our ORBIT image reconstruction methodology to evaluate the endpoint images. Results: The EMCCD camera model was validated against experimentally acquired images and the simulated noise, as measured by the standard deviation of a blank image, was found to be accurate within 2% of the actual detection. Furthermore, point-source simulations found that a reconstructed spatial resolution of 18.5 μm can be achieved near the scintillator. As the source is moved away from the scintillator, spatial resolution degrades at a rate of 3.5 μm per μm distance. These results agree well with the experimentally measured spatial resolution of 30–40 μm (live cells). The simulation also shows that the system sensitivity is 26.5%, which is also consistent with our previous experiments. Finally, an image of a simulated sparse set of single cells is visually similar to the measured cell image. Conclusions: Our simulation methodology agrees with experimental measurements taken with radioluminescence microscopy. This in silico approach can be used to guide further instrumentation developments and to provide a framework for improving image reconstruction. PMID:28273348
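
    A minimal sketch of a stochastic EMCCD photosensor model of the kind described above is shown below (Poisson shot noise, gamma-distributed electron-multiplication gain, Gaussian read noise); the quantum efficiency, gain, read-noise, and bias values are generic placeholders, not the parameters used in the paper.

      import numpy as np

      def emccd_measure(photon_image, qe=0.9, em_gain=300.0, read_noise=10.0, bias=100.0,
                        rng=np.random.default_rng(2)):
          """Toy EMCCD model: shot noise -> EM register gain -> read noise (placeholder values)."""
          electrons = rng.poisson(qe * photon_image)                    # photoelectron shot noise
          amplified = rng.gamma(shape=np.maximum(electrons, 1e-12),
                                scale=em_gain)                          # stochastic EM gain
          amplified[electrons == 0] = 0.0                               # no input, no output
          counts = amplified + rng.normal(bias, read_noise, photon_image.shape)
          return counts

      frame = emccd_measure(np.full((64, 64), 5.0))                     # flat 5-photon/pixel scene
      print(frame.mean(), frame.std())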

  8. Modeling Airport Ground Operations using Discrete Event Simulation (DES) and X3D Visualization

    DTIC Science & Technology

    2008-03-01

    scenes. It is written in open-source Java and XML using the Netbeans platform, which gives it the features of being suitable as a standalone application...and as a plug-in module for the Netbeans integrated development environment (IDE). X3D Graphics is the tool used for the elaboration and creation of...process is shown in Figure 2. To create a new event graph in Viskit, the Viskit tool must first be launched via Netbeans or from the executable
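
    As a generic illustration of the discrete event simulation style used by tools such as Viskit, the sketch below runs a minimal event list with Python's heapq; the airport-flavoured event names and times are invented for the example and do not come from the report.

      import heapq

      def run_des(initial_events, handlers, t_end):
          """Minimal discrete event simulation loop: pop the earliest event, handle it,
          and schedule any follow-on events. Event names/times here are illustrative only."""
          future = list(initial_events)                 # (time, event_name) tuples
          heapq.heapify(future)
          while future:
              t, name = heapq.heappop(future)
              if t > t_end:
                  break
              for new_t, new_name in handlers.get(name, lambda t: [])(t):
                  heapq.heappush(future, (new_t, new_name))
              print(f"t={t:6.1f}  {name}")

      handlers = {
          "aircraft_arrival": lambda t: [(t + 5.0, "begin_taxi")],
          "begin_taxi":       lambda t: [(t + 12.0, "reach_gate")],
          "reach_gate":       lambda t: [],
      }
      run_des([(0.0, "aircraft_arrival"), (7.0, "aircraft_arrival")], handlers, t_end=60.0)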

  9. D-DSC: Decoding Delay-based Distributed Source Coding for Internet of Sensing Things

    PubMed Central

    Akan, Ozgur B.

    2018-01-01

    Spatial correlation between densely deployed sensor nodes in a wireless sensor network (WSN) can be exploited to reduce the power consumption through a proper source coding mechanism such as distributed source coding (DSC). In this paper, we propose the Decoding Delay-based Distributed Source Coding (D-DSC) to improve the energy efficiency of the classical DSC by employing the decoding delay concept, which enables the use of the maximum correlated portion of sensor samples during the event estimation. In D-DSC, the network is partitioned into clusters, where the clusterheads communicate their uncompressed samples carrying the side information, and the cluster members send their compressed samples. The sink performs joint decoding of the compressed and uncompressed samples and then reconstructs the event signal using the decoded sensor readings. Based on the observed degree of the correlation among sensor samples, the sink dynamically updates and broadcasts the varying compression rates back to the sensor nodes. Simulation results for the performance evaluation reveal that D-DSC can achieve reliable and energy-efficient event communication and estimation for practical signal detection/estimation applications having a massive number of sensors towards the realization of the Internet of Sensing Things (IoST). PMID:29538405

  10. D-DSC: Decoding Delay-based Distributed Source Coding for Internet of Sensing Things.

    PubMed

    Aktas, Metin; Kuscu, Murat; Dinc, Ergin; Akan, Ozgur B

    2018-01-01

    Spatial correlation between densely deployed sensor nodes in a wireless sensor network (WSN) can be exploited to reduce the power consumption through a proper source coding mechanism such as distributed source coding (DSC). In this paper, we propose the Decoding Delay-based Distributed Source Coding (D-DSC) to improve the energy efficiency of the classical DSC by employing the decoding delay concept, which enables the use of the maximum correlated portion of sensor samples during the event estimation. In D-DSC, the network is partitioned into clusters, where the clusterheads communicate their uncompressed samples carrying the side information, and the cluster members send their compressed samples. The sink performs joint decoding of the compressed and uncompressed samples and then reconstructs the event signal using the decoded sensor readings. Based on the observed degree of the correlation among sensor samples, the sink dynamically updates and broadcasts the varying compression rates back to the sensor nodes. Simulation results for the performance evaluation reveal that D-DSC can achieve reliable and energy-efficient event communication and estimation for practical signal detection/estimation applications having a massive number of sensors towards the realization of the Internet of Sensing Things (IoST).

  11. Impurity mixing and radiation asymmetry in massive gas injection simulations of DIII-D

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Izzo, V. A.

    Simulations of neon massive gas injection into DIII-D are performed with the 3D MHD code NIMROD. The poloidal and toroidal distribution of the impurity source is varied. This report will focus on the effects of the source variation on impurity mixing and radiated power asymmetry. Even toroidally symmetric impurity injection is found to produce asymmetric radiated power due to asymmetric convective heat flux produced by the 1/1 mode. When the gas source is toroidally localized, the phase relationship between the mode and the source location is important, affecting both radiation peaking and impurity mixing. Under certain circumstances, a single, localized gas jet could produce better radiation symmetry during the disruption thermal quench than evenly distributed impurities.

  12. A Numerical Study of the Effect of Periodic Nutrient Supply on Pathways of Carbon in a Coastal Upwelling Regime

    NASA Technical Reports Server (NTRS)

    Carr, Mary-Elena

    1998-01-01

    A size-based ecosystem model was modified to include periodic upwelling events and used to evaluate the effect of episodic nutrient supply on the standing stock, carbon uptake, and carbon flow into mesozooplankton grazing and sinking flux in a coastal upwelling regime. Two ecosystem configurations were compared: a single food chain made up of net phytoplankton and mesozooplankton (one autotroph and one heterotroph, A1H1), and three interconnected food chains plus bacteria (three autotrophs and four heterotrophs, A3H4). The carbon pathways in the A1H1 simulations were under stronger physical control than those of the A3H4 runs, where the small size classes are not affected by frequent upwelling events. In the more complex food web simulations, the microbial pathway determines the total carbon uptake and grazing rates, and regenerated nitrogen accounts for more than half of the total primary production for periods of 20 days or longer between events. By contrast, new production, export of carbon through sinking and mesozooplankton grazing are more important in the A1H1 simulations. In the A3H4 simulations, the turnover time scale of the autotroph biomass increases as the period between upwelling events increases, because of the larger contribution of slow-growing net phytoplankton. The upwelling period was characterized for three upwelling sites from the alongshore wind speed measured by the NASA Scatterometer (NSCAT) and the corresponding model output compared with literature data. This validation exercise for three upwelling sites and a downstream embayment suggests that standing stock, carbon uptake and size fractionation were best supported by the A3H4 simulations, while the simulated sinking fluxes are not distinguishable in the two configurations.

  13. Data quality of seismic records from the Tohoku, Japan earthquake as recorded across the Albuquerque Seismological Laboratory networks

    USGS Publications Warehouse

    Ringler, A.T.; Gee, L.S.; Marshall, B.; Hutt, C.R.; Storm, T.

    2012-01-01

    Great earthquakes recorded across modern digital seismographic networks, such as the recent Tohoku, Japan, earthquake on 11 March 2011 (Mw = 9.0), provide unique datasets that ultimately lead to a better understanding of the Earth's structure (e.g., Pesicek et al. 2008) and earthquake sources (e.g., Ammon et al. 2011). For network operators, such events provide the opportunity to look at the performance across their entire network using a single event, as the ground motion records from the event will be well above every station's noise floor.

  14. Performance of the SWEEP model affected by estimates of threshold friction velocity

    USDA-ARS?s Scientific Manuscript database

    The Wind Erosion Prediction System (WEPS) is a process-based model and needs to be verified under a broad range of climatic, soil, and management conditions. Occasional failure of the WEPS erosion submodel (Single-event Wind Erosion Evaluation Program or SWEEP) to simulate erosion in the Columbia Pl...

  15. Radiation-Induced Transient Effects in Near Infrared Focal Plane Arrays

    NASA Technical Reports Server (NTRS)

    Reed, Robert A.; Pickel, J.; Marshall, P.; Waczynski, A.; McMurray, R.; Gee, G.; Polidan, E.; Johnson, S.; McKeivey, M.; Ennico, K.

    2004-01-01

    This viewgraph presentation describes a test to simulate the transient effects of cosmic ray impacts on near infrared focal plane arrays. The objectives of the test are to: 1) Characterize proton single events as a function of energy and angle of incidence; 2) Measure charge spread (crosstalk) to adjacent pixels; 3) Assess transient recovery time.

  16. ADP/ATP mitochondrial carrier MD simulations to shed light on the structural-dynamical events that, after an additional mutation, restore the function in a pathological single mutant.

    PubMed

    Di Marino, Daniele; Oteri, Francesco; Morozzo Della Rocca, Blasco; Chillemi, Giovanni; Falconi, Mattia

    2010-12-01

    Molecular dynamics simulations of the wild type bovine ADP/ATP mitochondrial carrier, and of the single Ala113Pro and double Ala113Pro/Val180Met mutants, embedded in a lipid bilayer, have been carried out for 30 ns to shed light on the structural-dynamical changes induced by the Val180Met mutation restoring the carrier function in the Ala113Pro pathologic mutant. Principal component analysis indicates that, for the three systems, the protein dynamics is mainly characterized by the motion of the matrix loops and of the odd-numbered helices having a conserved proline in their central region. Analysis of the motions shows a different behaviour of the single pathological mutant with respect to the other two systems. The single mutation induces a regularization and rigidity of the H3 helix, lost upon the introduction of the second mutation. This is directly correlated to the salt bridge distribution involving residues Arg79, Asp134 and Arg234, hypothesized to interact with the substrate. In fact, in the wild type simulation two stable inter-helix salt bridges, crucial for substrate binding, are present over almost all the simulation time. In line with the impaired ADP transport, one salt interaction is lost in the single mutant trajectory but reappears in the double mutant simulation, where a salt bridge network matching the wild type is restored. Other important structural-dynamical properties, such as the mobility of the trans-membrane helices, analyzed via the principal component analysis, are similar for the wild type and double mutant but differ for the single mutant, providing a mechanistic explanation for their different functional properties. Copyright © 2010 Elsevier Inc. All rights reserved.
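
    A bare-bones version of the principal component analysis used in such studies can be written with numpy alone, operating on a (frames x 3N) coordinate matrix; the random trajectory below merely stands in for aligned carrier coordinates and is purely illustrative.

      import numpy as np

      rng = np.random.default_rng(3)

      # Hypothetical aligned trajectory: 3000 frames, 300 atoms, (x, y, z) each.
      n_frames, n_atoms = 3000, 300
      coords = rng.standard_normal((n_frames, n_atoms * 3)).cumsum(axis=0) * 0.01

      X = coords - coords.mean(axis=0)                     # remove the average structure
      cov = (X.T @ X) / (n_frames - 1)                     # covariance of atomic fluctuations
      evals, evecs = np.linalg.eigh(cov)                   # eigen-decomposition (ascending)
      evals, evecs = evals[::-1], evecs[:, ::-1]           # sort descending

      explained = evals[:5] / evals.sum()                  # variance captured by the first 5 PCs
      projection = X @ evecs[:, :2]                        # motion along the two largest PCs
      print(explained, projection.shape)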

  17. Long-timescale motions in glycerol-monopalmitate lipid bilayers investigated using molecular dynamics simulation.

    PubMed

    Laner, Monika; Horta, Bruno A C; Hünenberger, Philippe H

    2015-02-01

    The occurrence of long-timescale motions in glycerol-1-monopalmitate (GMP) lipid bilayers is investigated based on previously reported 600 ns molecular dynamics simulations of a 2×8×8 GMP bilayer patch in the temperature range 302-338 K, performed at three different hydration levels, or in the presence of the cosolutes methanol or trehalose at three different concentrations. The types of long-timescale motions considered are: (i) the possible phase transitions; (ii) the precession of the relative collective tilt-angle of the two leaflets in the gel phase; (iii) the trans-gauche isomerization of the dihedral angles within the lipid aliphatic tails; and (iv) the flipping of single lipids across the two leaflets. The results provide a picture of GMP bilayers involving a rich spectrum of events occurring on a wide range of timescales, from the 100-ps range isomerization of single dihedral angles, via the 100-ns range of tilt precession motions, to the multi-μs range of phase transitions and lipid-flipping events. Copyright © 2014 Elsevier Inc. All rights reserved.
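
    For the trans-gauche isomerization analysis mentioned in item (iii), a minimal dihedral-angle calculation and classification is sketched below; the four atom positions are hypothetical and the 120° trans/gauche cutoff is one common convention, not necessarily the one used in the paper.

      import numpy as np

      def dihedral_deg(p0, p1, p2, p3):
          """Dihedral angle (degrees) defined by four points, in (-180, 180]."""
          b0, b1, b2 = p1 - p0, p2 - p1, p3 - p2
          n1, n2 = np.cross(b0, b1), np.cross(b1, b2)
          m1 = np.cross(n1, b1 / np.linalg.norm(b1))
          x, y = np.dot(n1, n2), np.dot(m1, n2)
          return np.degrees(np.arctan2(y, x))

      def is_trans(angle_deg, cutoff=120.0):
          """Classify as trans if |angle| exceeds the cutoff, otherwise gauche (one convention)."""
          return abs(angle_deg) > cutoff

      # Hypothetical aliphatic-tail atom positions (nm):
      p = [np.array(v) for v in ([0.0, 0.0, 0.0], [0.15, 0.0, 0.0],
                                 [0.20, 0.14, 0.0], [0.35, 0.14, 0.05])]
      phi = dihedral_deg(*p)
      print(phi, "trans" if is_trans(phi) else "gauche")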

  18. Electron-induced single event upsets in 28 nm and 45 nm bulk SRAMs

    DOE PAGES

    Trippe, J. M.; Reed, R. A.; Austin, R. A.; ...

    2015-12-01

    In this study, we present experimental evidence of single electron-induced upsets in commercial 28 nm and 45 nm CMOS SRAMs from a monoenergetic electron beam. Upsets were observed in both technology nodes when the SRAM was operated in a low power state. The experimental cross section depends strongly on both bias and technology node feature size, consistent with previous work in which SRAMs were irradiated with low energy muons and protons. Accompanying simulations demonstrate that δ-rays produced by the primary electrons are responsible for the observed upsets. Additional simulations predict the on-orbit event rates for various Earth and Jovian environments for a set of sensitive volumes representative of current technology nodes. The electron contribution to the total upset rate for Earth environments is significant for critical charges as high as 0.2 fC. This value is comparable to that of sub-22 nm bulk SRAMs. Similarly, for the Jovian environment, the electron-induced upset rate is larger than the proton-induced upset rate for critical charges as high as 0.3 fC.

  19. Simulations indicate that scores of lionfish (Pterois volitans) colonized the Atlantic Ocean.

    PubMed

    Selwyn, Jason D; Johnson, John E; Downey-Wall, Alan M; Bynum, Adam M; Hamner, Rebecca M; Hogan, J Derek; Bird, Christopher E

    2017-01-01

    The invasion of the western Atlantic Ocean by the Indo-Pacific red lionfish ( Pterois volitans ) has had devastating consequences for marine ecosystems. Estimating the number of colonizing lionfish can be useful in identifying the introduction pathway and can inform policy decisions aimed at preventing similar invasions. It is well-established that at least ten lionfish were initially introduced. However, that estimate has not faced probabilistic scrutiny and is based solely on the number of haplotypes in the maternally-inherited mitochondrial control region. To rigorously estimate the number of lionfish that were introduced, we used a forward-time, Wright-Fisher, population genetic model in concert with a demographic, life-history model to simulate the invasion across a range of source population sizes and colonizing population fecundities. Assuming a balanced sex ratio and no Allee effects, the simulations indicate that the Atlantic population was founded by 118 (54-514, 95% HPD) lionfish from the Indo-Pacific, the Caribbean by 84 (22-328, 95% HPD) lionfish from the Atlantic, and the Gulf of Mexico by at least 114 (no upper bound on 95% HPD) lionfish from the Caribbean. Increasing the size, and therefore diversity, of the Indo-Pacific source population and fecundity of the founding population caused the number of colonists to decrease, but with rapidly diminishing returns. When the simulation was parameterized to minimize the number of colonists (high θ and relative fecundity), 96 (48-216, 95% HPD) colonists were most likely. In a more realistic scenario with Allee effects (e.g., 50% reduction in fecundity) plaguing the colonists, the most likely number of lionfish increased to 272 (106-950, 95% HPD). These results, in combination with other published data, support the hypothesis that lionfish were introduced to the Atlantic via the aquarium trade, rather than shipping. When building the model employed here, we made assumptions that minimize the number of colonists, such as the lionfish being introduced in a single event. While we conservatively modelled the introduction pathway as a single release of lionfish in one location, it is more likely that a combination of smaller and larger releases from a variety of aquarium trade stakeholders occurred near Miami, Florida, which could have led to even larger numbers of colonists than simulated here. Efforts to prevent future invasions via the aquarium trade should focus on the education of stakeholders and the prohibition of release, with adequate rewards for compliance and penalties for violations.
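
    To illustrate the forward-time Wright-Fisher component of such an approach, the sketch below drifts mitochondrial haplotype counts from a founding group through repeated multinomial sampling; the founder number, generations, and population sizes are placeholders, and the sketch omits the coupled demographic, life-history model described above.

      import numpy as np

      rng = np.random.default_rng(4)

      def wright_fisher_haplotypes(founder_haps, n_generations, pop_sizes):
          """Forward-time Wright-Fisher drift of haplotype counts (neutral, no mutation).
          founder_haps: haplotype label per founder; pop_sizes: population size per generation.
          Illustrative only -- not the coupled demographic/genetic model of the paper."""
          labels, counts = np.unique(founder_haps, return_counts=True)
          freqs = counts / counts.sum()
          for n in pop_sizes[:n_generations]:
              counts = rng.multinomial(n, freqs)         # random survival/reproduction
              freqs = counts / counts.sum()
          return dict(zip(labels, counts))

      # Hypothetical founding group of 120 lionfish carrying up to 9 control-region haplotypes:
      founders = rng.integers(0, 9, size=120)
      print(wright_fisher_haplotypes(founders, n_generations=30, pop_sizes=[10_000] * 30))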

  20. Simulations indicate that scores of lionfish (Pterois volitans) colonized the Atlantic Ocean

    PubMed Central

    Selwyn, Jason D.; Johnson, John E.; Downey-Wall, Alan M.; Bynum, Adam M.; Hamner, Rebecca M.; Hogan, J. Derek

    2017-01-01

    The invasion of the western Atlantic Ocean by the Indo-Pacific red lionfish (Pterois volitans) has had devastating consequences for marine ecosystems. Estimating the number of colonizing lionfish can be useful in identifying the introduction pathway and can inform policy decisions aimed at preventing similar invasions. It is well-established that at least ten lionfish were initially introduced. However, that estimate has not faced probabilistic scrutiny and is based solely on the number of haplotypes in the maternally-inherited mitochondrial control region. To rigorously estimate the number of lionfish that were introduced, we used a forward-time, Wright-Fisher, population genetic model in concert with a demographic, life-history model to simulate the invasion across a range of source population sizes and colonizing population fecundities. Assuming a balanced sex ratio and no Allee effects, the simulations indicate that the Atlantic population was founded by 118 (54–514, 95% HPD) lionfish from the Indo-Pacific, the Caribbean by 84 (22–328, 95% HPD) lionfish from the Atlantic, and the Gulf of Mexico by at least 114 (no upper bound on 95% HPD) lionfish from the Caribbean. Increasing the size, and therefore diversity, of the Indo-Pacific source population and fecundity of the founding population caused the number of colonists to decrease, but with rapidly diminishing returns. When the simulation was parameterized to minimize the number of colonists (high θ and relative fecundity), 96 (48–216, 95% HPD) colonists were most likely. In a more realistic scenario with Allee effects (e.g., 50% reduction in fecundity) plaguing the colonists, the most likely number of lionfish increased to 272 (106–950, 95% HPD). These results, in combination with other published data, support the hypothesis that lionfish were introduced to the Atlantic via the aquarium trade, rather than shipping. When building the model employed here, we made assumptions that minimize the number of colonists, such as the lionfish being introduced in a single event. While we conservatively modelled the introduction pathway as a single release of lionfish in one location, it is more likely that a combination of smaller and larger releases from a variety of aquarium trade stakeholders occurred near Miami, Florida, which could have led to even larger numbers of colonists than simulated here. Efforts to prevent future invasions via the aquarium trade should focus on the education of stakeholders and the prohibition of release, with adequate rewards for compliance and penalties for violations. PMID:29302383

  1. Testing the seismology-based landquake monitoring system

    NASA Astrophysics Data System (ADS)

    Chao, Wei-An

    2016-04-01

    I have developed a real-time landquake monitoring (RLM) system, which monitors large-scale landquake activity in Taiwan using the real-time seismic network of the Broadband Array in Taiwan for Seismology (BATS). The RLM system applies a grid-based general source inversion (GSI) technique to obtain the preliminary source location and force mechanism. A 2-D virtual source-grid on the Taiwan Island is created with an interval of 0.2° in both latitude and longitude. The depth of each grid point is fixed on the free surface topography. A database of synthetics is stored on the hard disk; the synthetics are obtained using Green's functions computed by the propagator matrix approach for a 1-D average velocity model, at all stations from each virtual source-grid point, for nine elementary source components: six elementary moment tensors and three orthogonal (north, east and vertical) single forces. Offline tests of the RLM system were carried out for events detected in previous studies. An important aspect of the RLM system is the implementation of the GSI approach for different source types (e.g., full moment tensor, double couple faulting, and explosion source) by grid search through the 2-D virtual source-grid, to automatically identify landquake events based on the improvement in waveform fitness and to evaluate the best-fit solution in the monitoring area. With this approach, not only the force mechanisms but also the event occurrence time and location can be obtained simultaneously, about 6-8 min after the occurrence of an event. To improve the insufficient accuracy of the GSI-determined location, I further apply a landquake epicenter determination (LED) method that maximizes the coherency of the high-frequency (1-3 Hz) horizontal envelope functions to determine the final source location. With good knowledge of the source location, I perform landquake force history (LFH) inversion to investigate the source dynamics (e.g., trajectory) for relatively large landquake events. By providing the aforementioned source information in real time, the government and emergency response agencies have sufficient reaction time for rapid assessment and response to landquake hazards. Since 2016, the RLM system has operated online.

  2. Estimation and Correction of bias of long-term simulated climate data from Global Circulation Models (GCMs)

    NASA Astrophysics Data System (ADS)

    Mehan, S.; Gitau, M. W.

    2017-12-01

    Global circulation models (GCMs) are often used to simulate long-term climate data for use in hydrologic studies. However, some bias (difference between simulated values and observed data) has been observed, especially when simulating precipitation events. The bias is especially evident with respect to simulating dry and wet days. This is because GCMs tend to underestimate large precipitation events, with the associated precipitation amounts being distributed over some dry days, thus leading to a larger number of wet days, each with some amount of rainfall. The accuracy of precipitation simulations impacts the accuracy of other simulated components such as flow and water quality. It is, thus, very important to correct the bias associated with precipitation before it is used for any modeling applications. This study aims to correct the bias specifically associated with precipitation events, with a focus on the Western Lake Erie Basin (WLEB). Analytical, statistical, and extreme event analyses for three different stations (Adrian, MI; Norwalk, OH; and Fort Wayne, IN) in the WLEB were carried out to quantify the bias. Findings indicated that GCMs overestimated the wet sequences and underestimated dry day probabilities. The numbers of wet sequences simulated by nine GCMs, each from two different open sources, were 310-678 (Fort Wayne, IN); 318-600 (Adrian, MI); and 346-638 (Norwalk, OH), compared with 166, 150, and 180 observed, respectively. Predicted conditional probabilities of a dry day followed by a wet day (P(D|W)) ranged between 0.16-0.42 (Fort Wayne, IN); 0.29-0.41 (Adrian, MI); and 0.13-0.40 (Norwalk, OH) for the different GCMs, compared to 0.52 (Fort Wayne, IN and Norwalk, OH) and 0.54 (Adrian, MI) from the observed climate data. There was a difference of 0-8.5% between the distribution of simulated climate values and observed climate data for precipitation and temperature for all three stations (Cohen's d effect size < 0.2). Further work involves the use of stochastic weather generators to correct the conditional probabilities and better capture the dry and wet events for use in hydrologic and water resources modeling.
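
    The wet/dry diagnostics quoted above can be computed from a daily precipitation series in a few lines; the 0.1 mm wet-day threshold and the synthetic series below are assumptions for illustration, and the study's exact definitions may differ.

      import numpy as np

      def wet_dry_stats(precip_mm, wet_threshold=0.1):
          """Number of wet sequences and P(D|W): probability that a wet day follows a dry day.
          (Threshold and naming are illustrative; the study's exact definitions may differ.)"""
          wet = np.asarray(precip_mm) >= wet_threshold
          starts = wet[1:] & ~wet[:-1]                       # dry-to-wet transitions
          n_wet_sequences = int(starts.sum()) + int(wet[0])  # count runs of consecutive wet days
          n_dry = int((~wet[:-1]).sum())
          p_d_w = starts.sum() / n_dry if n_dry else np.nan  # P(wet tomorrow | dry today)
          return n_wet_sequences, p_d_w

      rng = np.random.default_rng(5)
      daily = np.where(rng.random(365) < 0.3, rng.gamma(2.0, 5.0, 365), 0.0)  # synthetic year
      print(wet_dry_stats(daily))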

  3. Fast radio burst event rate counts - I. Interpreting the observations

    NASA Astrophysics Data System (ADS)

    Macquart, J.-P.; Ekers, R. D.

    2018-02-01

    The fluence distribution of the fast radio burst (FRB) population (the 'source count' distribution, N(>F) ∝ F^α) is a crucial diagnostic of its distance distribution, and hence the progenitor evolutionary history. We critically reanalyse current estimates of the FRB source count distribution. We demonstrate that the Lorimer burst (FRB 010724) is subject to discovery bias, and should be excluded from all statistical studies of the population. We re-examine the evidence for flat, α > -1, source count estimates based on the ratio of single-beam to multiple-beam detections with the Parkes multibeam receiver, and show that current data imply only a very weak constraint of α ≲ -1.3. A maximum-likelihood analysis applied to the portion of the Parkes FRB population detected above the observational completeness fluence of 2 Jy ms yields α = -2.6 (+0.7, -1.3). Uncertainties in the location of each FRB within the Parkes beam render estimates of the Parkes event rate uncertain in both normalizing survey area and the estimated post-beam-corrected completeness fluence; this uncertainty needs to be accounted for when comparing the event rate against event rates measured at other telescopes.
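
    The maximum-likelihood slope estimate above a completeness fluence can be written compactly with the standard power-law estimator; the synthetic fluences below and the uncertainty approximation are a sketch, not the authors' full likelihood analysis.

      import numpy as np

      def source_count_slope_mle(fluences, f_min):
          """MLE of alpha in N(>F) ∝ F**alpha for fluences above completeness f_min
          (standard power-law estimator; a sketch, not the paper's full likelihood)."""
          f = np.asarray(fluences, dtype=float)
          f = f[f >= f_min]
          alpha = -len(f) / np.sum(np.log(f / f_min))
          err = abs(alpha) / np.sqrt(len(f))               # approximate 1-sigma uncertainty
          return alpha, err

      rng = np.random.default_rng(6)
      true_alpha, f_min = -1.5, 2.0                         # Jy ms (cutoff quoted in the abstract)
      sample = f_min * rng.pareto(-true_alpha, 200) + f_min # fluences drawn with N(>F) ∝ F**true_alpha
      print(source_count_slope_mle(sample, f_min))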

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Elliott, Elizabeth J.; Yu, Sungduk; Kooperman, Gabriel J.

    The sensitivities of simulated mesoscale convective systems (MCSs) in the central U.S. to microphysics and grid configuration are evaluated here in a global climate model (GCM) that also permits global-scale feedbacks and variability. Since conventional GCMs do not simulate MCSs, studying their sensitivities in a global framework useful for climate change simulations has not previously been possible. To date, MCS sensitivity experiments have relied on controlled cloud resolving model (CRM) studies with limited domains, which avoid internal variability and neglect feedbacks between local convection and larger-scale dynamics. However, recent work with superparameterized (SP) GCMs has shown that eastward propagating MCS-like events are captured when embedded CRMs replace convective parameterizations. This study uses a SP version of the Community Atmosphere Model version 5 (SP-CAM5) to evaluate MCS sensitivities, applying an objective empirical orthogonal function algorithm to identify MCS-like events, and harmonizing composite storms to account for seasonal and spatial heterogeneity. A five-summer control simulation is used to assess the magnitude of internal and interannual variability relative to 10 sensitivity experiments with varied CRM parameters, including ice fall speed, one-moment and two-moment microphysics, and grid spacing. MCS sensitivities were found to be subtle with respect to internal variability, and indicate that ensembles of over 100 storms may be necessary to detect robust differences in SP-GCMs. Furthermore, these results emphasize that the properties of MCSs can vary widely across individual events, and improving their representation in global simulations with significant internal variability may require comparison to long (multidecadal) time series of observed events rather than single season field campaigns.

  5. Near-Fault Broadband Ground Motion Simulations Using Empirical Green's Functions: Application to the Upper Rhine Graben (France-Germany) Case Study

    NASA Astrophysics Data System (ADS)

    Del Gaudio, Sergio; Hok, Sebastien; Festa, Gaetano; Causse, Mathieu; Lancieri, Maria

    2017-09-01

    Seismic hazard estimation relies classically on data-based ground motion prediction equations (GMPEs) giving the expected motion level as a function of several parameters characterizing the source and the sites of interest. However, records of moderate to large earthquakes at short distances from the faults are still rare. For this reason, it is difficult to obtain a reliable ground motion prediction for such a class of events and distances, where the largest amount of damage is also usually observed. A possible strategy to fill this lack of information is to generate synthetic accelerograms based on an accurate modeling of both the extended fault rupture and the wave propagation process. The development of such modeling strategies is essential for estimating seismic hazard close to faults in moderate seismic activity zones, where data are even scarcer. For that reason, we selected a target site in the Upper Rhine Graben (URG), at the French-German border. URG is a region where faults producing micro-seismic activity are very close to the sites of interest (e.g., critical infrastructures like supply lines, nuclear power plants, etc.), requiring a careful investigation of seismic hazard. In this work, we demonstrate the feasibility of performing near-fault broadband ground motion numerical simulations in a moderate seismic activity region such as the URG and discuss some of the challenges related to such an application. The modeling strategy is to couple the multi-empirical Green's function technique (multi-EGFt) with a k^-2 kinematic source model. One of the advantages of the multi-EGFt is that it does not require a detailed knowledge of the propagation medium, since the records of small events are used as the medium transfer function, provided that records of small earthquakes located on the target fault are available at the target site. The selection of suitable events to be used as multi-EGFs is detailed and discussed for our specific situation, where few events are available. We then show the impact that each source parameter characterizing the k^-2 model has on ground motion amplitude. Finally, we performed ground motion simulations showing results for different probable earthquake scenarios in the URG. The dependence of ground motions and of their variability on rupture velocity, the roughness of the slip distribution (stress drop), and hypocenter location is analyzed at different frequencies. In near-source conditions, ground motion variability is shown to be mostly governed by the uncertainty on source parameters. In our specific configuration (magnitude, distance), the directivity effect is only observed in a limited frequency range. Rather, broadband ground motions are shown to be sensitive to both the average rupture velocity and its possible variability, and to slip roughness. Comparing the simulation results with GMPEs, we conclude that source parameters and their variability should be set up carefully to obtain reliable broadband ground motion estimations. In particular, our study shows that slip roughness should be set consistently with the target stress drop. This entails the need for a better understanding of the physics of the earthquake source and its incorporation in ground motion modeling.

  6. Simulating Sources of Superstorm Plasmas

    NASA Technical Reports Server (NTRS)

    Fok, Mei-Ching

    2008-01-01

    We evaluated the contributions to magnetospheric pressure (ring current) of the solar wind, polar wind, auroral wind, and plasmaspheric wind, with the surprising result that the main phase pressure is dominated by plasmaspheric protons. We used global simulation fields from the LFM single fluid ideal MHD model. We embedded the Comprehensive Ring Current Model within it, driven by the LFM transpolar potential, and supplied with plasmas at its boundary including solar wind protons, polar wind protons, auroral wind O+, and plasmaspheric protons. We included auroral outflows and acceleration driven by the LFM ionospheric boundary condition, including parallel ion acceleration driven by upward currents. Our plasmasphere model runs within the CRCM and is driven by it. Ionospheric sources were treated using our Global Ion Kinetics code based on full equations of motion. This treatment neglects inertial loading and pressure exerted by the ionospheric plasmas, and will be superseded by multifluid simulations that include those effects. However, these simulations provide new insights into the respective roles of ionospheric sources in storm-time magnetospheric dynamics.

  7. Application of Seismic Array Processing to Tsunami Early Warning

    NASA Astrophysics Data System (ADS)

    An, C.; Meng, L.

    2015-12-01

    Tsunami wave predictions of the current tsunami warning systems rely on accurate earthquake source inversions of wave height data. They are of limited effectiveness for near-field areas since the tsunami waves arrive before data are collected. Recent seismic and tsunami disasters have revealed the need for early warning to protect near-source coastal populations. In this work we developed the basis for a tsunami warning system based on rapid earthquake source characterisation through regional seismic array back-projections. We explored rapid earthquake source imaging using onshore dense seismic arrays located at regional distances on the order of 1000 km, which provides faster source images than conventional teleseismic back-projections. We implemented this method in a simulated real-time environment and analysed the 2011 Tohoku earthquake rupture with two clusters of Hi-net stations in Kyushu and Northern Hokkaido, and the 2014 Iquique event with the EarthScope USArray Transportable Array. The results yield reasonable estimates of rupture area, which is approximated by an ellipse and leads to the construction of simple slip models based on empirical scaling of the rupture area, seismic moment and average slip. The slip model is then used as the input of the tsunami simulation package COMCOT to predict the tsunami waves. In the example of the Tohoku event, the earthquake source model can be acquired within 6 minutes from the start of rupture and the simulation of tsunami waves takes less than 2 min, which could facilitate a timely tsunami warning. The predicted arrival time and wave amplitude reasonably fit observations. Based on this method, we propose to develop an automatic warning mechanism that provides rapid near-field warning for areas of high tsunami risk. The initial focus will be Japan, the Pacific Northwest, and Alaska, where dense seismic networks with the capability of real-time data telemetry and open data accessibility, such as the Japanese Hi-net (>800 instruments) and the EarthScope USArray Transportable Array (~400 instruments), are established.

  8. Tsunami simulation using submarine displacement calculated from simulation of ground motion due to seismic source model

    NASA Astrophysics Data System (ADS)

    Akiyama, S.; Kawaji, K.; Fujihara, S.

    2013-12-01

    Since fault rupture in an earthquake can simultaneously cause ground motion and a tsunami, it is appropriate to evaluate the ground motion and the tsunami with a single fault model. In practice, however, separate source models are used for ground motion simulation and for tsunami simulation, because of the difficulty of evaluating both phenomena simultaneously. Many source models for the 2011 off the Pacific coast of Tohoku Earthquake have been proposed from inversion analyses of seismic observations or of tsunami observations. Most of these models show similar features, in which a large amount of slip is located on the shallower part of the fault area near the Japan Trench. This indicates that the ground motion and the tsunami can be evaluated with a single source model. Therefore, we examine the possibility of tsunami prediction using a fault model estimated from seismic observation records. In this study, we carry out tsunami simulations using the displacement field of oceanic crustal movements calculated from ground motion simulations of the 2011 off the Pacific coast of Tohoku Earthquake. We use two fault models by Yoshida et al. (2011), based on the teleseismic body waves and on the strong ground motion records, respectively. Although the fault models share a common feature, the amount of slip near the Japan Trench is larger in the model derived from the strong ground motion records than in that derived from the teleseismic body waves. First, large-scale ground motion simulations applying those fault models are performed for the whole of eastern Japan using a voxel-type finite element method. The synthetic waveforms computed from the simulations are generally consistent with the observation records of the K-NET (Kinoshita (1998)) and KiK-net stations (Aoi et al. (2000)) deployed by the National Research Institute for Earth Science and Disaster Prevention (NIED). Next, tsunami simulations are performed by finite difference calculation based on shallow water theory. The initial wave height for tsunami generation is estimated from the vertical displacement of the ocean bottom due to the crustal movements obtained from the ground motion simulations mentioned above. The results of the tsunami simulations are compared with observations from the GPS wave gauges to evaluate the validity of tsunami prediction using fault models based on seismic observation records.
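
    As a schematic illustration of the last two steps, the sketch below sets the initial sea-surface height equal to an assumed coseismic vertical seafloor displacement and marches a 1-D linear shallow-water model forward with finite differences; it is a simplified stand-in, not the voxel finite element code or the actual tsunami solver used in the study:

      import numpy as np

      nx, dx = 400, 2000.0              # 400 cells, 2 km spacing
      h, g = 4000.0, 9.81               # uniform depth [m] (assumed), gravity [m/s^2]
      dt = 0.5 * dx / np.sqrt(g * h)    # time step satisfying the CFL condition

      x = (np.arange(nx) + 0.5) * dx
      # Initial condition: sea surface displaced by an assumed 3 m seafloor uplift
      eta = 3.0 * np.exp(-((x - 300e3) / 40e3) ** 2)
      u = np.zeros(nx + 1)              # velocities on cell edges, closed boundaries

      for _ in range(300):              # march the linear shallow-water equations
          u[1:-1] -= g * dt / dx * (eta[1:] - eta[:-1])   # momentum equation
          eta -= h * dt / dx * (u[1:] - u[:-1])           # continuity equation

      print(f"max wave height after {300 * dt / 60:.0f} min: {eta.max():.2f} m")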

  9. Short-Period Surface Wave Based Seismic Event Relocation

    NASA Astrophysics Data System (ADS)

    White-Gaynor, A.; Cleveland, M.; Nyblade, A.; Kintner, J. A.; Homman, K.; Ammon, C. J.

    2017-12-01

    Accurate and precise seismic event locations are essential for a broad range of geophysical investigations. Superior location accuracy generally requires calibration with ground truth information, but superb relative location precision is often achievable independently. In explosion seismology, low-yield explosion monitoring relies on near-source observations, which results in a limited number of observations and challenges our ability to estimate locations. Incorporating more distant observations means relying on data with lower signal-to-noise ratios. For small, shallow events, the short-period (roughly 1/2 to 8 s period) fundamental-mode and higher-mode Rayleigh waves (including Rg) are often the most stable and visible portion of the waveform at local distances. Cleveland and Ammon [2013] have shown that teleseismic surface waves are valuable observations for constructing precise, relative event relocations. We extend the teleseismic surface wave relocation method and apply it to near-source distances using Rg observations from the Bighorn Arch Seismic Experiment (BASE) and the EarthScope USArray Transportable Array (TA) seismic stations. Specifically, we present relocation results using short-period fundamental- and higher-mode Rayleigh waves (Rg) in a double-difference relative event relocation for 45 delay-fired mine blasts and 21 borehole chemical explosions. Our preliminary efforts explore the sensitivity of the short-period surface waves to local geologic structure, source depth, explosion magnitude (yield), and explosion characteristics (single-shot vs. distributed source, etc.). Our results show that Rg and the first few higher-mode Rayleigh wave observations can be used to constrain the relative locations of shallow low-yield events.
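
    The core measurement in such a double-difference relocation is the inter-event delay time of the surface wave at a common station. Below is a minimal illustration of that step with synthetic Rg wave packets; the waveform shape, sample rate and delay are assumptions for demonstration, not data from the experiment:

      import numpy as np

      dt = 0.02                                    # sample interval [s] (assumed 50 Hz)
      t = np.arange(0.0, 60.0, dt)

      def rg(t0):
          # synthetic Rg wave packet centred at time t0 [s]
          return np.exp(-((t - t0) / 1.5) ** 2) * np.sin(2 * np.pi * 0.7 * (t - t0))

      rg_a = rg(20.0)                              # event A at one station
      rg_b = rg(23.4)                              # event B, same path, 3.4 s later

      xc = np.correlate(rg_b, rg_a, mode="full")   # cross-correlate the two records
      lag = (np.argmax(xc) - (len(rg_a) - 1)) * dt
      print(f"measured inter-event delay: {lag:.2f} s")   # ~ +3.4 s

      # Delays measured at many stations and azimuths are then inverted jointly
      # for the relative offsets between event locations.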

  10. Python Open source Waveform ExtractoR (POWER): an open source, Python package to monitor and post-process numerical relativity simulations

    NASA Astrophysics Data System (ADS)

    Johnson, Daniel; Huerta, E. A.; Haas, Roland

    2018-01-01

    Numerical simulations of Einstein’s field equations provide unique insights into the physics of compact objects moving at relativistic speeds and driven by strong gravitational interactions. Numerical relativity has played a key role in firmly establishing gravitational wave astrophysics as a new field of research, and it is now paving the way to establish whether the gravitational wave radiation emitted by compact binary mergers is accompanied by electromagnetic and astro-particle counterparts. As numerical relativity continues to blend into routine gravitational wave data analyses that validate the discovery of gravitational wave events, it is essential to develop open source tools to streamline these studies. Motivated by our own experience as users and developers of the open source, community software the Einstein Toolkit, we present an open source Python package that is ideally suited to monitor and post-process the data products of numerical relativity simulations, and to compute the gravitational wave strain at future null infinity in high performance environments. We showcase the application of this new package by post-processing a large numerical relativity catalog and extracting higher-order waveform modes from numerical relativity simulations of eccentric binary black hole mergers and neutron star mergers. This new software fills a critical void in the arsenal of tools provided by the Einstein Toolkit consortium to the numerical relativity community.
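
    Extracting strain from a numerical relativity run typically requires integrating the Weyl scalar psi4 twice in time; fixed-frequency integration is one standard way to do this while suppressing low-frequency drifts. The sketch below is a generic illustration of that step on a synthetic, real-valued signal and is not the POWER implementation itself; the sample spacing and cutoff are assumed values:

      import numpy as np

      def ffi_double_integral(psi4, dt, f0):
          # Fixed-frequency integration: h(f) = -psi4(f) / (2*pi*max(|f|, f0))**2
          xf = np.fft.rfft(psi4)
          f = np.fft.rfftfreq(len(psi4), d=dt)
          omega = 2 * np.pi * np.maximum(f, f0)   # clip frequencies below the cutoff
          return np.fft.irfft(-xf / omega ** 2, n=len(psi4))

      # Synthetic chirp-like psi4 time series as a stand-in for simulation output
      dt = 0.5                                    # sample spacing in code units (assumed)
      t = np.arange(0.0, 2000.0, dt)
      psi4 = np.sin(2 * np.pi * (0.01 + 1e-5 * t) * t) * np.exp(-((t - 1500) / 300) ** 2)

      h = ffi_double_integral(psi4, dt, f0=0.005)
      print("peak strain amplitude (code units):", np.abs(h).max())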

  11. SESAME: a software tool for the numerical dosimetric reconstruction of radiological accidents involving external sources and its application to the accident in Chile in December 2005.

    PubMed

    Huet, C; Lemosquet, A; Clairand, I; Rioual, J B; Franck, D; de Carlan, L; Aubineau-Lanièce, I; Bottollier-Depois, J F

    2009-01-01

    Estimating the dose distribution in a victim's body is a relevant indicator in assessing biological damage from exposure in the event of a radiological accident caused by an external source. This dose distribution can be assessed by physical dosimetric reconstruction methods, which can be carried out using experimental or numerical techniques. This article presents the laboratory-developed SESAME (Simulation of External Source Accident with MEdical images) tool, dedicated to the dosimetric reconstruction of radiological accidents through numerical simulations that combine voxel geometry with the MCNP(X) Monte Carlo radiation-material interaction code. The experimental validation of the tool using a photon field and its application to a radiological accident in Chile in December 2005 are also described.

  12. Quantified, Interactive Simulation of AMCW ToF Camera Including Multipath Effects

    PubMed Central

    Lambers, Martin; Kolb, Andreas

    2017-01-01

    In the last decade, Time-of-Flight (ToF) range cameras have gained increasing popularity in robotics, the automotive industry, and home entertainment. Despite technological developments, ToF cameras still suffer from error sources such as multipath interference or motion artifacts. Thus, simulation of ToF cameras, including these artifacts, is important for improving camera and algorithm development. This paper presents a physically based, interactive simulation technique for amplitude modulated continuous wave (AMCW) ToF cameras which, among other error sources, includes single-bounce indirect multipath interference based on an enhanced image-space approach. The simulation accounts for physical units down to the charge level accumulated in sensor pixels. Furthermore, we present the first quantified comparison for ToF camera simulators. We present bidirectional reflectance distribution function (BRDF) measurements for selected, purchasable materials in the near-infrared (NIR) range, craft real and synthetic scenes out of these materials, and quantitatively compare the range sensor data. PMID:29271888

  13. Quantified, Interactive Simulation of AMCW ToF Camera Including Multipath Effects.

    PubMed

    Bulczak, David; Lambers, Martin; Kolb, Andreas

    2017-12-22

    In the last decade, Time-of-Flight (ToF) range cameras have gained increasing popularity in robotics, the automotive industry, and home entertainment. Despite technological developments, ToF cameras still suffer from error sources such as multipath interference or motion artifacts. Thus, simulation of ToF cameras, including these artifacts, is important for improving camera and algorithm development. This paper presents a physically based, interactive simulation technique for amplitude modulated continuous wave (AMCW) ToF cameras which, among other error sources, includes single-bounce indirect multipath interference based on an enhanced image-space approach. The simulation accounts for physical units down to the charge level accumulated in sensor pixels. Furthermore, we present the first quantified comparison for ToF camera simulators. We present bidirectional reflectance distribution function (BRDF) measurements for selected, purchasable materials in the near-infrared (NIR) range, craft real and synthetic scenes out of these materials, and quantitatively compare the range sensor data.
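
    For context, the depth estimate of an ideal AMCW ToF pixel follows from four correlation samples taken at reference phase offsets of 0°, 90°, 180° and 270°. The sketch below reproduces that textbook relation with assumed numbers and deliberately ignores the multipath and charge-level effects modelled in the paper:

      import numpy as np

      c = 299_792_458.0                                # speed of light [m/s]
      f_mod = 20e6                                     # assumed modulation frequency [Hz]

      true_depth = 2.5                                 # metres
      phi_true = 4 * np.pi * f_mod * true_depth / c    # round-trip phase shift

      # Ideal correlation samples at 0, 90, 180, 270 degrees of reference offset
      offsets = np.array([0.0, 0.5, 1.0, 1.5]) * np.pi
      A = 1.0 + 0.5 * np.cos(phi_true + offsets)

      phi = np.arctan2(A[3] - A[1], A[0] - A[2])       # recovered phase
      depth = c * (phi % (2 * np.pi)) / (4 * np.pi * f_mod)
      print(f"recovered depth: {depth:.3f} m")         # ~2.5 m, within the ambiguity range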

  14. A Single Hot Event That Does Not Affect Survival but Decreases Reproduction in the Diamondback Moth, Plutella xylostella

    PubMed Central

    Zhang, Wei; Zhao, Fei; Hoffmann, Ary A.; Ma, Chun-Sen

    2013-01-01

    Extremely hot events (usually involving a few hours at extremely high temperatures in summer) are expected to increase in frequency in temperate regions under global warming. The impact of these events is generally overlooked in insect population prediction because they are unlikely to cause widespread mortality; however, reproduction may be affected by them. In this study, we examined such stress effects in the diamondback moth, Plutella xylostella. We simulated a single extreme hot day (a maximum of 40°C lasting for 3, 4 or 5 h) of the kind increasingly experienced under field conditions. This event had no detrimental effects on immediate mortality, copulation duration, mating success, longevity or lifetime fecundity, but females stressed for 3 or 4 h produced 21% fewer hatched eggs because of a decline in the number and hatching success of eggs laid on the first two days. These negative effects on reproduction were no longer evident in the following days. Male heat exposure led to a similar but smaller effect on fertile egg production, and exposure extended the pre-mating period in both sexes. Our results indicate that a single hot day can have detrimental effects on reproduction, particularly through maternal effects on egg hatching, and can thereby influence the population dynamics of the diamondback moth. PMID:24116081

  15. Simulation of Runoff Changes Caused by Cropland to Forest Conversion in the Upper Yangtze River Region, SW China

    PubMed Central

    Yu, Pengtao; Wang, Yanhui; Coles, Neil; Xiong, Wei; Xu, Lihong

    2015-01-01

    The "Grain for Green Project" is a country-wide ecological program to convert marginal cropland to forest that has been implemented in China since 2002. To quantify the influence of this significant vegetation change, the Guansihe Hydrological (GSH) Model, a validated physically based distributed hydrological model, was applied to simulate runoff responses to land use change in the Guansihe watershed, a catchment of only 21.1 km2 located in the upper reaches of the Yangtze River basin in southwestern China. Runoff responses to two single rainfall events, of 90 mm and 206 mm respectively, were simulated for 16 scenarios of cropland to forest conversion. The model simulations indicated that the total runoff generated after conversion to forest was strongly dependent on whether the land had initially been dry cropland without standing water in the fields or constructed (walled) paddy fields. The simulated total runoff generated from the two rainfall events displayed limited variation for the conversion of dry croplands to forest, whereas it decreased strongly after paddy fields were converted to forest. The effect of paddy terraces on runoff generation depended on the rainfall characteristics and the antecedent moisture (or saturation) conditions in the fields. The reduction in simulated runoff generated from intense rainfall events suggested that afforestation and terracing might be effective in managing runoff and have the potential to mitigate flooding in southwestern China. PMID:26192181

  16. Tsunami Source Identification on the 1867 Tsunami Event Based on the Impact Intensity

    NASA Astrophysics Data System (ADS)

    Wu, T. R.

    2014-12-01

    The 1867 Keelung tsunami event has drawn significant attention in Taiwan, not only because the source region is very close to three nuclear power plants that are only about 20 km away from Taipei City, but also because of the ambiguity of the tsunami sources. This event is unique in many respects. First, it was documented in numerous accounts, in several languages, with similar descriptions. Second, the tsunami deposit was discovered recently. Based on these accounts, an earthquake, a 7-meter tsunami height, volcanic smoke, and oceanic smoke were observed. Previous studies concluded that this tsunami was generated by an earthquake with a magnitude of around Mw 7.0 along the Shanchiao Fault. However, numerical results showed that even a Mw 8.0 earthquake is not able to generate a 7-meter tsunami. Considering the steep bathymetry and intense volcanic activity along the Keelung coast, one reasonable hypothesis is that different types of tsunami sources existed, such as a submarine landslide or a volcanic eruption. In order to examine this scenario, last year we proposed the Tsunami Reverse Tracing Method (TRTM) to find the possible locations of the tsunami sources. This method helped us rule out impossible far-field tsunami sources. However, the near-field sources remained unclear. This year, we further developed a new method named Impact Intensity Analysis (IIA). In the IIA method, the study area is divided into a sequence of tsunami sources, and numerical simulations of each source are conducted with the COMCOT (Cornell Multi-grid Coupled Tsunami Model) tsunami model. The resulting wave height from each source at the study site is then collected and plotted. This method successfully helped us identify the impact of each potential near-field source. The IIA result (Fig. 1) shows that the 1867 tsunami was a multi-source event: a mild tsunami was triggered by a Mw 7.0 earthquake and then followed by submarine landslide or volcanic events. A near-field submarine landslide and a landslide at Mien-Hwa Canyon are the most probable scenarios. As for the volcano scenarios, a volcanic eruption located about 10 km away from Keelung with a disturbed water volume of 2.5x10^8 m3 might be a candidate. The detailed scenario results will be presented in the full paper.
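
    The IIA bookkeeping itself is simple: tile the offshore area into candidate sources, simulate each one, and record the wave height it produces at the site of interest. In the sketch below the grid extent is assumed and simulate_max_height() is a hypothetical placeholder for a full COMCOT run, not a real interface:

      import numpy as np

      def simulate_max_height(src_lon, src_lat, site=(121.74, 25.15)):
          # Placeholder for a tsunami simulation: impact decays with distance
          # from the candidate source (illustrative only, not COMCOT).
          d = np.hypot(src_lon - site[0], src_lat - site[1])
          return np.exp(-3.0 * d)

      lons = np.linspace(121.5, 122.5, 21)      # candidate source grid (assumed extent)
      lats = np.linspace(25.0, 25.8, 17)
      impact = np.array([[simulate_max_height(lo, la) for lo in lons] for la in lats])

      i, j = np.unravel_index(np.argmax(impact), impact.shape)
      print(f"largest impact from the source near ({lons[j]:.2f} E, {lats[i]:.2f} N)")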

  17. Realizing the measure-device-independent quantum-key-distribution with passive heralded-single photon sources

    PubMed Central

    Wang, Qin; Zhou, Xing-Yu; Guo, Guang-Can

    2016-01-01

    In this paper, we put forward a new approach towards realizing measurement-device-independent quantum key distribution with passive heralded single-photon sources. In this approach, both Alice and Bob prepare a parametric down-conversion source, where the heralding photons are labeled according to the different types of clicks from the local detectors, and the heralded photons can correspondingly be marked with different tags at the receiver's side. One can then obtain four sets of data using only one intensity of pump light, by observing the different kinds of clicks of the local detectors. By employing the newest formulae for parameter estimation, we can achieve a very precise prediction of the two-single-photon pulse contribution. Furthermore, by carrying out corresponding numerical simulations, we compare the new method with other practical schemes of measurement-device-independent quantum key distribution. We demonstrate that our proposed passive scheme can exhibit remarkable improvement over the conventional three-intensity decoy-state measurement-device-independent quantum key distribution with either heralded single-photon sources or weak coherent sources. Besides, it does not need intensity modulation and can thus diminish the source errors present in several other active decoy-state methods. Therefore, when intensity modulation errors are taken into account, our new method shows an even greater advantage. PMID:27759085

  18. Coal Mining Induced Seismicity in the Ruhr Area, Germany

    NASA Astrophysics Data System (ADS)

    Bischoff, Monika; Cete, Alpan; Fritschen, Ralf; Meier, Thomas

    2010-02-01

    Over the last 25 years mining-induced seismicity in the Ruhr area has continuously been monitored by the Ruhr-University Bochum. About 1,000 seismic events with local magnitudes in the range 0.7 ≤ ML ≤ 3.3 are located every year; for example, 1,336 events were located in 2006. General characteristics of induced seismicity in the entire Ruhr area are spatial and temporal correlation with mining activity and a nearly constant energy release per unit time. This suggests that induced stresses are released rapidly by many small events. The magnitude-frequency distribution follows a Gutenberg-Richter relation, which results from combining the distributions of single longwalls that themselves show large variability. A high b-value of about 2 was found, indicating a lack of large-magnitude events. Local analyses of single longwalls indicate that various factors such as local geology and mine layout lead to significant differences in seismicity. Stress redistribution acts very locally, since differences on a small scale of some hundreds of meters are observed. A regional relation between seismic moment M0 and local magnitude ML was derived. The magnitude-frequency distribution of a single longwall in Hamm was studied in detail and shows a maximum at ML = 1.4, corresponding to an estimated characteristic source area of about 2,200 m2. Sandstone layers in the hanging wall or footwall of the active longwall might fail in these characteristic events. Source mechanisms can mostly be explained by shear failure of two different types above and below the longwall. Fault plane solutions of typical events are consistent with steeply dipping fracture planes parallel to the longwall face and nearly vertical dislocation directed towards the goaf. We also derive an empirical relation for the decay of ground velocity with epicenter distance and compare the maximum observed ground velocity to the local magnitude. This is of considerable public interest because about 30 events with ML ≥ 1.2 are felt each month by people living in the mining regions. Our relations indicate, for example, that an event in Hamm with a peak ground velocity of 6 mm/s, which corresponds to a local magnitude ML between 1.7 and 2.3, is likely to be felt within a radius of about 2.3 km from the event.
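
    A b-value like the one quoted above is typically obtained with the Aki/Utsu maximum-likelihood estimator. A minimal sketch on a synthetic catalogue follows; the magnitudes are random stand-ins, not the Ruhr data:

      import numpy as np

      rng = np.random.default_rng(0)
      m_min = 0.7                                  # assumed completeness magnitude
      # Synthetic Gutenberg-Richter magnitudes with b ~ 2 above m_min
      mags = m_min + rng.exponential(scale=1.0 / (2.0 * np.log(10)), size=1336)

      # Aki (1965) maximum-likelihood estimate; for binned catalogues replace
      # m_min by m_min - (bin width)/2.
      b = np.log10(np.e) / (mags.mean() - m_min)
      b_err = b / np.sqrt(len(mags))               # approximate standard error
      print(f"b-value = {b:.2f} +/- {b_err:.2f}")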

  19. Speedup computation of HD-sEMG signals using a motor unit-specific electrical source model.

    PubMed

    Carriou, Vincent; Boudaoud, Sofiane; Laforet, Jeremy

    2018-01-23

    Nowadays, bio-reliable modeling of muscle contraction is becoming more accurate and complex. This increasing complexity induces a significant increase in computation time, which prevents the use of such models in certain applications and studies. Accordingly, the aim of this work is to significantly reduce the computation time of high-density surface electromyogram (HD-sEMG) generation. This is done through a new motor unit (MU)-specific electrical source model based on the fibers composing the MU. In order to assess the efficiency of this approach, we computed the normalized root mean square error (NRMSE) between several simulations of a single generated MU action potential (MUAP) using the usual fiber electrical sources and the MU-specific electrical source. This NRMSE was also computed for five different simulation sets in which hundreds of MUAPs are generated and summed into HD-sEMG signals. The obtained results display less than 2% error on the generated signals compared to the same signals generated with fiber electrical sources. Moreover, the computation time of the HD-sEMG signal generation model is reduced by about 90% compared to the fiber electrical source model. Using this model with MU electrical sources, we can simulate HD-sEMG signals of a physiological muscle (hundreds of MUs) in less than an hour on a standard workstation. Graphical Abstract: Overview of the simulation of HD-sEMG signals using the fiber scale and the MU scale. Upscaling the electrical source to the MU scale reduces the computation time by 90% while inducing only small deviations in the simulated HD-sEMG signals.
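
    A minimal sketch of the NRMSE comparison described above; the normalisation by the peak-to-peak range of the reference signal and the synthetic MUAP waveforms are assumptions for illustration, since several normalisation conventions exist:

      import numpy as np

      def nrmse(reference, test):
          # Root-mean-square error normalised by the reference peak-to-peak range
          rmse = np.sqrt(np.mean((reference - test) ** 2))
          return rmse / (reference.max() - reference.min())

      t = np.linspace(0.0, 0.05, 2000)                  # 50 ms window
      muap_fibers = np.exp(-((t - 0.02) / 0.004) ** 2) * np.sin(2 * np.pi * 300 * t)
      muap_mu = muap_fibers + 0.02 * np.sin(2 * np.pi * 800 * t)   # small deviation

      print(f"NRMSE = {100 * nrmse(muap_fibers, muap_mu):.2f} %")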

  20. Study on beam geometry and image reconstruction algorithm in fast neutron computerized tomography at NECTAR facility

    NASA Astrophysics Data System (ADS)

    Guo, J.; Bücherl, T.; Zou, Y.; Guo, Z.

    2011-09-01

    Investigations of the fast neutron beam geometry at the NECTAR facility are presented. The results of MCNP simulations and experimental measurements of the beam distributions at NECTAR are compared. Boltzmann functions are used to describe the beam profile in the detection plane, assuming the area source to be made up of a large number of single neutron point sources. An iterative algebraic reconstruction algorithm is developed, implemented and verified with both simulated and measured projection data. The feasibility of improved reconstruction in fast neutron computerized tomography at the NECTAR facility is demonstrated.
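
    A generic Kaczmarz-type algebraic reconstruction (ART) update of the kind referred to above, shown on a toy system A x = p where the rows of A hold ray-path weights and p holds the measured projections; the geometry and data are random stand-ins, not the NECTAR setup:

      import numpy as np

      rng = np.random.default_rng(1)
      n_pixels, n_rays = 64, 200
      A = rng.random((n_rays, n_pixels))        # toy projection geometry
      x_true = rng.random(n_pixels)             # object to recover
      p = A @ x_true                            # noise-free projections

      x = np.zeros(n_pixels)
      lam = 0.5                                 # relaxation factor
      for sweep in range(50):                   # repeated sweeps over all rays
          for i in range(n_rays):
              a = A[i]
              x += lam * (p[i] - a @ x) / (a @ a) * a

      print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))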
