Aftershock Decay Rates in the Iranian Plateau
NASA Astrophysics Data System (ADS)
Ommi, S.; Zafarani, H.; Zare, M.
2016-07-01
Motivated by the desire to have more information following damaging events, the main purpose of this article is to study aftershock sequence parameters in the Iranian plateau. To this end, the catalogue of Iranian earthquakes from 2002 to the end of 2013 has been collected and homogenized, from which 15 earthquakes have been selected to study their aftershock decay rates. For different tectonic provinces, the completeness magnitudes (Mc) of the earthquake catalogue have been calculated in different time intervals. Also, the Mc variability in spatial and temporal windows has been determined for each selected event. For major Iranian earthquakes, catalogues of aftershocks have been compiled using three declustering methods: first, the classical windowing method of Gardner and Knopoff (Bull Seismol Soc Am 64:1363-1367, 1974); second, a modified version of this method using spatial windowing based on the Wells and Coppersmith (Bull Seismol Soc Am 84:974-1002, 1994) relations; and third, the Burkhard and Grünthal (Swiss J Geosci 102:149-188, 2009) scheme. The effect of the temporal window has also been investigated using periods of 1 month, 100 days, and 1 year in the Gardner and Knopoff method. In the next step, the modified Omori law coefficients have been calculated for the 15 selected earthquakes. The calibrated regional generic model describing the temporal and magnitude distribution of aftershocks is of interest for time-dependent seismic hazard forecasts. The regional characteristics of the aftershock decay rates have been studied for the selected Iranian earthquakes in the Alborz, Zagros, and Central Iran regions, considering their different seismotectonic regimes. However, due to the lack of sufficient data, no results have been reported for the Kopeh-Dagh and Makran seismotectonic regions.
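The modified Omori law fitted in this study has a simple closed form for both the rate and the expected count. A minimal sketch (the K, c, p values below are invented for demonstration, not taken from the article):

```python
import math

def omori_rate(t, K, c, p):
    """Modified Omori law: aftershock rate n(t) = K / (t + c)**p, t in days."""
    return K / (t + c) ** p

def omori_count(t1, t2, K, c, p):
    """Expected number of aftershocks between times t1 and t2 (closed-form
    integral of the rate; the p == 1 case needs the logarithmic form)."""
    if p == 1.0:
        return K * math.log((t2 + c) / (t1 + c))
    return K / (1.0 - p) * ((t2 + c) ** (1.0 - p) - (t1 + c) ** (1.0 - p))

# Hypothetical sequence parameters: expected events in the first 30 days
n30 = omori_count(0.0, 30.0, K=100.0, c=0.1, p=1.1)
```

Estimating K, c, and p per sequence, as done for the 15 selected earthquakes, is typically a maximum-likelihood fit over the aftershock occurrence times.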
NASA Astrophysics Data System (ADS)
Adamaki, Aggeliki; Papadimitriou, Eleftheria; Tsaklidis, George; Karakostas, Vassilios
2011-08-01
Aftershock rates seem to follow a power-law decay, but assessing the aftershock frequency immediately after an earthquake, as well as during the evolution of a seismic excitation, remains a challenge for imminent seismic hazard estimation. The purpose of this work is to study the temporal distribution of triggered earthquakes on short time scales following a strong event, and a multiple seismic sequence was chosen for this purpose. Statistical models are applied to the 1981 Corinth Gulf sequence, comprising three strong (M = 6.7, M = 6.5, and M = 6.3) events between 24 February and 4 March. The non-homogeneous Poisson process outperforms the simple Poisson process in modeling the aftershock sequence, whereas the Weibull process is more appropriate for capturing the short-term behavior but not for describing the seismicity in the long term. The aftershock data define a smooth curve of declining rate, and a long-tailed theoretical model fits the data better than a rapidly declining exponential function, as supported by the quantitative results derived from the survival function. An autoregressive model is also applied to the seismic sequence, shedding more light on the stationarity of the time series.
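A non-homogeneous Poisson process with an Omori-type intensity, the kind of model compared above, can be simulated by Lewis-Shedler thinning. A minimal sketch with invented parameters:

```python
import random

def simulate_nhpp_omori(K, c, p, t_max, seed=0):
    """Simulate a non-homogeneous Poisson process with Omori intensity
    lambda(t) = K / (t + c)**p on [0, t_max] by Lewis-Shedler thinning."""
    rng = random.Random(seed)
    lam_max = K / c ** p  # the intensity is maximal at t = 0
    times, t = [], 0.0
    while True:
        t += rng.expovariate(lam_max)  # candidate from the bounding process
        if t > t_max:
            break
        if rng.random() < (K / (t + c) ** p) / lam_max:  # keep w.p. lambda(t)/lam_max
            times.append(t)
    return times

# Invented parameters: roughly 150 events expected over 10 days, mostly early on
events = simulate_nhpp_omori(K=50.0, c=0.5, p=1.0, t_max=10.0)
```

Fitting such a model to real interevent times, as done for the Corinth Gulf sequence, then reduces to comparing likelihoods between the homogeneous and non-homogeneous intensities.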
Aftershock production rate of driven viscoelastic interfaces
NASA Astrophysics Data System (ADS)
Jagla, E. A.
2014-10-01
We study analytically and by numerical simulations the statistics of the aftershocks generated after large avalanches in models of interface depinning that include viscoelastic relaxation effects. We find in all the analyzed cases that the decay law of aftershocks with time can be understood by considering the typical roughness of the interface and its evolution due to relaxation. In models where there is a single viscoelastic relaxation time there is an exponential decay of the number of aftershocks with time. In models in which viscoelastic relaxation is wave-vector dependent we typically find a power-law dependence of the decay rate that is compatible with the Omori law. The factors that determine the value of the decay exponent are analyzed.
Do aftershock probabilities decay with time?
Michael, Andrew J.
2012-01-01
So, do aftershock probabilities decay with time? Consider a thought experiment in which we are at the time of the mainshock and ask how many aftershocks will occur a day, week, month, year, or even a century from now. First we must decide how large a window to use around each point in time. Let's assume that, as we go further into the future, we are asking a less precise question. Perhaps a day from now means 1 day ± 10% of a day, a week from now means 1 week ± 10% of a week, and so on. If we ignore c because it is a small fraction of a day (e.g., Reasenberg and Jones, 1989, hereafter RJ89), and set p = 1 because it is usually close to 1 (its value in the original Omori law), then the rate of earthquakes (K/t) decays as 1/t. If the length of the windows being considered increases proportionally to t, then the number of earthquakes at any time from now is the same because the rate decrease is canceled by the increase in the window duration. Under these conditions we should never think "It's a bit late for this to be an aftershock."
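The arithmetic of this thought experiment is easy to check: with p = 1 and c neglected, the expected count in a window that widens in proportion to t is independent of t. A quick numerical illustration (K chosen arbitrarily):

```python
import math

def expected_count(t, K=1.0, frac=0.1):
    """Expected number of aftershocks in the window [t*(1-frac), t*(1+frac)]
    for a rate K/t (Omori law with p = 1 and c neglected)."""
    a, b = t * (1.0 - frac), t * (1.0 + frac)
    return K * math.log(b / a)  # closed form of the integral of K/t over [a, b]

# A day, a week, a month, a year, and a century from now (times in days):
counts = [expected_count(t) for t in (1.0, 7.0, 30.0, 365.0, 36525.0)]
# every window holds the same expected number, K * ln(1.1 / 0.9)
```

The t cancels inside the logarithm, which is exactly why the rate decrease is offset by the growing window.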
NASA Astrophysics Data System (ADS)
Toda, S.; Stein, R. S.
2013-12-01
The 11 March 2011 M = 9.0 Tohoku-oki, Japan, earthquake brought an unprecedented broad increase in seismicity over inland Japan and far offshore. The seismicity rate increase was observed at distances of up to 425 km from the locus of high seismic slip on the megathrust, which roughly corresponds to the areas with a Coulomb stress increase of over 0.1 bar (e.g., Toda et al., 2011). Such a stress perturbation across the entire eastern Honshu island gives us a great opportunity to test one of the hypotheses of rate- and state-dependent friction (Dieterich, 1994): that aftershock duration (ta) is inversely proportional to fault stressing rate. The Tohoku-oki mainshock indeed started a stopwatch simultaneously for all the off-fault and on-fault aftershocks in various tectonic situations. We have carefully examined the aftershock decays, fitting the Omori-Utsu formula in several activated regions, including the 2011 source fault, several inland areas of Tohoku (Akita, Iwaki, northern Sendai, and Fukushima), the Tokyo metropolitan area, Choshi (east of Tokyo), the Izu Peninsula, and areas along the most active Itoigawa-Shizuoka Tectonic Line (ISTL) in central Honshu. Comparing the regional aftershock decays with the background seismicity rates estimated from the JMA catalog from 2000 to 2010, we measured ta. One of the shortest durations was measured at the Izu Peninsula, where the heightened seismicity rapidly returned to normal within one month. Overall seismicity in the Tohoku mainshock zone has mostly returned to normal within 2-3 years. Both regions are characterized by high loading rates due to plate collision and subduction. Seismicity beneath Tokyo, also characterized by complex plate interfaces and brought an average of 1 bar closer to failure, has not followed a simple Omori decay but has settled into a new, higher rate after a rapid decay. In contrast to these highly deformed regions, current seismicity in the slowly loading Tohoku inland regions is still much higher than the background rate, which
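The hypothesis tested above, t_a = Aσ_n / (stressing rate) from Dieterich (1994), is a one-line relation. A sketch with invented numbers, for illustration only:

```python
def aftershock_duration_years(a_sigma_mpa, stressing_rate_mpa_per_year):
    """Dieterich (1994): aftershock duration t_a = A*sigma_n / tau_dot,
    i.e. inversely proportional to the fault stressing rate."""
    return a_sigma_mpa / stressing_rate_mpa_per_year

# Invented values: the same A*sigma_n under fast vs. slow loading
fast = aftershock_duration_years(0.04, 0.02)    # fast-loading region
slow = aftershock_duration_years(0.04, 0.002)   # slow-loading region: 10x longer
```

Under this relation, rapidly loaded regions (Izu, the megathrust) should shed their aftershocks quickly, while slowly loaded inland regions keep elevated rates far longer, which is the pattern the abstract reports.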
Self-similar aftershock rates.
Davidsen, Jörn; Baiesi, Marco
2016-08-01
In many important systems exhibiting crackling noise (an intermittent, avalanche-like relaxation response with power-law, and thus self-similar, distributed event sizes), the "laws" for the rate of activity after large events are not consistent with the overall self-similar behavior expected on theoretical grounds. This is particularly true for the case of seismicity, and a satisfying solution to this paradox has remained outstanding. Here, we propose a generalized description of the aftershock rates which is both self-similar and consistent with all other known self-similar features. Comparing our theoretical predictions with high-resolution earthquake data from Southern California we find excellent agreement, providing particularly clear evidence for a unified description of aftershocks and foreshocks. This may offer an improved framework for time-dependent seismic hazard assessment and earthquake forecasting. PMID:27627324
Decay of aftershock density with distance indicates triggering by dynamic stress.
Felzer, K R; Brodsky, E E
2006-06-01
The majority of earthquakes are aftershocks, yet aftershock physics is not well understood. Many studies suggest that static stress changes trigger aftershocks, but recent work suggests that shaking (dynamic stresses) may also play a role. Here we measure the decay of aftershocks as a function of distance from magnitude 2-6 mainshocks in order to clarify the aftershock triggering process. We find that for short times after the mainshock, when low background seismicity rates allow for good aftershock detection, the decay is well fitted by a single inverse power law over distances of 0.2-50 km. The consistency of the trend indicates that the same triggering mechanism is working over the entire range. As static stress changes at the more distant aftershocks are negligible, this suggests that dynamic stresses may be triggering all of these aftershocks. We infer that the observed aftershock density is consistent with the probability of triggering aftershocks being nearly proportional to seismic wave amplitude. The data are not fitted well by models that combine static stress change with the evolution of frictionally locked faults. PMID:16760974
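A power-law distance decay like the one reported here is usually estimated as a straight-line slope in log-log space. A minimal sketch on synthetic densities (the exponent and the 0.2-50 km range echo the study; the sample points and densities are placeholders):

```python
import math

def fit_power_law_exponent(r, density):
    """Least-squares slope of log(density) vs. log(r); for density ~ r**(-gamma)
    this recovers gamma."""
    xs = [math.log(v) for v in r]
    ys = [math.log(d) for d in density]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    return -slope

# Synthetic densities that follow r**-1.35 exactly over 0.2-50 km
r_km = [0.2, 0.5, 1.0, 2.0, 5.0, 10.0, 20.0, 50.0]
dens = [v ** -1.35 for v in r_km]
gamma = fit_power_law_exponent(r_km, dens)
```

On real catalogs the densities come from binned aftershock counts per unit area, and maximum-likelihood estimators are preferred over this simple regression, but the single-exponent test is the same idea.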
NASA Astrophysics Data System (ADS)
Javed, F.; Hainzl, S.; Aoudia, A.; Qaisar, M.
2016-05-01
We model the spatial and temporal evolution of the October 8, 2005 Kashmir earthquake's aftershock activity using the rate- and state-dependent friction model, incorporating uncertainties in the computed coseismic stress perturbations. We estimated the best fitting values for the frictional resistance "Aσn", the background seismicity rate "r", and the coefficient of stress variation "CV" using the maximum log-likelihood method. For the whole Kashmir earthquake sequence, we measure a frictional resistance Aσn ~ 0.0185 MPa, r ~ 20 M3.7+ events/year, and CV = 0.94 ± 0.01. The forecasted spatial and temporal seismicity rate of the modeled aftershocks fits well with the spatial and temporal distribution of observed aftershocks, which occurred in the regions of positive static stress change as well as in the apparent stress shadow region. To quantify the effect of secondary aftershock triggering, we have re-run the estimations for 100 stochastically declustered catalogs, showing that the effect of aftershock-induced secondary stress changes is minor compared to the overall uncertainties, and that the stress variability related to uncertain slip-model inversions and receiver mechanisms remains the major factor in providing a reasonable data fit.
Ogata, Y.; Jones, L.M.; Toda, S.
2003-01-01
Seismic quiescence has attracted attention as a possible precursor to a large earthquake. However, sensitive detection of quiescence requires accurate modeling of normal aftershock activity. We apply the epidemic-type aftershock sequence (ETAS) model, a natural extension of the modified Omori formula for aftershock decay that allows further clusters (secondary aftershocks) within an aftershock sequence. The Hector Mine aftershock activity has been normal relative to the decay predicted by the ETAS model during the 14 months of available data. In contrast, although the aftershock sequence of the 1992 Landers earthquake (M = 7.3), including the 1992 Big Bear earthquake (M = 6.4) and its aftershocks, fits the ETAS very well up until about 6 months after the main shock, the activity then showed a clear lowering relative to the modeled rate (relative quiescence) that lasted nearly 7 years, leading up to the Hector Mine earthquake (M = 7.1) in 1999. Specifically, the relative quiescence occurred only in the shallow aftershock activity, down to depths of 5-6 km. The sequence of deeper events showed clear, normal aftershock activity well fitted by the ETAS throughout the whole period. We discuss several physical explanations for these results. Among them, we strongly suspect aseismic slip within the Hector Mine rupture source that could inhibit the crustal relaxation process within "shadow zones" of the Coulomb failure stress change. Furthermore, the aftershock activity of the 1992 Joshua Tree earthquake (M = 6.1) lowered sharply on the same day as the main shock, which can be explained by a similar scenario.
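The ETAS conditional intensity used above is the modified Omori formula summed over all prior events, with a productivity term that grows exponentially with magnitude. A minimal sketch (all parameter values are invented):

```python
import math

def etas_intensity(t, events, mu, K, alpha, c, p, m0):
    """ETAS conditional intensity: background rate mu plus a modified-Omori
    contribution K*exp(alpha*(m_i - m0)) / (t - t_i + c)**p from each prior
    event (t_i, m_i)."""
    lam = mu
    for t_i, m_i in events:
        if t_i < t:
            lam += K * math.exp(alpha * (m_i - m0)) / (t - t_i + c) ** p
    return lam

# Invented history: a Landers-size and a Big Bear-size event (days, magnitude)
history = [(0.0, 7.3), (90.0, 6.4)]
lam = etas_intensity(100.0, history, mu=0.02, K=0.05, alpha=1.0, c=0.01, p=1.1, m0=4.0)
```

Relative quiescence is then diagnosed by comparing observed counts against the counts this intensity predicts when extrapolated forward.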
Static stress triggering explains the empirical aftershock distance decay
NASA Astrophysics Data System (ADS)
Hainzl, Sebastian; Moradpour, Javad; Davidsen, Jörn
2014-12-01
The shape of the spatial aftershock decay is sensitive to the triggering mechanism and thus particularly useful for discriminating between static and dynamic stress triggering. For California seismicity, it has recently been recognized that its form is more complicated than typically assumed, consisting of three different regimes with transitions at the scale of the rupture length and the thickness of the crust. The intermediate distance range is characterized by a relatively small decay exponent of 1.35, previously attributed to dynamic stress triggering. We perform comprehensive simulations of a simple clock-advance model, in which the number of aftershocks is simply proportional to the Coulomb stress change, to test whether the empirical result can be explained by static stress triggering. Similarly to the observations, the results show three scaling regimes. For simulations adapted to the depths and focal mechanisms observed in California, we find a remarkable agreement with the observations over the whole distance range for a fault distribution with a fractal dimension of 1.8, which is shown to be in good agreement with an independent analysis of California seismicity.
Aftershock triggering by postseismic stresses: A study based on Coulomb rate-and-state models
NASA Astrophysics Data System (ADS)
Cattania, Camilla; Hainzl, Sebastian; Wang, Lifeng; Enescu, Bogdan; Roth, Frank
2015-04-01
The spatiotemporal clustering of earthquakes is a feature of medium- and short-term seismicity, indicating that earthquakes interact. However, controversy exists about the physical mechanism behind aftershock triggering: static stress transfer and reloading by postseismic processes have been proposed as explanations. In this work, we use a Coulomb rate-and-state model to study the role of coseismic and postseismic stress changes on aftershocks and focus on two processes: creep on the main shock fault plane (afterslip) and secondary aftershock triggering by previous aftershocks. We model the seismic response to Coulomb stress changes using the Dieterich constitutive law and focus on two events: the Parkfield, Mw = 6.0, and the Tohoku, Mw = 9.0, earthquakes. We find that modeling secondary triggering systematically improves the maximum log likelihood fit of the sequences. The effect of afterslip is more subtle and difficult to assess for near-fault events, where model errors are largest. More robust conclusions can be drawn for off-fault aftershocks: following the Tohoku earthquake, afterslip promotes shallow crustal seismicity in the Fukushima region. Simple geometrical considerations indicate that afterslip-induced stress changes may have been significant on trench-parallel crustal fault systems following several of the largest recorded subduction earthquakes. Moreover, the time dependence of afterslip strongly enhances its triggering potential: seismicity triggered by an instantaneous stress change decays more quickly than seismicity triggered by gradual loading, and as a result we find afterslip to be particularly important between a few weeks and a few months after the main shock.
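The Dieterich constitutive law referenced above gives a closed-form seismicity-rate response for an instantaneous Coulomb stress step. A sketch with invented numbers (this covers the stress-step case only; the gradual afterslip loading studied in the paper requires a time-dependent forcing term):

```python
import math

def dieterich_rate(t, r, dcfs, a_sigma, t_a):
    """Dieterich (1994) seismicity rate after an instantaneous Coulomb stress
    step dcfs: R(t) = r / ((exp(-dcfs/a_sigma) - 1) * exp(-t/t_a) + 1).
    R jumps to r*exp(dcfs/a_sigma) at t = 0 and relaxes back to r over ~t_a."""
    return r / ((math.exp(-dcfs / a_sigma) - 1.0) * math.exp(-t / t_a) + 1.0)

# Invented numbers: background 10 events/yr, +0.1 MPa step, A*sigma = 0.02 MPa
r0 = dieterich_rate(0.0, r=10.0, dcfs=0.1, a_sigma=0.02, t_a=5.0)
```

A negative dcfs produces a stress shadow (a suppressed rate that recovers over t_a), which is how such models represent both triggering and quiescence.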
Decay of aftershock density with distance does not indicate triggering by dynamic stress
Richards-Dinger, K.; Stein, R.S.; Toda, S.
2010-01-01
Resolving whether static or dynamic stress triggers most aftershocks and subsequent mainshocks is essential to understand earthquake interaction and to forecast seismic hazard. Felzer and Brodsky examined the distance distribution of earthquakes occurring in the first five minutes after 2 ≤ M < 3 and 3 ≤ M < 4 mainshocks and found that their magnitude M ≥ 2 aftershocks showed a uniform power-law decay with slope −1.35 out to 50 km from the mainshocks. From this they argued that the distance decay could be explained only by dynamic triggering. Here we propose an alternative explanation for the decay, and subject their hypothesis to a series of tests, none of which it passes. At distances more than 300 m from the 2 ≤ M < 3 mainshocks, the seismicity decay 5 min before the mainshocks is indistinguishable from the decay five minutes afterwards, indicating that the mainshocks have no effect at distances outside their static triggering range. Omori temporal decay, the fundamental signature of aftershocks, is absent at distances exceeding 10 km from the mainshocks. Finally, the distance decay is found among aftershocks that occur before the arrival of the seismic wave front from the mainshock, which violates causality. We argue that Felzer and Brodsky implicitly assume that the first of two independent aftershocks along a fault rupture triggers the second, and that the first of two shocks in a creep- or intrusion-driven swarm triggers the second, when this need not be the case.
Parsons, T.
2002-01-01
Triggered earthquakes can be large, damaging, and lethal as evidenced by the 1999 shocks in Turkey and the 2001 earthquakes in El Salvador. In this study, earthquakes with Ms ≥ 7.0 from the Harvard centroid moment tensor (CMT) catalog are modeled as dislocations to calculate shear stress changes on subsequent earthquake rupture planes near enough to be affected. About 61% of earthquakes that occurred near (defined as having shear stress change ∣Δτ∣ ≥ 0.01 MPa) the Ms ≥ 7.0 shocks are associated with calculated shear stress increases, while ~39% are associated with shear stress decreases. If earthquakes associated with calculated shear stress increases are interpreted as triggered, then such events make up at least 8% of the CMT catalog. Globally, these triggered earthquakes obey an Omori law rate decay that lasts between ~7-11 years after the main shock. Earthquakes associated with calculated shear stress increases occur at higher rates than background up to 240 km away from the main shock centroid. Omori's law is one of the few time-predictable patterns evident in the global occurrence of earthquakes. If large triggered earthquakes habitually obey Omori's law, then their hazard can be more readily assessed. The characteristic rate change with time and spatial distribution can be used to rapidly assess the likelihood of triggered earthquakes following events of Ms ≥ 7.0. I show an example application to the M = 7.7 13 January 2001 El Salvador earthquake where use of global statistics appears to provide a better rapid hazard estimate than Coulomb stress change calculations.
Implications of Secondary Aftershocks for Failure Processes
NASA Astrophysics Data System (ADS)
Gross, S. J.
2001-12-01
When a seismic sequence with more than one mainshock or an unusually large aftershock occurs, there is a compound aftershock sequence. The secondary aftershocks need not have exactly the same decay as the primary sequence, with the differences having implications for the failure process. When the stress step from the secondary mainshock is positive but not large enough to cause immediate failure of all the remaining primary aftershocks, failure processes which involve accelerating slip will produce secondary aftershocks that decay more rapidly than primary aftershocks. This is because the primary aftershocks are an accelerated version of the background seismicity, and secondary aftershocks are an accelerated version of the primary aftershocks. Real stress perturbations may be negative, and heterogeneities in mainshock stress fields mean that the real-world situation is quite complicated. I will first describe and verify my picture of secondary aftershock decay with reference to a simple numerical model of slipping faults which obeys rate- and state-dependent friction and lacks stress heterogeneity. With such a model, it is possible to generate secondary aftershock sequences with perturbed decay patterns, quantify those patterns, and develop an analysis technique capable of correcting for the effect in real data. The secondary aftershocks are defined in terms of frequency-linearized time s(T), which is equal to the number of primary aftershocks expected by a time T, s ≡ ∫_{t=0}^{T} n(t) dt, where the start time t = 0 is the time of the primary aftershock, and the primary aftershock decay function n(t) is extrapolated forward to the times of the secondary aftershocks. In the absence of secondary sequences the function s(T) rescales the time so that approximately one event occurs per new time unit; the aftershock sequence is gone. If this rescaling is applied in the presence of a secondary sequence, the secondary sequence is shaped like a primary aftershock sequence.
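The frequency-linearized time s(T) defined above is just the integral of the extrapolated primary decay function. A sketch assuming a modified Omori form for n(t), with invented parameters:

```python
import math

def s_of_T(T, K, c, p):
    """Frequency-linearized time s(T) = integral from 0 to T of n(t) dt,
    assuming the modified Omori decay n(t) = K / (t + c)**p."""
    if p == 1.0:
        return K * math.log((T + c) / c)
    return K / (1.0 - p) * ((T + c) ** (1.0 - p) - c ** (1.0 - p))

# Invented parameters: in s-units, equal increments carry equal expected
# numbers of primary aftershocks, so the decaying sequence looks stationary.
s10 = s_of_T(10.0, K=30.0, c=0.1, p=1.0)
```

Mapping each secondary-aftershock time T to s(T) is what flattens the primary sequence and exposes the secondary sequence's own Omori shape.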
Can current New Madrid seismicity be explained as a decaying aftershock sequence?
NASA Astrophysics Data System (ADS)
Page, M. T.; Hough, S. E.; Felzer, K. R.
2012-12-01
It has been suggested that continuing seismicity in the New Madrid region of the central U.S. is primarily the long-lived aftershock sequence of the 1811-1812 events, and thus cannot be taken as an indication of present-day strain accrual in the region. We examine historical and instrumental seismicity in the New Madrid region to determine if such a model is feasible given 1) the observed protracted nature of past New Madrid sequences, with multiple mainshocks of apparently similar magnitudes; 2) the rate of historically documented early aftershocks of the 1811-1812 sequence; and 3) plausible mainshock magnitudes and aftershock-productivity parameters. We use ETAS modeling to search for sub-critical sets of direct Omori parameters that are consistent with all of these datasets, given a realistic consideration of their uncertainties, and with current seismicity in the region. The results of this work will help to determine whether or not future sequences are likely to be clusters of events like those in the past, a key issue for earthquake response planning.
Insights on earthquake triggering processes from early aftershocks of repeating microearthquakes
NASA Astrophysics Data System (ADS)
Lengliné, O.; Ampuero, J.-P.
2015-10-01
Characterizing the evolution of seismicity rate of early aftershocks can yield important information about earthquake nucleation and triggering. However, this task is challenging because early aftershock seismic signals are obscured by those of the mainshock. Previous studies of early aftershocks employed high-pass filtering and template matching but had limited performance and completeness at very short times. Here we take advantage of repeating events previously identified on the San Andreas Fault at Parkfield and apply empirical Green's function deconvolution techniques. Both Landweber and sparse deconvolution methods reveal the occurrence of aftershocks as early as a few tenths of a second after the mainshock. These events occur close to their mainshock, within one to two rupture lengths away. The aftershock rate derived from this enhanced catalog is consistent with Omori's law, with no flattening of the aftershock rate down to the shortest resolvable timescale ˜0.3 s. The early aftershock rate decay determined here seamlessly matches the decay at later times derived from the original earthquake catalog, yielding a continuous aftershock decay over timescales spanning nearly 8 orders of magnitude. Aftershocks of repeating microearthquakes may hence be governed by the same mechanisms from the earliest time resolved here, up to the end of the aftershock sequence. Our results suggest that these early aftershocks are triggered by relatively large stress perturbations, possibly induced by aseismic afterslip with very short characteristic time. Consistent with previous observations on bimaterial faults, the relative location of early aftershocks shows asymmetry along strike, persistent over long periods.
Model for the Distribution of Aftershock Interoccurrence Times
Shcherbakov, Robert; Yakovlev, Gleb; Rundle, John B.; Turcotte, Donald L.
2005-11-18
In this work the distribution of interoccurrence times between earthquakes in aftershock sequences is analyzed and a model based on a nonhomogeneous Poisson (NHP) process is proposed to quantify the observed scaling. In this model the generalized Omori's law for the decay of aftershocks is used as a time-dependent rate in the NHP process. The analytically derived distribution of interoccurrence times is applied to several major aftershock sequences in California to confirm the validity of the proposed hypothesis.
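The nonhomogeneous Poisson construction described above can be sketched by simulating event times whose rate follows a modified Omori law, using Lewis-Shedler thinning; the parameter values below are illustrative, not those fitted in the paper:

```python
import numpy as np

def simulate_omori_nhp(K=100.0, c=0.1, p=1.2, t0=1.0, t_end=100.0, seed=0):
    """Draw aftershock times on [t0, t_end] from a nonhomogeneous Poisson
    process with rate K/(t+c)^p, via Lewis-Shedler thinning. Because the
    rate is decreasing, its value at t0 bounds it on the whole interval."""
    rng = np.random.default_rng(seed)
    rate = lambda t: K / (t + c) ** p
    lam_max = rate(t0)
    times, t = [], t0
    while True:
        t += rng.exponential(1.0 / lam_max)   # candidate from the bounding process
        if t >= t_end:
            break
        if rng.random() < rate(t) / lam_max:  # accept with probability rate/lam_max
            times.append(t)
    return np.array(times)

events = simulate_omori_nhp()
inter = np.diff(events)   # interoccurrence times, the quantity analyzed in the paper
```

The empirical distribution of `inter` can then be compared against the analytic interoccurrence-time distribution derived from the NHP assumption.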
Chan, C.-H.; Stein, R.S.
2009-01-01
We explore how Coulomb stress transfer and viscoelastic relaxation control afterslip and aftershocks in a continental thrust fault system. The 1999 September 21 Mw = 7.6 Chi-Chi shock is typical of continental ramp-décollement systems throughout the world, and so inferences drawn from this uniquely well-recorded event may be widely applicable. First, we find that the spatial and depth distribution of aftershocks and their focal mechanisms are consistent with the calculated Coulomb stress changes imparted by the coseismic rupture. Some 61 per cent of the M ≥ 2 aftershocks and 83 per cent of the M ≥ 4 aftershocks lie in regions for which the Coulomb stress increased by ≥0.1 bars, and there is an 11-12 per cent gain in the percentage of aftershock nodal planes on which the shear stress increased over the pre-Chi-Chi control period. Second, we find that afterslip occurred where the calculated coseismic stress increased on the fault ramp and décollement, subject to the condition that friction is high on the ramp and low on the décollement. Third, viscoelastic relaxation is evident from the fit of the post-seismic GPS data on the footwall. Fourth, we find that the rate of seismicity began to increase during the post-seismic period in an annulus extending east of the main rupture. The spatial extent of the seismicity annulus resembles the calculated ≥0.05-bar Coulomb stress increase caused by viscoelastic relaxation and afterslip, and we find a 9-12 per cent gain in the percentage of focal mechanisms with >0.01-bar shear stress increases imparted by the post-seismic afterslip and relaxation in comparison to the control period. Thus, we argue that post-seismic stress changes can for the first time be shown to alter the production of aftershocks, as judged by their rate, spatial distribution, and focal mechanisms. © Journal compilation © 2009 RAS.
Modeling aftershocks as a stretched exponential relaxation
NASA Astrophysics Data System (ADS)
Mignan, A.
2015-11-01
The decay rate of aftershocks has been modeled as a power law since the pioneering work of Omori in the late nineteenth century. Although other expressions have been proposed in recent decades to describe the temporal behavior of aftershocks, the number of model comparisons remains limited. After reviewing the aftershock models published from the late nineteenth century until today, I solely compare the power law, pure exponential and stretched exponential expressions defined in their simplest forms. By applying statistical methods recommended recently in applied mathematics, I show that all aftershock sequences tested in three regional earthquake catalogs (Southern and Northern California, Taiwan) and with three declustering techniques (nearest-neighbor, second-order moment, window methods) follow a stretched exponential instead of a power law. These results imply that aftershocks are due to a simple relaxation process, in accordance with most other relaxation processes observed in nature.
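The model-selection step can be illustrated on synthetic data. In this sketch the "rates" are generated from a stretched exponential with arbitrary parameters, a power law is fitted as a line in log-log space, and the stretched exponential is fitted by a coarse grid search standing in for the statistical machinery used in the study:

```python
import numpy as np

# Synthetic aftershock rates from a stretched exponential with arbitrary
# illustrative parameters (A=100, tau=5 days, beta=0.5).
t = np.logspace(-1, 2, 50)            # days after the mainshock
y = 100.0 * np.exp(-(t / 5.0) ** 0.5)

def stretched_exp(t, A, tau, beta):
    return A * np.exp(-(t / tau) ** beta)

# Power law fitted as a straight line in log-log space.
slope, intercept = np.polyfit(np.log(t), np.log(y), 1)
pl_pred = np.exp(intercept) * t ** slope
sse_pl = np.sum((np.log(y) - np.log(pl_pred)) ** 2)

# Stretched exponential fitted by a coarse grid search over (A, tau, beta).
best = min(
    ((np.sum((np.log(y) - np.log(stretched_exp(t, A, tau, beta))) ** 2), A, tau, beta)
     for A in (80.0, 100.0, 120.0)
     for tau in (2.0, 5.0, 10.0)
     for beta in (0.3, 0.5, 0.7, 1.0)),
    key=lambda r: r[0],
)
sse_se = best[0]
```

On data that really follow a stretched exponential, the power-law fit leaves systematic curvature in the log-log residuals, which is the signature the model comparison exploits.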
A generalized law for aftershock rates in a damage rheology model
NASA Astrophysics Data System (ADS)
Ben Zion, Y.; Lyakhovsky, V.
2003-12-01
Aftershocks are the response of the damaged rock surrounding large earthquake ruptures to the stress perturbations produced by the large events. Lyakhovsky et al. [JGR, 1997] developed a damage rheology model that provides a quantitative treatment for macroscopic effects of evolving distributed cracking, with local density represented by a state variable α. The equation for damage evolution, based on the balance equations of energy and entropy and a generalization of linear elasticity, accounts for both degradation and healing as a function of the existing strain tensor and material properties that may be constrained by lab data (rate coefficients and the ratio of strain invariants separating states of degradation and healing). Analyses of stress-strain and acoustic emission laboratory data during deformation leading to brittle failure further indicate [Liu et al., AGU, F01; Hamiel et al., this meeting] that the fit between model predictions and observations improves if we also incorporate gradual accumulation of non-reversible deformation at a rate proportional to the rate of damage increase. For the analysis of aftershocks, we consider the relaxation process of a material following the application of a strain step associated with the occurrence of a mainshock. The coupled differential equations governing the damage evolution and stress relaxation can be written in non-dimensional form by scaling the elastic stress to its initial value and the time to the characteristic time of damage evolution, td. With this, the system behavior is controlled by a single non-dimensional ratio R = td/tM, the ratio of the damage timescale to the Maxwell relaxation time tM. For very small R there is no relaxation and the response consists of a constant rate of damage increase until failure. For very large R there is rapid relaxation without significant change to the level of damage. For intermediate cases the equations are strongly coupled and nonlinear. The analytical solution
NASA Astrophysics Data System (ADS)
Cocco, M.; Hainzl, S.; Woessner, J.; Enescu, B.; Catalli, F.; Lombardi, A.
2009-12-01
space. Second, we demonstrate that all model parameters are strongly correlated for physical and statistical reasons. We discuss this correlation, emphasizing that the estimates of the background seismicity rate, the stressing rate and the Aσ parameter are strongly correlated in reproducing the observed aftershock productivity. Our results demonstrate the impact of these model parameters on the Omori-like aftershock decay (the c-value and the productivity of the Omori law), implying a p-value smaller than or equal to 1. Finally, we discuss an optimal strategy to constrain model parameters for near-real-time forecasts. Our case studies demonstrate that accounting for realistic uncertainties in stress changes, as well as for the correlation among model parameters, strongly improves the forecasting performance, although the original deterministic approach is converted into a statistical method.
NASA Astrophysics Data System (ADS)
Narteau, C.; Shebalin, P.; Holschneider, M.; Schorlemmer, D.
2009-04-01
In the Limited Power Law model (LPL) we consider that after a triggering event - the so-called mainshock - rocks subject to sufficiently large differential stress can fail spontaneously by static fatigue. Then, early aftershocks occur in zones of highest stress, and the c-value, i.e. the delay before the onset of the power-law aftershock decay rate, depends on the amplitude of the stress perturbation in the aftershock zone. If we assume that this stress perturbation is proportional to the absolute level of stress in the area, the model also predicts that shorter delays occur in zones of higher stress. Here, we present two analyses that support this prediction. In these analyses, we use only aftershocks of 2.5 < M < 4.5 earthquakes to avoid well-known artifacts resulting from overlapping records. First, we analyze the c-value across different types of faulting in southern California to compare with the differential shear stress predicted by a Mohr-Coulomb failure criterion. As expected, we find that the c-value is on average shorter for thrust earthquakes (high stress) than for normal ones (low stress), taking intermediate values for strike-slip earthquakes (intermediate stress). Second, we test the hypothesis that large earthquakes occur in zones where the level of stress is abnormally high. Instead of the c-value we use the ⟨t⟩-value, the geometric average of early aftershock times. Once again, we observe that M > 5 earthquakes occur where and when the ⟨t⟩-value is small. This effect is even stronger for M > 6 earthquakes.
How Long is an Aftershock Sequence?
NASA Astrophysics Data System (ADS)
Godano, Cataldo; Tramelli, Anna
2016-06-01
The occurrence of a mainshock is always followed by aftershocks spatially distributed within the fault area. The decay of the aftershock rate with time is described by the empirical Omori law, which was inferred from catalogue analysis. Discriminating sequences within catalogues is not a straightforward operation, especially for low-magnitude mainshocks. Here, we describe the Omori-law rate decay obtained using different sequence-discrimination tools and we find that, when the background seismicity is excluded, the sequences tend to last for the temporal extent of the catalogue.
NASA Astrophysics Data System (ADS)
Gasperini, Paolo; Lolli, Barbara
2006-06-01
We analyzed the correlations among the parameters of the Reasenberg and Jones [Reasenberg, P.A., Jones, L.M., 1989. Earthquake hazard after a mainshock in California, Science 243, 1173-1176] formula, which describes the aftershock rate after a mainshock as a function of time and magnitude, on the basis of parameter estimates made in previous works for New Zealand, Italy and California. For all three datasets we found that the magnitude-independent productivity a is significantly correlated with the b-value of the Gutenberg-Richter law and, in some cases, with parameters p and c of the modified Omori law. We also found significant correlations between p and c but, in contrast to some previous works, not between p and b. We verified that assuming a coefficient for mainshock magnitude α ≈ 2b/3 (instead of b) removes the correlation between a and b and improves the ability to forecast the behavior of Italian sequences that occurred from 1997 to 2003 on the basis of average parameters estimated from sequences that occurred from 1981 to 1996. This assumption agrees well with direct α estimates made in the framework of an epidemic-type model (ETAS) from the data of some large Italian sequences. Our results suggest a modification of the original Reasenberg and Jones (1989) formulation, leading to predictions of lower rates (and probabilities) for stronger mainshocks and, conversely, higher rates for weaker ones. We also inferred that the correlation of a with p and c might be a consequence of the trade-off between the two parameters of the modified Omori law. In this case the correlation can be partially removed by renormalizing the time-dependent part of the rate equation. Finally, the absence of correlation between p and b, observed for all the examined datasets, indicates that such a correlation, previously inferred from theoretical considerations and empirical results in some regions, does not represent a common property of aftershock sequences in different parts of the world.
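The Reasenberg-Jones rate, λ(t, M) = 10^(a + b(Mm − M)) / (t + c)^p, and the effect of replacing the mainshock-magnitude coefficient b with α ≈ 2b/3 can be sketched as follows; the default a, b, c, p values are generic illustrative numbers, not the fitted parameters from this study:

```python
def rj_rate(t, M, Mm, a=-1.67, b=0.91, c=0.05, p=1.08, alpha=None):
    """Reasenberg-Jones rate of aftershocks with magnitude >= M at time t
    (days) after a mainshock of magnitude Mm. The original formulation uses
    alpha = b; the modification discussed sets alpha ~ 2b/3. Defaults for
    a, b, c, p are illustrative generic values."""
    if alpha is None:
        alpha = b                      # original Reasenberg-Jones form
    return 10.0 ** (a + alpha * Mm - b * M) / (t + c) ** p

alpha_mod = 2.0 * 0.91 / 3.0
# The ratio of modified to original rates, 10^((alpha - b) * Mm), shrinks as
# the mainshock magnitude grows: relatively lower rates for stronger mainshocks.
ratio_m5 = rj_rate(1.0, 3.0, 5.0, alpha=alpha_mod) / rj_rate(1.0, 3.0, 5.0)
ratio_m7 = rj_rate(1.0, 3.0, 7.0, alpha=alpha_mod) / rj_rate(1.0, 3.0, 7.0)
```

Note that the productivity constant a would be re-estimated when α changes, so only the relative dependence on Mm is shown here, not absolute rate levels.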
Forecasting magnitude, time, and location of aftershocks for aftershock hazard
NASA Astrophysics Data System (ADS)
Chen, K.; Tsai, Y.; Huang, M.; Chang, W.
2011-12-01
In this study we investigate the spatial and temporal seismicity parameters of the aftershock sequence accompanying the 17:47 20 September 1999 (UTC) magnitude-7.45 Chi-Chi, Taiwan, earthquake. Dividing the epicentral zone into sections north of the epicenter, at the epicenter, and south of the epicenter, it is found that immediately after the earthquake the area close to the epicenter had a lower value than both the northern and southern sections. This pattern suggests that at the time of the Chi-Chi earthquake, the area close to the epicenter remained prone to large-magnitude aftershocks and strong shaking. However, with time the value increases. An increasing value indicates a reduced likelihood of large-magnitude aftershocks. The study also shows that the value is higher in the southern section of the epicentral zone, indicating a faster rate of decay in this section. The primary purpose of this paper is to design a predictive model for forecasting the magnitude, time, and location of aftershocks of large earthquakes. The developed model is presented and applied to the 17:47 20 September 1999 magnitude-7.45 Chi-Chi, Taiwan, earthquake, and to the 09:32 5 November 2009 (UTC) magnitude-6.19 Nantou and 00:18 4 March 2010 (UTC) magnitude-6.49 Jiashian earthquake sequences. In addition, peak ground acceleration trends for the Nantou and Jiashian aftershock sequences are predicted and compared to actual trends. The results of the estimated peak ground acceleration are remarkably similar to calculations from recorded magnitudes in both trend and level. To improve the predictive skill of the model for occurrence time, we use an empirical relation to forecast the time of aftershocks. The empirical relation improves time prediction over that of random processes. The results will be of interest to seismic mitigation specialists and rescue crews. We also apply the parameters and empirical relation from the Chi-Chi aftershocks of Taiwan to forecast aftershocks with magnitude M > 6.0 of the 05:46 11 March 2011 (UTC) Tohoku 9
NASA Astrophysics Data System (ADS)
Hainzl, S.; Fischer, T.; Čermáková, H.; Bachura, M.; Vlček, J.
2016-04-01
The West Bohemia/Vogtland region, central Europe, is well known for its repeating swarm activity. However, the latest activity in 2014, although spatially overlapping with previous swarm activity, consisted of three classical aftershock sequences triggered by ML3.5, 4.4, and 3.5 events. To decode the apparent system change from swarm-type to mainshock-aftershock characteristics, we have analyzed the details of the major ML4.4 sequence based on focal mechanisms and relocated earthquake data. Our analysis shows that the mainshock occurred with rotated mechanism in a step over region of the fault plane, unfavorably oriented to the regional stress field. Most of its intense aftershock activity occurred in-plane with classical characteristics such as (i) the maximum magnitude of the aftershocks is significantly less than the mainshock magnitude and (ii) the decay can be well fitted by the Omori-Utsu law. However, the absolute number of aftershocks and the fitted Omori-Utsu c and p parameters are much larger than for typical sequences. By means of the epidemic-type aftershock sequence model, we show that an additional aseismic source with an exponentially decaying strength triggered a large fraction of the aftershocks. Corresponding pore pressure simulations with an exponentially decreasing flow rate of the fluid source show a good agreement with the observed spatial migration front of the aftershocks extending approximately with log(t). Thus, we conclude that the mainshock opened fluid pathways from a finite fluid source into the fault plane explaining the unusual high rate of aftershocks, the migration patterns, and the exponential decrease of the aseismic signal.
Evolution of aftershock statistics with depth
NASA Astrophysics Data System (ADS)
Narteau, C.; Shebalin, P.; Holschneider, M.
2013-12-01
The deviatoric stress varies with depth and may strongly affect earthquake statistics. Nevertheless, although the Anderson faulting theory may be used to define the relative stress magnitudes, it remains extremely difficult to observe significant variations of earthquake properties from the top to the bottom of the seismogenic layer. Here, we concentrate on aftershock sequences in normal, strike-slip and reverse faulting regimes to isolate specific temporal properties of this major relaxation process with respect to depth. More exactly, we use Bayesian statistics of the modified Omori law to characterize the exponent p of the power-law aftershock decay rate and the duration c of the early stage of aftershock activity that does not fit this power-law regime. Preliminary results show that the c-value decreases with depth without any significant variation of the p-value. We then infer that the duration of the non-power-law aftershock decay rate over short times can be related to the level of stress in the seismogenic crust.
Leading aftershocks and cascades: two possible stress release processes after a main shock
NASA Astrophysics Data System (ADS)
Monterrubio, Marisol; Martinez, Maria-Dolors; Lana, Xavier
2010-05-01
Three series of aftershocks in Southern California, associated with the main shocks of Landers (1992), Northridge (1994) and Hector Mine (1999), are interpreted as the superposition of a lasting stress-relaxation process and numerous short episodes of sudden stress release. The set of aftershocks belonging to the lasting process are designated leading aftershocks, and their rate decays with time, fitting the classical Omori law well. The remaining aftershocks are assigned to the different episodes characterised by sudden release of stresses, each of them being designated a cascade. Cascades are characterised by four basic properties. First, the number of aftershocks belonging to a cascade shows considerable fluctuations in time. Nevertheless, a positive trend is observed in the number of aftershocks with respect to the elapsed time measured since the origin time of the main event. Second, the rate for aftershocks belonging to a cascade can be assumed constant. Third, a power law quantifies the rate for every cascade, with the elapsed time from the main event to the beginning of the cascade being the argument of this power law. Fourth, the validity of the Gutenberg-Richter law is preserved both for the set of leading aftershocks and for the set of tremors associated with cascades. Given that the number of available aftershocks for the three seismic crises is very high (exceeding 10,000 tremors), a detailed analysis of cascades is possible.
NASA Astrophysics Data System (ADS)
Stein, R. S.; Toda, S.
2014-12-01
A fundamental problem confronting hazard modelers in slowly deforming regions such as the central and eastern United States, Australia, and inner Honshu, is whether the current seismicity represents the steady state earthquake potential, or is instead a decaying potential associated with past mainshocks. If the current seismicity were composed of long-lived aftershock sequences, it might then be anti-correlated with the next large earthquakes. While aftershock productivity is known to be a property of the mainshock magnitude, aftershock duration (the time until the aftershock rate decays to the pre-mainshock rate) should, according to rate/state friction theory of Dieterich[1994], be inversely proportional to the fault stressing rate. If so, slowly deforming regions would be expected to sustain long aftershock sequences. Most tests have supported the Dieterich hypothesis, but use ambiguous proxies for the fault stressing rate, such as the mainshock recurrence interval. Here we test the hypothesis by examining off-fault aftershocks of the 2011 M=9 Tohoku-oki rupture up to 250 km from the source, as well as near-fault aftershocks of six large Japanese mainshocks, sampling a range of receiver faults, from thrusts slipping 80 mm/yr, to normal faults slipping 0.1 mm/yr. We find that aftershock sequences lasted a month on the fastest-slipping faults, have durations of 10-100 years on faults slipping 1-10 mm/yr, and are projected to persist for at least 200 years on the slowest faults. Although the Omori decay exponent for short and long sequences is similar, the very different background rates account for the duration differences. If the stressing rate is generally proportional to fault slip rate, then aftershock durations indeed support the Dieterich hypothesis. The test means that the hazard associated with aftershocks depends on local tectonic conditions rather than on the mainshock magnitude alone. Because declustering approaches do not remove such long
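The inverse dependence described above can be written as t_a = Aσ / stressing rate (Dieterich, 1994). A toy calculation, assuming Aσ ≈ 0.05 MPa and a simple proportionality between stressing rate and fault slip rate; both numbers are illustrative assumptions, not values from the abstract:

```python
def aftershock_duration_years(stressing_rate_mpa_per_yr, a_sigma_mpa=0.05):
    """Dieterich (1994): aftershock duration t_a = A*sigma / stressing rate.
    A*sigma = 0.05 MPa is an assumed order-of-magnitude value."""
    return a_sigma_mpa / stressing_rate_mpa_per_yr

# Hypothetical proportionality between stressing rate and fault slip rate:
k = 0.02                                        # MPa/yr per mm/yr of slip (assumed)
t_fast = aftershock_duration_years(k * 80.0)    # thrust slipping ~80 mm/yr
t_slow = aftershock_duration_years(k * 0.1)     # normal fault slipping ~0.1 mm/yr
```

Even with these rough numbers, the duration contrast between fast- and slow-slipping faults spans orders of magnitude, which is the qualitative behavior the observations test.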
Nonlinear Viscoelastic Stress Transfer As a Possible Aftershock Triggering Mechanism
NASA Astrophysics Data System (ADS)
Zhang, X.; Shcherbakov, R.
2014-12-01
The stress transfer function of linear viscoelasticity has a time-dependent exponential form, and the corresponding aftershock occurrence rate exhibits an exponential decay. The stress transfer function of nonlinear viscoelasticity has a time-dependent power-law form, which results in a power-law decay of the aftershock occurrence rate.
Ratios of heavy hadron semileptonic decay rates
Gronau, Michael; Rosner, Jonathan L.
2011-02-01
Ratios of charmed meson and baryon semileptonic decay rates appear to be satisfactorily described by considering only the lowest-lying (S-wave) hadronic final states and assuming the kinematic factor describing phase space suppression is the same as that for free quarks. For example, the rate for D_s semileptonic decay is known to be (17.0 ± 5.3)% lower than those for D^0 or D^+, and the model accounts for this difference. When applied to hadrons containing b quarks, this method implies that the B_s semileptonic decay rate is about 1% higher than that of the nonstrange B mesons. This small difference thus suggests surprisingly good local quark-hadron duality for B semileptonic decays, complementing the expectation based on inclusive quark-hadron duality that these differences in rates should not exceed a few tenths of a percent. For Λ_b semileptonic decay, however, the inclusive rate is predicted to be about 13% greater than that of the nonstrange B mesons. This value, representing a considerable departure from a calculation using a heavy-quark expansion, is close to the corresponding experimental ratio Γ(Λ_b)/Γ(B) = 1.13 ± 0.03 of total decay rates.
Aftershock Statistics of the 1999 Chi-Chi, Taiwan Earthquake and the Concept of Omori Times
NASA Astrophysics Data System (ADS)
Lee, Ya-Ting; Turcotte, Donald L.; Rundle, John B.; Chen, Chien-Chih
2013-01-01
In this paper we consider the statistics of the aftershock sequence of the m = 7.65 20 September 1999 Chi-Chi, Taiwan earthquake. We first consider the frequency-magnitude statistics. We find good agreement with Gutenberg-Richter scaling but find that the aftershock level is anomalously high. This level is quantified using the difference in magnitude between the main shock and the largest inferred aftershock, Δm*. Typically, Δm* is in the range 0.8-1.5, but for the Chi-Chi earthquake the value is Δm* = 0.03. We suggest that this may be due to an aseismic slow-earthquake component of rupture. We next consider the decay rate of aftershock activity following the earthquake. The rates are well approximated by the modified Omori's law. We show that the distribution of interoccurrence times between aftershocks follows a nonhomogeneous Poisson process. We introduce the concept of Omori times to study the merging of the aftershock activity with the background seismicity. The Omori time is defined to be the mean interoccurrence time over a fixed number of aftershocks.
Exploring aftershock properties with depth using Bayesian statistics
NASA Astrophysics Data System (ADS)
Narteau, Clement; Shebalin, Peter; Holschneider, Matthias
2013-04-01
Stress magnitudes and frictional faulting properties vary with depth and may strongly affect earthquake statistics. Nevertheless, although the Anderson faulting theory may be used to define the relative stress magnitudes, it remains extremely difficult to observe significant variations of earthquake properties from the top to the bottom of the seismogenic layer. Here, we concentrate on aftershock sequences in normal, strike-slip and reverse faulting regimes to isolate specific temporal properties of this major relaxation process with respect to depth. More exactly, we use Bayesian statistics of the modified Omori law to characterize the exponent p of the power-law aftershock decay rate and the duration c of the early stage of aftershock activity that does not fit this power-law regime. Preliminary results show that the c-value decreases with depth without any significant variation of the p-value. We then infer that the duration of the non-power-law aftershock decay rate over short times can be related to the level of stress in the seismogenic crust.
Power spectrum analyses of nuclear decay rates
NASA Astrophysics Data System (ADS)
Javorsek, D.; Sturrock, P. A.; Lasenby, R. N.; Lasenby, A. N.; Buncher, J. B.; Fischbach, E.; Gruenwald, J. T.; Hoft, A. W.; Horan, T. J.; Jenkins, J. H.; Kerford, J. L.; Lee, R. H.; Longman, A.; Mattes, J. J.; Morreale, B. L.; Morris, D. B.; Mudry, R. N.; Newport, J. R.; O'Keefe, D.; Petrelli, M. A.; Silver, M. A.; Stewart, C. A.; Terry, B.
2010-10-01
We provide the results from a spectral analysis of nuclear decay data displaying annually varying periodic fluctuations. The analyzed data were obtained from three distinct data sets: 32Si and 36Cl decays reported by an experiment performed at the Brookhaven National Laboratory (BNL), 56Mn decay reported by the Children's Nutrition Research Center (CNRC), but also performed at BNL, and 226Ra decay reported by an experiment performed at the Physikalisch-Technische Bundesanstalt (PTB) in Germany. All three data sets exhibit the same primary frequency mode consisting of an annual period. Additional spectral comparisons of the data to local ambient temperature, atmospheric pressure, relative humidity, Earth-Sun distance, and their reciprocals were performed. No common phases were found between the factors investigated and those exhibited by the nuclear decay data. This suggests that either a combination of factors was responsible, or that, if it was a single factor, its effects on the decay rate experiments are not a direct synchronous modulation. We conclude that the annual periodicity in these data sets is a real effect, but that further study involving additional carefully controlled experiments will be needed to establish its origin.
Quantifying Early Aftershock Activity of the 2004 Mid Niigata Prefecture Earthquake (Mw6.6)
NASA Astrophysics Data System (ADS)
Enescu, B.; Mori, J.; Miyazawa, M.
2006-12-01
We analyse the early aftershock activity of the 2004 Mid Niigata earthquake, using both earthquake catalog data and continuous waveform recordings. The frequency-magnitude distribution analysis of the Japan Meteorological Agency (JMA) catalog shows that the magnitude of completeness of the aftershocks changes from values around 5.0, immediately after the mainshock, to about 1.8, twelve hours later. Such a large incompleteness of early events can significantly bias the estimation of aftershock rates. To better determine the temporal pattern of aftershocks in the first minutes after the Niigata earthquake, we analyse the continuous seismograms recorded at six Hi-Net (High Sensitivity Seismograph Network) stations located close to the aftershock distribution. Clear aftershocks can be seen from about 35 s after the mainshock. We use events that are both identified on the filtered waveforms and listed in the JMA catalogue to calibrate an amplitude-magnitude relation. We estimate that the events picked on the waveforms recorded at two seismic stations (NGOH and YNTH), situated on opposite sides of the aftershock distribution, are complete above a threshold magnitude of 3.4. The c-value determined by taking these events into account is about 0.003 days (4.3 min). Statistical tests demonstrate that a small, but non-zero, c-value is a reliable result. We also analyse the decay with time of the moment release rates of the aftershocks in the JMA catalog, since these rates should be much less influenced by the missing small events. The moment rates follow a power-law time dependence from a few minutes to months after the mainshock. Finally, we show that the rate- and state-dependent friction law or stress corrosion could explain our findings well.
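Estimating a small but nonzero c of this kind is typically done by maximizing the modified Omori log-likelihood. A minimal sketch with the productivity K profiled out analytically; the synthetic, noise-free quantile times below stand in for a real catalogue, and all parameter values are illustrative:

```python
import numpy as np

def omori_loglik(times, c, p, T):
    """Profile log-likelihood of the rate K/(t+c)^p on [0, T], with the
    productivity K replaced by its analytic maximizer K_hat = N / I,
    where I is the integral of (t+c)^(-p) over [0, T]."""
    n = len(times)
    if abs(p - 1.0) < 1e-9:
        integral = np.log(T + c) - np.log(c)
    else:
        integral = ((T + c) ** (1 - p) - c ** (1 - p)) / (1 - p)
    k_hat = n / integral
    return n * np.log(k_hat) - p * np.sum(np.log(times + c)) - n

# Synthetic event times: quantiles of a sequence with K=100, c=0.1, p=1.2.
K, c_true, p_true, T = 100.0, 0.1, 1.2, 1.0
A = K / (1 - p_true)
lam = lambda t: A * ((t + c_true) ** (1 - p_true) - c_true ** (1 - p_true))
n_events = int(lam(T))
quantiles = np.arange(n_events) + 0.5
times = (c_true ** (1 - p_true) + quantiles * (1 - p_true) / K) ** (1 / (1 - p_true)) - c_true
```

In practice one maximizes `omori_loglik` over (c, p), by grid search or a numerical optimizer, and tests whether the maximizing c differs significantly from zero.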
Halley's comet - Its size and decay rate
NASA Astrophysics Data System (ADS)
Wallis, M. K.; Wickramasinghe, N. C.
1985-09-01
The outgassing rates inferred from the 1910 apparition and the brightness decay over the previous two millennia are compatible with the minimum nuclear brightness currently observed if the comet nucleus is small, 1.8-2.7 km in radius with an albedo of 0.1-0.2. Outgassing is faster than from a bare nucleus of dirty H2O ice, which is attributed either to a hot microdust coma or to an organic polymer composition. Halley's comet will decay away within another 45-65 apparitions.
NASA Astrophysics Data System (ADS)
Johnson, C. W.; Totten, E. J.; Burgmann, R.
2015-12-01
To improve understanding of the link between injection/production activity and seismicity, we apply an Epidemic Type Aftershock Sequence (ETAS) model to an earthquake catalog from The Geysers geothermal field (GGF) between 2005-2015, using >140,000 events with a completeness magnitude Mc of 0.8. We partition the catalog along a northeast-southwest trending divide, which corresponds to regions of high and low levels of enhanced geothermal stimulation (EGS) across the field. The ETAS model is fit to the seismicity data using a 6-month sliding window with a 1-month time step to determine the background seismicity rate. We generate monthly time series of the time-dependent background seismicity rate in 1-km depth intervals from 0-5 km. The average wellhead depth is 2-3 km, and the background seismicity rates above this depth do not correlate well with field-wide injected masses over the time period of interest. The autocorrelation results show a 12-month period: the monthly time series proximal to the average wellhead depths (2-3 km and 3-4 km) for the northwest GGF strongly correlate with field-wide fluid injection masses, with a four-month phase shift between the two depth intervals as fluid migrates deeper. This periodicity is not observed for the deeper depth interval of 4-5 km, where monthly background seismicity rates reduce to near zero. Cross-correlation analysis using the monthly time series for the background seismicity rate and the field-wide injection, production, and net injection (injection minus production) suggests that injection most directly modulates seismicity. Periodicity in the background seismicity is not observed as strongly in the time series for the southeast field. We suggest that the variation in background seismicity rate is a proxy for pore-pressure diffusion of injected fluids at depth. We deduce that the contrast between the background seismicity rates in the northwest and southeast GGF is a result of reduced EGS activity in the southeast region.
International Aftershock Forecasting: Lessons from the Gorkha Earthquake
NASA Astrophysics Data System (ADS)
Michael, A. J.; Blanpied, M. L.; Brady, S. R.; van der Elst, N.; Hardebeck, J.; Mayberry, G. C.; Page, M. T.; Smoczyk, G. M.; Wein, A. M.
2015-12-01
Following the M7.8 Gorkha, Nepal, earthquake of April 25, 2015, the USGS issued a series of aftershock forecasts. The initial impetus for these forecasts was a request from the USAID Office of US Foreign Disaster Assistance to support their Disaster Assistance Response Team (DART), which coordinated US Government disaster response, including search and rescue, with the Government of Nepal. Because of the possible utility of the forecasts to people in the region and other response teams, the USGS released these forecasts publicly through the USGS Earthquake Program web site. The initial forecast used the Reasenberg and Jones (Science, 1989) model with generic parameters developed for active deep continental regions based on the Garcia et al. (BSSA, 2012) tectonic regionalization. These were then updated to reflect a lower productivity and higher decay rate based on the observed aftershocks, although reliance on teleseismic observations, with their high magnitude of completeness, limited the amount of data. After the 12 May M7.3 aftershock, the forecasts used an Epidemic Type Aftershock Sequence model to better characterize the multiple sources of earthquake clustering. This model provided better estimates of aftershock uncertainty. The forecast messages were crafted based on lessons learned from the Christchurch earthquake, along with input from the U.S. Embassy staff in Kathmandu. Challenges included how to balance simple messaging with forecasts over a variety of time periods (week, month, and year), whether to characterize probabilities with words such as those suggested by the IPCC (IPCC, 2010), how to word the messages in a way that would translate accurately into Nepali and not alarm the public, and how to present the probabilities of unlikely but possible large and potentially damaging aftershocks, such as the M7.3 event, which had an estimated probability of only 1-in-200 for the week in which it occurred.
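The Reasenberg and Jones (1989) model mentioned above combines Gutenberg-Richter and Omori scaling into a single aftershock rate, from which weekly or monthly probabilities follow. A hedged sketch of that calculation (the generic parameter defaults below are the widely quoted California values, used only as placeholders, not the calibrated values the USGS applied to Nepal):

```python
import numpy as np

def expected_aftershocks(m_main, m_min, t1, t2, a=-1.67, b=0.91, c=0.05, p=1.08):
    """Expected number of aftershocks with M >= m_min in the window [t1, t2]
    (days after the mainshock), from the Reasenberg-Jones rate
    10**(a + b*(m_main - m)) * (t + c)**(-p), integrated over time."""
    rate_coeff = 10.0 ** (a + b * (m_main - m_min))
    integral = ((t2 + c) ** (1 - p) - (t1 + c) ** (1 - p)) / (1 - p)
    return rate_coeff * integral

def prob_at_least_one(m_main, m_min, t1, t2, **kw):
    """Poisson probability of one or more such aftershocks."""
    return 1.0 - np.exp(-expected_aftershocks(m_main, m_min, t1, t2, **kw))

# Weekly probability of an M >= 7.3 aftershock, weeks after an M 7.8 mainshock.
p_week = prob_at_least_one(7.8, 7.3, t1=17.0, t2=24.0)
```

With these generic parameters the weekly probability of a near-mainshock-sized aftershock comes out at the percent level, the same order as the 1-in-200 figure quoted in the abstract.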
Triggering cascades and statistical properties of aftershocks
NASA Astrophysics Data System (ADS)
Gu, C.; Davidsen, J.
2011-12-01
Applying a recently introduced general statistical procedure for identifying aftershocks based on complex network theory, we investigate the statistical properties of aftershocks for a high-resolution earthquake catalog covering Southern California. In comparison with earlier studies of aftershock sequences, we show that many features depend sensitively on how one defines aftershocks and whether one includes only first-generation aftershocks or also takes all indirectly triggered aftershocks into account. This includes the temporal variation in the rate of aftershocks for mainshocks of small magnitude, for example, as well as the variation in the rate of aftershocks for short to intermediate times after a mainshock. Other features are, however, robust, indicating that they truly characterize aftershock sequences. These include the p-values in the Omori-Utsu law for large mainshocks, Båth's law, and the productivity law with an exponent smaller than the b-value in the Gutenberg-Richter law. We also find that, for large mainshocks, the dependence of the parameters in the Omori-Utsu law on the lower magnitude cut-off is in excellent agreement with a recent proposition based on Båth's law and the Gutenberg-Richter law, giving rise to a generalized Omori-Utsu law. Our analysis also provides evidence that the exponent p in the Omori-Utsu law does not vary significantly with mainshock magnitude.
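The Omori-Utsu law discussed here gives the aftershock rate as a power-law decay in time, and the magnitude cut-off dependence enters through Gutenberg-Richter rescaling of the productivity. A minimal illustration (all constants are arbitrary, not fitted values from the study):

```python
import numpy as np

def omori_utsu_rate(t, K, c, p):
    """Modified Omori (Omori-Utsu) aftershock rate n(t) = K / (t + c)**p."""
    return K / (t + c) ** p

def rescaled_K(K_ref, b, m_cut, m_ref):
    """Illustrative Gutenberg-Richter rescaling of the productivity K when
    the lower magnitude cut-off is raised from m_ref to m_cut; this is the
    kind of cut-off dependence the generalized Omori-Utsu law encodes."""
    return K_ref * 10.0 ** (-b * (m_cut - m_ref))

t = np.logspace(-2, 2, 50)                       # days after the mainshock
rates = omori_utsu_rate(t, K=100.0, c=0.05, p=1.1)
K_higher_cut = rescaled_K(100.0, b=1.0, m_cut=3.0, m_ref=2.0)  # 10x fewer events
```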
Studies of the South Napa Earthquake Aftershocks
NASA Astrophysics Data System (ADS)
Turcotte, D. L.; Shcherbakov, R.; Yikilmaz, M. B.; Kellogg, L. H.; Rundle, J. B.
2014-12-01
In this paper we present studies of the aftershock sequence of the 24 August, 2014, M = 6.0 South Napa earthquake. We give the cumulative frequency-magnitude distributions of the aftershocks for several time intervals following the main shock. We give the magnitude of the largest aftershock (Båth's law) as well as the largest aftershock obtained from a Gutenberg-Richter fit to the frequency-magnitude data (modified form of Båth's law). The latter is a measure of the aftershock productivity. We also give the rates of occurrence of aftershocks as a function of time after the main shock for several magnitude ranges. The fit of these data to Omori's law is discussed. We compare the results of our study of the South Napa earthquake with our previous study of the aftershock statistics of the 28 September, 2004, M = 6.0 Parkfield earthquake. Specifically, we discuss differences that can be attributed to the large difference in recurrence intervals for the two earthquakes. We also present studies of the three-dimensional distribution of aftershock locations as a function of time and their association with the surface rupture. Aftershocks at large distances from the rupture zone are discussed, particularly those in the Geysers geothermal area.
Universal Distribution of Litter Decay Rates
NASA Astrophysics Data System (ADS)
Forney, D. C.; Rothman, D. H.
2008-12-01
Degradation of litter is the result of many physical, chemical, and biological processes. The high variability of these processes likely accounts for the progressive slowdown of decay with litter age. This age dependence is commonly thought to result from the superposition of processes with different decay rates k. Here we assume an underlying continuous yet unknown distribution p(k) of decay rates [1]. To seek its form, we analyze the mass-time history of 70 LIDET [2] litter data sets obtained under widely varying conditions. We construct a regularized inversion procedure to find the best-fitting distribution p(k) with the least degrees of freedom. We find that the resulting p(k) is universally consistent with a lognormal distribution, i.e., a Gaussian distribution of log k, characterized by a dataset-dependent mean and variance of log k. This result is supported by a recurring observation that microbial populations on leaves are log-normally distributed [3]. Simple biological processes cause the frequent appearance of the log-normal distribution in ecology [4]. Environmental factors, such as soil nitrate, soil aggregate size, soil hydraulic conductivity, total soil nitrogen, soil denitrification, and soil respiration, have all been observed to be log-normally distributed [5]. Litter degradation rates depend on many coupled, multiplicative factors, which provides a fundamental basis for the lognormal distribution. Using this insight, we systematically estimated the mean and variance of log k for 512 data sets from the LIDET study. We find the mean strongly correlates with temperature and precipitation, while the variance appears to be uncorrelated with main environmental factors and is thus likely more correlated with chemical composition and/or ecology. Results indicate the possibility that the distribution in rates reflects, at least in part, the distribution of microbial niches. [1] B. P. Boudreau, B. R. Ruddick, American Journal of Science, 291, 507, (1991). [2] M
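The superposition picture above, with a lognormal p(k), implies that the remaining mass is the Laplace transform of p(k), m(t) = E[exp(-kt)], and that the apparent decay rate slows with age. A short Monte Carlo sketch (the mu and sigma values are illustrative, not fitted LIDET parameters):

```python
import numpy as np

def remaining_fraction(t, mu, sigma, n=100_000, seed=0):
    """Fraction of litter mass remaining at times t when decay rates k are
    lognormal: m(t) = E[exp(-k t)], the Laplace transform of p(k),
    estimated by Monte Carlo over a sample of rates."""
    rng = np.random.default_rng(seed)
    k = rng.lognormal(mean=mu, sigma=sigma, size=n)
    return np.exp(-k * np.asarray(t)[:, None]).mean(axis=1)

t = np.linspace(0.0, 10.0, 50)                   # e.g. years
m = remaining_fraction(t, mu=-1.0, sigma=1.0)
# The apparent decay rate -d ln(m)/dt decreases with litter age: decay
# slows down, as a superposition of fast and slow pools predicts.
apparent_rate = -np.diff(np.log(m)) / np.diff(t)
```

The monotone slowdown of `apparent_rate` is exactly the "progressive slowdown of decay with litter age" the abstract attributes to the rate distribution.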
Molecular decay rate near nonlocal plasmonic particles.
Girard, Christian; Cuche, Aurélien; Dujardin, Erik; Arbouet, Arnaud; Mlayah, Adnen
2015-05-01
When the size of metal nanoparticles is smaller than typically 10 nm, their optical response becomes sensitive to both spatial dispersion and quantum size effects associated with the confinement of the conduction electrons inside the particle. In this Letter, we propose a nonlocal scheme to compute molecular decay rates near spherical nanoparticles that includes electron-electron interactions through a simple model of electronic polarizabilities. The plasmonic particle is schematized by a dynamic dipolar polarizability α(NL)(ω), and the quantum system is characterized by a two-level system. In this scheme, the light-matter interaction is described in terms of classical field susceptibilities. This theoretical framework could be extended to address the influence of nonlocality on the dynamics of quantum systems placed in the vicinity of nano-objects of arbitrary morphologies. PMID:25927799
Properties of the Aftershock Sequences of the 2003 Bingöl, MD = 6.4, (Turkey) Earthquake
NASA Astrophysics Data System (ADS)
Öztürk, S.; Çinar, H.; Bayrak, Y.; Karsli, H.; Daniel, G.
2008-02-01
Aftershock sequences of the magnitude MW = 6.4 Bingöl earthquake of 1 May, 2003 (Turkey) are studied to analyze the spatial and temporal variability of seismicity parameters: the b value of the frequency-magnitude distribution and the p value describing the temporal decay rate of aftershocks. The catalog, taken from the KOERI, contains 516 events and spans a one-month time interval. The b value is found to be 1.49 ± 0.07 with Mc = 3.2. Considering the error limits, the b value is very close to the maximum b value stated in the literature. This larger value may be caused by the paucity of larger aftershocks with magnitude MD ≥ 5.0. Also, the aftershock area is divided into four parts in order to detect differences in b value, and the changes illustrate the heterogeneity of the aftershock region. The p value is calculated as 0.86 ± 0.11, which is relatively small. This small p value may be a result of the slow decay rate of the aftershock activity and the small number of aftershocks. For the fitting of a suitable model and estimation of correct values of the decay parameters, the sequence is also modeled with a background seismicity rate model. Constant background activity does not appear to be important during the first month of the Bingöl aftershock sequence, and this result is coherent with an average estimation of pre-existing seismicity. The results show that use of the simple modified Omori law is reasonable for the analysis. The spatial variability in b value is between 1.2 and 1.8, and the p value varies from 0.6 to 1.2. Although the physical interpretation of the spatial variability of these seismicity parameters is not straightforward, the variation of b and p values can be related to the stress and slip distribution after the mainshock, respectively. Lower b values are observed in the high-stress regions and, to a certain extent, the largest b values are related to Holocene alluvium. Larger p values are found in some parts of the aftershock area although no slip occurred
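A b value with an uncertainty of the form quoted above is conventionally obtained from the Aki/Utsu maximum-likelihood estimator. A sketch on a synthetic Gutenberg-Richter catalog (the estimator is standard; the catalog and seed are invented):

```python
import numpy as np

def b_value_mle(mags, m_c, dm=0.0):
    """Aki/Utsu maximum-likelihood b-value for events with M >= m_c; dm is
    the magnitude binning width (0 for continuous magnitudes). The quoted
    standard error is Aki's b / sqrt(N)."""
    m = np.asarray(mags)
    m = m[m >= m_c]
    b = np.log10(np.e) / (m.mean() - (m_c - dm / 2.0))
    return b, b / np.sqrt(len(m))

# Synthetic catalog drawn from a Gutenberg-Richter distribution with b = 1.5
# above Mc = 3.2, comparable to the values reported for the Bingöl sequence.
rng = np.random.default_rng(42)
true_b = 1.5
mags = 3.2 + rng.exponential(scale=np.log10(np.e) / true_b, size=5000)
b_hat, b_err = b_value_mle(mags, m_c=3.2)
```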
NASA Astrophysics Data System (ADS)
Ichiyanagi, Masayoshi; Takai, Nobuo; Shigefuji, Michiko; Bijukchhen, Subeg; Sasatani, Tsutomu; Rajaure, Sudhir; Dhital, Megh Raj; Takahashi, Hiroaki
2016-02-01
The characteristics of aftershock activity of the 2015 Gorkha, Nepal, earthquake (Mw 7.8) were evaluated. The mainshock and aftershocks were recorded continuously by the international Kathmandu strong motion seismographic array operated by Hokkaido University and Tribhuvan University. Full waveform data without saturation for all events enabled us to clarify aftershock locations and decay characteristics. The aftershock distribution was determined using the estimated local velocity structure. The hypocenter distribution in the Kathmandu metropolitan region was well determined and indicated earthquakes located shallower than 12 km depth, suggesting that aftershocks occurred at depths shallower than the Himalayan main thrust fault. Although numerical investigation suggested less resolution for the depth component, the regional aftershock epicentral distribution of the entire focal region clearly indicated earthquakes concentrated in the eastern margin of the major slip region of the mainshock. The calculated modified Omori law's p value of 1.35 suggests rapid aftershock decay and a possible high temperature structure in the aftershock region.
Spectral scaling of the aftershocks of the Tocopilla 2007 earthquake in northern Chile
NASA Astrophysics Data System (ADS)
Lancieri, M.; Madariaga, R.; Bonilla, F.
2012-04-01
We study the scaling of spectral properties of a set of 68 aftershocks of the 2007 November 14 Tocopilla (M 7.8) earthquake in northern Chile. These are all subduction events with similar reverse-faulting focal mechanisms that were recorded by a homogeneous network of continuously recording strong motion instruments. The seismic moment and the corner frequency are obtained assuming that the aftershocks satisfy an omega-squared spectral decay; radiated energy is computed by integrating the squared velocity spectrum corrected for attenuation at high frequencies and for the finite-bandwidth effect. Using a graphical approach, we test the scaling of the seismic spectrum and the scale invariance of the apparent stress drop with earthquake size. To test whether the Tocopilla aftershocks scale with a single parameter, we introduce a non-dimensional number, Cr, that should be constant if earthquakes are self-similar. For the Tocopilla aftershocks, Cr varies by a factor of 2. More interestingly, Cr for the aftershocks is close to 2, the value that is expected for events that are approximately modelled by a circular crack. Thus, in spite of obvious differences in waveforms, the aftershocks of the Tocopilla earthquake are self-similar. The main shock is different because its records contain large near-field waves. Finally, we investigate the scaling of the energy release rate, Gc, with slip. We estimated Gc from our previous estimates of the source parameters, assuming a simple circular crack model. We find that Gc values scale with slip and are in good agreement with those found by Abercrombie and Rice for the Northridge aftershocks.
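The corner-frequency analysis assumes an omega-squared (Brune-type) source spectrum: flat below the corner frequency, with the plateau proportional to seismic moment, and falling as f^-2 above it. A minimal numerical illustration (parameter values are arbitrary, not the Tocopilla estimates):

```python
import numpy as np

def brune_spectrum(f, moment, f_c):
    """Omega-squared (Brune-type) displacement source spectrum: flat at low
    frequency (level proportional to seismic moment) and falling as f**-2
    above the corner frequency f_c."""
    return moment / (1.0 + (f / f_c) ** 2)

f = np.logspace(-1, 2, 200)                      # Hz
spec = brune_spectrum(f, moment=1e17, f_c=2.0)   # illustrative values
# Check the high-frequency asymptote: the log-log slope tends to -2
# well above the corner frequency.
hf = f > 50.0
slope = np.polyfit(np.log10(f[hf]), np.log10(spec[hf]), 1)[0]
```

Fitting this two-parameter shape to observed displacement spectra is what yields the seismic moment (plateau) and corner frequency (roll-off) used in the scaling analysis.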
NASA Astrophysics Data System (ADS)
Eddo, J.; Olsen, K.
2007-12-01
Numerous studies have found significant correlation of static Coulomb Failure Stress (sCFS, co-seismic earthquake-induced stresses) with the occurrence of mainshocks, aftershocks, and triggered slip (e.g. Stein, 1999; Kilb, 2003; King et al., 1994; Arnadottir, 2003; Du et al., 2003; Freed, 2005). Static CFS estimates are primarily dependent on the final co-seismic slip distribution and fault geometry. Recently, complete or dynamic Coulomb Failure Stress, parameterized by its largest positive value (peak dCFS), has been proposed as an alternative triggering mechanism (Kilb, 2002). Peak dCFS estimates, in addition to the final slip dependence, have been shown to be strongly dependent on co-seismic source effects, such as rupture directivity (Kilb, 2002). However, most studies of stress transfer and earthquake triggering only incorporate sCFS, and only a few studies have attempted to correlate seismicity rate change and triggered slip on surrounding faults. In this study we have modeled the distributions of sCFS and peak dCFS for four recent historical earthquakes (1968 M6.7 Borrego Mountain, 1979 M6.6 Imperial Valley, 1987 M6.6 Elmore Ranch, and 1987 M6.5 Superstition Hills) using a fourth-order staggered-grid finite-difference method, which incorporates anelastic attenuation, a 3-D velocity model, and heterogeneous slip distributions derived from strong ground-motion and geodetic inversions. The study area is a 150 by 150 km region located in the Salton Trough of the Imperial Valley, California. A cross-correlation is calculated between the modeled stresses and seismicity rate change in terms of the Z-value (Habermann, 1983) with a background seismicity rate removed. Modeling results show that peak dCFS provides significantly better correlation with aftershock distributions, seismicity rate change, and triggered slip than sCFS for all four events. Both sCFS and peak dCFS provide significant goodness of fit (>55%) with seismicity rate change up to a month after the mainshocks, with
Mechanical origin of aftershocks.
Lippiello, E; Giacco, F; Marzocchi, W; Godano, C; de Arcangelis, L
2015-01-01
Aftershocks are the most striking evidence of earthquake interactions, and the physical mechanisms at the origin of their occurrence are still intensively debated. Novel insights stem from recent results on the influence of the faulting style on the aftershock organisation in magnitude and time. Our study shows that the size of the aftershock zone depends on the fault geometry. We find that positive correlations among parameters controlling aftershock occurrence in time, energy, and space are a stable feature of seismicity, independently of magnitude range and geographic area. We explain the ensemble of experimental findings by means of a description of the Earth's crust as a heterogeneous elastic medium coupled with a Maxwell viscoelastic asthenosphere. Our results show that a heterogeneous stress distribution in an elastic layer combined with coupling to a viscous flow are sufficient ingredients to describe the physics of aftershock triggering. PMID:26497720
Tests of remote aftershock triggering by small mainshocks using Taiwan's earthquake catalog
NASA Astrophysics Data System (ADS)
Peng, W.; Toda, S.
2014-12-01
To understand earthquake interaction and forecast time-dependent seismic hazard, it is essential to evaluate which stress transfer, static or dynamic, plays the major role in triggering aftershocks and subsequent mainshocks. Felzer and Brodsky focused on small mainshocks (2≤M<3) and their aftershocks, and argued that only dynamic stress change brings about earthquake-to-earthquake triggering, whereas Richards-Dinger et al. (2010) claimed that those selected small mainshock-aftershock pairs reflected not earthquake-to-earthquake triggering but the simultaneous occurrence of independent aftershocks following a larger earthquake or during a significant swarm sequence. We test these hypotheses using Taiwan's earthquake catalog, taking advantage of the absence of any larger event and of the significant seismic swarms typically seen near active volcanoes. Using Felzer and Brodsky's method and their standard parameters, we found only 14 mainshock-aftershock pairs within a 20 km distance in Taiwan's catalog from 1994 to 2010. Although Taiwan's catalog has a similar number of earthquakes to California's, the number of pairs is about 10% of that in the California catalog. This may indicate the effect of the lack of large earthquakes and significant seismic swarms in the catalog. To more fully understand the properties of Taiwan's catalog, we loosened the screening parameters to obtain more pairs and found a linear aftershock density with a power-law decay of -1.12±0.38, very similar to that of Felzer and Brodsky. However, none of those mainshock-aftershock pairs were associated with a M7 rupture event or M6 events. To find what mechanism controlled the aftershock density triggered by small mainshocks in Taiwan, we randomized earthquake magnitude and location. We then found that the density decay over a short time period behaves more like randomized seismicity than mainshock-aftershock triggering. Moreover, 5 out of 6 pairs were found in a swarm-like temporal seismicity rate increase
Triggered Swarms and Induced Aftershock Sequences in Geothermal Systems
NASA Astrophysics Data System (ADS)
Shcherbakov, R.; Turcotte, D. L.; Yikilmaz, M. B.; Kellogg, L. H.; Rundle, J. B.
2015-12-01
Natural geothermal systems, which are used for energy generation, are usually associated with high seismic activity. This can be related to the large-scale injection and extraction of fluids to enhance geothermal recovery, which changes the pore pressure and poroelastic stress field and can stimulate the occurrence of earthquakes. These systems are also prone to triggering of seismicity by the passage of seismic waves generated by large distant main shocks. In this study, we analyze clustering and triggering of seismicity at several geothermal fields in California. In particular, we consider the seismicity at the Geysers, Coso, and Salton Sea geothermal fields. We analyze aftershock sequences generated by local large events with magnitudes greater than 4.0 and earthquake swarms generated by several significant distant main shocks. We show that the rate of the aftershock sequences generated by the local large events in the two days before and two days after the reference event can be modelled reasonably well by the time-dependent Epidemic Type Aftershock Sequence (ETAS) model. On the other hand, the swarms of activity triggered by large distant earthquakes cannot be described by the ETAS model. To model the increase in the rate of seismicity associated with triggering by large distant main shocks, we introduce an additional time-dependent triggering mechanism into the ETAS model. In almost all cases the frequency-magnitude statistics of triggered sequences follow Gutenberg-Richter scaling to a good approximation. The analysis indicates that seismicity triggered by relatively large local events can initiate sequences similar to regular aftershock sequences. In contrast, the distant main shocks trigger swarm-like activity with faster-decaying rates.
Foreshock activity related to enhanced aftershock production
NASA Astrophysics Data System (ADS)
Marsan, D.; Helmstetter, A.; Bouchon, M.; Dublanchet, P.
2014-10-01
Foreshock activity sometimes precedes the occurrence of large earthquakes, but the nature of this seismicity is still debated, and whether it marks transient deformation and/or slip nucleation is still unclear. We here study at the worldwide scale how foreshock occurrence affects the postseismic phase and find a significant positive correlation between foreshock and aftershock activities: earthquakes preceded by accelerating seismicity rates produce 40% more aftershocks on average, and the length of the aftershock zone after 20 days is 20% larger. These observations cannot be reproduced by standard earthquake clustering models that predict the accelerating pattern of foreshock occurrence but not its impact on aftershock activity. This strongly suggests that slow deformation transients, possibly related to episodic creep, could initiate prior to the main shock and extend past the coseismic phase, resulting in compound ruptures that include a very long period (up to tens of days) component.
Discrete characteristics of the aftershock sequence of the 2011 Van earthquake
NASA Astrophysics Data System (ADS)
Toker, Mustafa
2014-10-01
An intraplate earthquake of magnitude Mw 7.2 occurred on a NE-SW trending blind oblique thrust fault in an accretionary orogen, in the Van region of Eastern Anatolia, on October 23, 2011. The aftershock seismicity of the Van earthquake was not continuous but, rather, highly discrete. This sheds light on the chaotic nonuniformity of the event distribution and plays a key role in determining the seismic coupling between the rupturing process and seismogenicity. I analyzed the discrete statistical mechanics of the 2011 Van mainshock-aftershock sequence with an estimation of the non-dimensional tuning parameters consisting of: temporal clusters (C) and the random (RN) distribution of aftershocks, range of size scales (ROSS), strength change (εD), temperature (T), p-value of temporal decay, material parameter R-value, seismic coupling χ, and Q-value of aftershock distribution. I also investigated the frequency-size (FS) and temporal (T) statistics and the sequential characteristics of aftershock dynamics using a discrete approach, and examined the discrete evolutionary periods of the Van earthquake Gutenberg-Richter (GR) distribution. My study revealed that the FS and T statistical properties of the aftershock sequence follow the Gutenberg-Richter (GR) distribution and a temporally clustered (C), random (RN) Poisson distribution, respectively. The overall statistical behavior of the aftershock sequence shows that the Van earthquake originated in a discrete structural framework with high seismic coupling under highly variable faulting conditions. My analyses relate this larger dip-slip event to a discrete seismogenesis with two main components: the complex fracturing and branching framework of the ruptured fault, and the dynamic strengthening and hardening behavior of the earthquake. The results indicate two dynamic cases. The first is associated with the aperiodic nature of the aftershock distribution, indicating a time-independent Poissonian event. The second is associated with a variable slip model
Forecasting large aftershocks within one day after the main shock
Omi, Takahiro; Ogata, Yosihiko; Hirata, Yoshito; Aihara, Kazuyuki
2013-01-01
Forecasting of aftershock probabilities has been performed by the authorities to mitigate hazards in the disaster area after a main shock. However, despite the fact that most large aftershocks occur within a day of the main shock, operational forecasting has been very difficult during this time period owing to incomplete recording of early aftershocks. Here we propose a real-time method for efficiently forecasting the occurrence rates of potential aftershocks using the systematically incomplete observations that are available within a few hours after a main shock. We demonstrate the method's utility by retrospective early forecasting of the aftershock activity of the 2011 M9.0 Tohoku-Oki Earthquake in Japan. Furthermore, we compare results obtained from real-time data with those from compiled preliminary data to examine the robustness of the present method for the aftershocks of a recent inland earthquake in Japan. PMID:23860594
40 CFR 1065.644 - Vacuum-decay leak rate.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 33 2014-07-01 2014-07-01 false Vacuum-decay leak rate. 1065.644 Section 1065.644 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR POLLUTION CONTROLS ENGINE-TESTING PROCEDURES Calculations and Data Requirements § 1065.644 Vacuum-decay leak...
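The regulation's exact leak-rate equation is not reproduced in the truncated record above, but the underlying idea, inferring a mass flow from the pressure change in an evacuated volume of known size, can be sketched from the ideal-gas law (the gas choice, symbols, and numbers below are illustrative assumptions, not the CFR's formulation):

```python
# Illustrative vacuum-decay leak-rate estimate from the ideal-gas law:
# n = p V / (R T), so the molar leak rate into a fixed evacuated volume is
# (V / R) * d(p/T)/dt, and the mass rate follows from the molar mass of
# the leaking gas (assumed here to be air).
R = 8.314          # J/(mol*K), universal gas constant
M_AIR = 0.02897    # kg/mol, molar mass of dry air

def leak_rate_kg_per_s(volume_m3, p1_pa, p2_pa, t1_k, t2_k, dt_s):
    """Mass leak rate inferred from the change in p/T over dt_s seconds."""
    molar_rate = volume_m3 / R * (p2_pa / t2_k - p1_pa / t1_k) / dt_s
    return M_AIR * molar_rate

# Example: a 10 L volume whose pressure rises 200 Pa in 60 s at 298 K.
rate = leak_rate_kg_per_s(0.010, 1000.0, 1200.0, 298.0, 298.0, 60.0)
```

For the regulatory test itself, the equation, symbols, and pass/fail criteria in the actual text of 40 CFR 1065.644 govern.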
NASA Astrophysics Data System (ADS)
Yoder, Mark R.; Van Aalsburg, Jordan; Turcotte, Donald L.; Abaimov, Sergey G.; Rundle, John B.
2013-01-01
Aftershock statistics provide a wealth of data that can be used to better understand earthquake physics. Aftershocks satisfy scale-invariant Gutenberg-Richter (GR) frequency-magnitude statistics. They also satisfy Omori's law for power-law seismicity rate decay and Båth's law for maximum-magnitude scaling. The branching aftershock sequence (BASS) model, which is the scale-invariant limit of the epidemic-type aftershock sequence model (ETAS), uses these scaling laws to generate synthetic aftershock sequences. One objective of this paper is to show that the branching process in these models satisfies Tokunaga branching statistics. Tokunaga branching statistics were originally developed for drainage networks and have been subsequently shown to be valid in many other applications associated with complex phenomena. Specifically, these are characteristic of a universality class in statistical physics associated with diffusion-limited aggregation. We first present a deterministic version of the BASS model and show that it satisfies the Tokunaga side-branching statistics. We then show that a fully stochastic BASS simulation gives similar results. We also study foreshock statistics using our BASS simulations. We show that the frequency-magnitude statistics in BASS simulations scale as the exponential of the magnitude difference between the mainshock and the foreshock, inverse GR scaling. We also show that the rate of foreshock occurrence in BASS simulations decays inversely with the time difference between foreshock and mainshock, an inverse Omori scaling. Both inverse scaling laws have been previously introduced empirically to explain observed foreshock statistics. Observations have demonstrated both of these scaling relations to be valid, consistent with our simulations. ETAS simulations, in general, do not generate Båth's law and do not generate inverse GR scaling.
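The BASS construction described above can be sketched as a branching process: each parent spawns first-generation aftershocks whose count follows Båth's-law productivity, whose magnitudes follow a truncated Gutenberg-Richter distribution, and whose times follow the Omori law; the cascade then recurses over all generations. A minimal BASS-style sketch, not the authors' implementation (parameter values are typical, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(1)
b, p, c = 1.0, 1.25, 0.1   # GR b-value, Omori exponent and offset (typical)
dm_bath = 1.2              # Båth's-law magnitude gap
m_min = 2.0                # minimum generated magnitude
t_max = 1000.0             # days; Omori times truncated here

def sample_gr(n, m_hi):
    """Gutenberg-Richter magnitudes truncated to [m_min, m_hi]."""
    beta = b * np.log(10.0)
    u = rng.random(n)
    return -np.log((1 - u) * np.exp(-beta * m_min) + u * np.exp(-beta * m_hi)) / beta

def sample_omori(n):
    """Inverse-CDF samples of the Omori density (1 + t/c)**-p on [0, t_max]."""
    u = rng.random(n)
    tail = (1 + t_max / c) ** (1 - p)
    return c * ((1 - u * (1 - tail)) ** (1 / (1 - p)) - 1)

def direct_aftershocks(m_parent, t_parent):
    """First-generation aftershocks of one parent: count from Båth's-law
    productivity, magnitudes from truncated GR, times from Omori."""
    m_hi = m_parent - dm_bath        # Båth's law caps aftershock magnitudes
    if m_hi <= m_min:
        return np.empty(0), np.empty(0)
    n = rng.poisson(10.0 ** (b * (m_hi - m_min)))
    return sample_gr(n, m_hi), t_parent + sample_omori(n)

# Cascade: every aftershock may itself trigger aftershocks, so the full
# synthetic sequence includes all generations, not only the first.
queue, events = [(6.0, 0.0)], []
while queue:
    m, t = queue.pop()
    mags, times = direct_aftershocks(m, t)
    events.extend(zip(mags, times))
    queue.extend(zip(mags, times))
```

Because each generation's maximum magnitude drops by the Båth gap, the cascade terminates once descendants fall below `m_min`, which is what keeps this truncated construction subcritical.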
Modern Measurements of Uranium Decay Rates
NASA Astrophysics Data System (ADS)
Parsons-Moss, T.; Faye, S. A.; Williams, R. W.; Wang, T. F.; Renne, P. R.; Mundil, R.; Harrison, M.; Bandong, B. B.; Moody, K.; Knight, K. B.
2015-12-01
It has been widely recognized that accurate and precise decay constants (λ) are critical to geochronology, as highlighted by the EARTHTIME initiative, particularly for the calibration benchmarks λ235U and λ238U [1]. Alpha counting experiments in 1971 [2] measured λ235U and λ238U with ~0.1% precision, but have never been independently validated. We are embarking on new direct measurements of λ235U, λ238U, λ234Th, and λ234U using independent approaches for each nuclide. For the measurement of λ235U, highly enriched 235U samples will be chemically purified and analyzed for U concentration and isotopic composition by multi-collector inductively coupled plasma mass spectrometry (MC-ICP-MS). Thin films will be electrodeposited from these solutions and the α activity will be measured in an α-γ coincidence counting apparatus, which allows reduced uncertainty in counting efficiency while achieving adequate counting statistics. For the λ238U measurement we will measure ingrowth of 234Th in chemically purified, isotopically enriched 238U solutions, by quantitatively separating the Th and allowing complete decay to 234U. All of these measurements will be done using MC-ICP-MS, aiming at 0.05% precision. This approach is expected to result in values of λ238U with less than 0.1% uncertainty, if combined with improved λ234Th measurements. These will be achieved using direct decay measurements with an E-ΔE charged-particle telescope in coincidence with a gamma detector. This system allows measurement of 234Th β-decay and simultaneous detection and identification of α particles emitted by the 234U daughter, thus observing λ234U at the same time. The high-precision λ234U obtained by the direct activity measurements can independently verify the commonly used values obtained by indirect methods [3]. An overarching goal of the project is to ensure the quality of results, including metrological traceability, in order to facilitate implementation across diverse disciplines. [1] T
Rate of gravitational inflaton decay via gauge trace anomaly
Watanabe, Yuki
2011-02-15
We analyze decay processes of the inflaton field, {phi}, during the coherent oscillation phase after inflation in f({phi})R gravity. It is inevitable that the inflaton decays gravitationally into gauge fields in the presence of f({phi})R coupling. We show a concrete calculation of the rate that the inflaton field decays into a pair of gauge fields via the trace anomaly. Comparing this new decay channel via the anomaly with the channels from the tree-level analysis, we find that the branching ratio crucially depends on masses and the internal multiplicities (flavor quantum number) of decay product particles. While the inflaton decays exclusively into light fields, heavy fields still play a role in quantum loops. We argue that this process in principle allows us to constrain the effects of arbitrary heavy particles in the reheating. We also apply our analysis to Higgs inflation, and find that the gravitational decay rate would never exceed gauge interaction decay rates if quantum gravity is unimportant.
A Fluid-driven Earthquake Cycle, Omori's Law, and Fluid-driven Aftershocks
NASA Astrophysics Data System (ADS)
Miller, S. A.
2015-12-01
Few models exist that predict the Omori's Law of aftershock rate decay, with rate-state friction the only physically based model. ETAS is a probabilistic model of cascading failures, and is sometimes used to infer rate-state frictional properties. However, the (perhaps dominant) role of fluids in the earthquake process is being increasingly realised, so a fluid-based physical model for Omori's Law should be available. In this talk, I present a hypothesis for a fluid-driven earthquake cycle where dehydration and decarbonization at depth provide continuous sources of buoyant high pressure fluids that must eventually make their way back to the surface. The natural pathway for fluid escape is along plate boundaries, where in the ductile regime high pressure fluids likely play an integral role in episodic tremor and slow slip earthquakes. At shallower levels, high pressure fluids pool at the base of seismogenic zones, with the reservoir expanding in scale through the earthquake cycle. Late in the cycle, these fluids can invade and degrade the strength of the brittle crust and contribute to earthquake nucleation. The mainshock opens permeable networks that provide escape pathways for high pressure fluids and generate aftershocks along these flow paths, while creating new pathways by the aftershocks themselves. Thermally activated precipitation then seals up these pathways, returning the system to a low-permeability environment and an effective seal during the subsequent tectonic stress buildup. I find that the multiplicative effect of an exponential dependence of permeability on the effective normal stress coupled with an Arrhenius-type, thermally activated exponential reduction in permeability results in Omori's Law. I simulate this scenario using a very simple model that combines non-linear diffusion and a step-wise increase in permeability when a Mohr Coulomb failure condition is met, and allow permeability to decrease as an exponential function in time. I show very
The decay rates of autoionizing quasi-molecular states
NASA Astrophysics Data System (ADS)
Kishinevsky, L. M.; Krakov, B. G.; Parilis, E. S.
1981-09-01
The decay rates of three quasi-molecular autoionizing states of a HeBe⁴⁺-like system consisting of two electrons and two Coulomb centres are calculated. It is shown that with decreasing internuclear distance the decay rate passes through a maximum, which for (2pσ)² states is 1.6 × 10¹⁵ s⁻¹. This considerably exceeds the value for the united atom.
Decay rate of the second radiation belt
NASA Astrophysics Data System (ADS)
Badhwar, G. D.; Robbins, D. E.
Variations in the Earth's trapped (Van Allen) belts produced by solar flare particle events are not well understood. Few observations of increases in particle populations have been reported. This is particularly true for effects in low Earth orbit, where manned spaceflights are conducted. This paper reports the existence of a second proton belt and its subsequent decay as measured by a tissue-equivalent proportional counter and a particle spectrometer on five Space Shuttle flights covering an eighteen-month period. The creation of this second belt is attributed to the injection of particles from a solar particle event which occurred at 2246 UT, March 22, 1991. Comparisons with observations onboard the Russian Mir space station and other unmanned satellites are made. Shuttle measurements and data from other spacecraft are used to determine the e-folding time of the peak of the second proton belt, which was ten months. Proton populations in the second belt returned to values of quiescent times within eighteen months. The increase in absorbed dose attributed to protons in the second belt was approximately 20%. Passive dosimeter measurements were in good agreement with this value.
Comparison of the non-proliferation event aftershocks with other Nevada Test Site events
Jarpe, S.; Goldstein, P.; Zucca, J.J.
1994-04-01
As part of a larger effort to develop technology for on-site inspection of ambiguous underground seismic events, we have been working to identify phenomenology of aftershock seismicity which would be useful for discriminating between nuclear explosions, chemical explosions, earthquakes, or other seismic events. Phenomenology we have investigated includes: the spatial distribution of aftershocks, the number of aftershocks as a function of time after the main event, the size of the aftershocks, and waveform frequency content. Our major conclusions are: (1) Depending on local geologic conditions, aftershock production rate two weeks after zero time ranges from 1 to 100 per day. (2) Aftershocks of concentrated chemical explosions such as the NPE are indistinguishable from aftershocks of nuclear explosions. (3) Earthquake and explosion aftershock sequences may be differentiated on the basis of depth, magnitude, and in some cases, frequency content of seismic signals.
Photoluminescence decay rate of silicon nanoparticles modified with gold nanoislands
2014-01-01
We investigated plasmon-assisted enhancement of emission from silicon nanoparticles (ncs-Si) embedded into a porous SiOx matrix in the 500- to 820-nm wavelength range. In the presence of a gold nanoisland film in the near-surface region, ncs-Si exhibited up to twofold luminescence enhancement at emission frequencies that correspond to the plasmon resonance frequency of the Au nanoparticles. Enhancement of the photoluminescence (PL) intensity was attributed to coupling with the localized surface plasmons (LSPs) excited in Au nanoparticles and to an increase in the radiative decay rate of ncs-Si. The spontaneous emission decay rate of ncs-Si modified by a thin Au film was shown to be accelerated over a wide emission spectral range. The emission decay rate distribution was determined by fitting the experimental decay curves to the stretched exponential model. The observed increase of the PL decay rate distribution width for the Au-coated nc-Si-SiOx sample in comparison with the uncoated one was explained by fluctuations in the surface-plasmon excitation rate. PACS 78.67.Bf; 78.55.-m PMID:24708532
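The stretched-exponential fitting used to extract the decay-rate distribution can be sketched as follows. This is a minimal illustration on synthetic data; the function and parameter names are ours, not from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def stretched_exp(t, i0, tau, beta):
    # Kohlrausch stretched-exponential decay: I(t) = I0 * exp(-(t/tau)^beta)
    return i0 * np.exp(-(t / tau) ** beta)

# Synthetic decay curve (tau = 20 us, beta = 0.7) with a little noise
rng = np.random.default_rng(0)
t = np.linspace(0.1, 200, 400)                  # microseconds
noisy = stretched_exp(t, 1.0, 20.0, 0.7) + rng.normal(0, 0.005, t.size)

# Fit; p0 gives rough initial guesses for (I0, tau, beta),
# bounds keep all three parameters positive during optimization
popt, _ = curve_fit(stretched_exp, t, noisy, p0=(1.0, 10.0, 1.0),
                    bounds=(0, np.inf))
i0_fit, tau_fit, beta_fit = popt
# A mean decay rate is roughly 1/tau; beta < 1 signals a distribution of rates
print(f"tau = {tau_fit:.1f} us, beta = {beta_fit:.2f}")
```

A fitted beta well below 1, as reported for the Au-coated sample, corresponds to a broadened distribution of decay rates rather than a single exponential channel.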
Bayesian Predictive Distribution for the Magnitude of the Largest Aftershock
NASA Astrophysics Data System (ADS)
Shcherbakov, R.
2014-12-01
Aftershock sequences, which follow large earthquakes, last hundreds of days and are characterized by well-defined frequency-magnitude and spatio-temporal distributions. The largest aftershocks in a sequence constitute a significant hazard and can inflict additional damage to infrastructure. Therefore, the estimation of the magnitude of the largest possible aftershocks in a sequence is of high importance. In this work, we propose a statistical model based on Bayesian analysis and extreme value statistics to describe the distribution of magnitudes of the largest aftershocks in a sequence. We derive an analytical expression for a Bayesian predictive distribution function for the magnitude of the largest expected aftershock and compute the corresponding confidence intervals. We assume that the occurrence of aftershocks can be modeled, to a good approximation, by a non-homogeneous Poisson process with a temporal event rate given by the modified Omori law. We also assume that the frequency-magnitude statistics of aftershocks can be approximated by Gutenberg-Richter scaling. We apply our analysis to 19 prominent aftershock sequences, which occurred in the last 30 years, in order to compute the Bayesian predictive distributions and the corresponding confidence intervals. In the analysis, we use the information of the early aftershocks in the sequences (in the first 1, 10, and 30 days after the main shock) to estimate retrospectively the confidence intervals for the magnitude of the expected largest aftershocks. We demonstrate by analysing 19 past sequences that in many cases we are able to constrain the magnitudes of the largest aftershocks. For example, this includes the analysis of the Darfield (Christchurch) aftershock sequence. The proposed analysis can be used for earthquake hazard assessment and forecasting associated with the occurrence of large aftershocks. The improvement in instrumental data associated with early aftershocks can greatly enhance the analysis and
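The full Bayesian treatment is not reproduced here, but the extreme-value core of the approach (a non-homogeneous Poisson process following the modified Omori law, combined with Gutenberg-Richter magnitude statistics) can be sketched with plug-in parameters. All values below are illustrative, not fitted to any of the 19 sequences.

```python
import numpy as np

def omori_count(K, c, p, t1, t2):
    # Expected number of aftershocks above Mc in [t1, t2] days,
    # integrating the modified Omori law n(t) = K / (c + t)^p
    if np.isclose(p, 1.0):
        return K * (np.log(c + t2) - np.log(c + t1))
    return K * ((c + t2) ** (1 - p) - (c + t1) ** (1 - p)) / (1 - p)

def p_max_below(m, b, m_c, n_expected):
    # Extreme-value CDF of the largest aftershock magnitude: for a Poisson
    # process with Gutenberg-Richter magnitudes above Mc,
    #   P(M_max <= m) = exp(-N * 10^(-b (m - Mc)))
    return np.exp(-n_expected * 10.0 ** (-b * (m - m_c)))

# Illustrative parameters for a one-year sequence above Mc = 3
n = omori_count(K=200.0, c=0.1, p=1.1, t1=0.0, t2=365.0)
for m in (5.0, 6.0, 7.0):
    print(f"P(M_max <= {m}) = {p_max_below(m, b=1.0, m_c=3.0, n_expected=n):.3f}")
```

The Bayesian step of the paper replaces these plug-in parameters with posterior distributions inferred from the early aftershocks, which is what yields predictive confidence intervals rather than point estimates.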
Energy decay rate of the thermoelastic Bresse system
NASA Astrophysics Data System (ADS)
Liu, Zhuangyi; Rao, Bopeng
2009-01-01
In this paper, we study the energy decay rate for the thermoelastic Bresse system which describes the motion of a linear planar, shearable thermoelastic beam. If the longitudinal motion and heat transfer are neglected, this model reduces to the well-known thermoelastic Timoshenko beam equations. The system consists of three wave equations and two heat equations coupled in a certain pattern. The two wave equations about the longitudinal displacement and shear angle displacement are effectively damped by the dissipation from the two heat equations. Actually, the corresponding energy decays exponentially like the classical one-dimensional thermoelastic system. However, the third wave equation about the vertical displacement is only weakly damped. Thus the decay rate of the energy of the overall system is still unknown. We will show that the exponential decay rate is preserved when the wave speed of the vertical displacement coincides with the wave speed of longitudinal displacement or of the shear angle displacement. Otherwise, only a polynomial-type decay rate can be obtained. These results are proved by verifying the frequency domain conditions.
Observations of HF backscatter decay rates from HAARP generated FAI
NASA Astrophysics Data System (ADS)
Bristow, William; Hysell, David
2016-07-01
Suitable experiments at the High-frequency Active Auroral Research Program (HAARP) facilities in Gakona, Alaska, create a region of ionospheric Field-Aligned Irregularities (FAI) that produces strong radar backscatter observed by the SuperDARN radar on Kodiak Island, Alaska. Creation of FAI in HF ionospheric modification experiments has been studied by a number of authors who have developed a rich theoretical background. The decay of the irregularities, however, has not been so widely studied, yet it has the potential for providing estimates of the parameters of natural irregularity diffusion, which are difficult to measure by other means. Hysell et al. [1996] demonstrated using the decay of radar scatter above the Sura heating facility to estimate irregularity diffusion. A large database of radar backscatter from HAARP-generated FAI has been collected over the years. Experiments often cycled the heater power on and off in a way that allowed estimates of the FAI decay rate. The database has been examined to extract decay time estimates and diffusion rates over a range of ionospheric conditions. This presentation will summarize the database and the estimated diffusion rates, and will discuss the potential for targeted experiments for aeronomy measurements. Hysell, D. L., M. C. Kelley, Y. M. Yampolski, V. S. Beley, A. V. Koloskov, P. V. Ponomarenko, and O. F. Tyrnov, HF radar observations of decaying artificial field aligned irregularities, J. Geophys. Res., 101, 26,981, 1996.
Radiative decay rates of impurity states in semiconductor nanocrystals
Turkov, Vadim K.; Baranov, Alexander V.; Fedorov, Anatoly V.; Rukhlenko, Ivan D.
2015-10-15
Doped semiconductor nanocrystals are a versatile material base for contemporary photonic and optoelectronic devices. Here, for the first time to the best of our knowledge, we theoretically calculate the radiative decay rates of the lowest-energy states of donor impurity in spherical nanocrystals made of four widely used semiconductors: ZnS, CdSe, Ge, and GaAs. The decay rates were shown to vary significantly with the nanocrystal radius, increasing by almost three orders of magnitude when the radius is reduced from 15 to 5 nm. Our results suggest that spontaneous emission may dominate the decay of impurity states at low temperatures, and should be taken into account in the design of advanced materials and devices based on doped semiconductor nanocrystals.
Statistical estimation of the duration of aftershock sequences
NASA Astrophysics Data System (ADS)
Hainzl, S.; Christophersen, A.; Rhoades, D.; Harte, D.
2016-05-01
It is well known that large earthquakes generally trigger aftershock sequences. However, the duration of those sequences is unclear due to the gradual power-law decay with time. The triggering time is assumed to be infinite in the epidemic type aftershock sequence (ETAS) model, a widely used statistical model to describe clustering phenomena in observed earthquake catalogues. This assumption leads to the constraint that the power-law exponent p of the Omori-Utsu decay has to be larger than one to avoid supercritical conditions with accelerating seismic activity on long timescales. In contrast, seismicity models based on rate- and state-dependent friction observed in laboratory experiments predict p ≤ 1 and a finite triggering time scaling inversely to the tectonic stressing rate. To investigate this conflict, we analyse an ETAS model with finite triggering times, which allow smaller values of p. We use synthetic earthquake sequences to show that the assumption of infinite triggering times can lead to a significant bias in the maximum likelihood estimates of the ETAS parameters. Furthermore, it is shown that the triggering time can be reasonably estimated using real earthquake catalogue data, although the uncertainties are large. The analysis of real earthquake catalogues indicates mainly finite triggering times on the order of 100 days to 10 years with a weak negative correlation to the background rate, in agreement with expectations of the rate- and state-friction model. The triggering time is not the same as the apparent duration, which is the time period in which aftershocks dominate the seismicity. The apparent duration is shown to be strongly dependent on the mainshock magnitude and the level of background activity. It can be much shorter than the triggering time. Finally, we perform forward simulations to estimate the effective forecasting period, which is the time period following a mainshock, in which ETAS simulations can improve rate estimates after the
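The role of a finite triggering time can be made concrete: for p ≤ 1 the Omori-Utsu integral diverges on an infinite interval but is normalizable on [0, t_max], so triggering times can be drawn by inverse-transform sampling. A minimal sketch with illustrative parameter values:

```python
import numpy as np

def truncated_omori_times(n, c, p, t_max, rng):
    # Inverse-transform sampling from f(t) proportional to (c + t)^(-p)
    # on [0, t_max]. For p <= 1 the untruncated integral diverges, so the
    # finite triggering time t_max is what makes the density normalizable.
    u = rng.random(n)
    if np.isclose(p, 1.0):
        return c * (1 + t_max / c) ** u - c
    a = c ** (1 - p)
    b = (c + t_max) ** (1 - p)
    return (a + u * (b - a)) ** (1 / (1 - p)) - c

rng = np.random.default_rng(1)
t = truncated_omori_times(100_000, c=0.01, p=0.9, t_max=1000.0, rng=rng)
# All sampled triggering times fall inside the finite window, and the
# sequence is strongly front-loaded, as Omori-type decay requires
print(t.min(), t.max(), np.median(t))
```

Fitting a standard (infinite-horizon) ETAS model to catalogues generated this way is the kind of experiment that exposes the bias in the maximum likelihood estimates described above.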
Influences of the astrophysical environment on nuclear decay rates
Norman, E.B.
1987-09-01
In many astronomical environments, physical conditions are so extreme that nuclear decay rates can be significantly altered from their laboratory values. Such effects are relevant to a number of current problems in nuclear astrophysics. Experiments related to these problems are now being pursued, and will be described in this talk. 19 refs., 5 figs.
Uncertainties in Astrophysical β-decay Rates from the FRDM
Bertolli, M.G.; Möller, P.; Jones, S.
2014-06-15
β⁻-decay rates are of crucial importance in stellar evolution and nucleosynthesis, as they are a key component in stellar processes. Tabulated values of the decay rates as functions of both temperature T and density ρ are necessary input to stellar evolution codes such as MESA, or large-scale nucleosynthesis simulations such as those performed by the NuGrid collaboration. Therefore, it is interesting to know the uncertainties in these rates and the effects of these uncertainties on stellar structure and isotopic yields. We have calculated β-strength functions and reaction rates for nuclei ranging from ¹⁶O to ³³⁹136, extending from the proton drip line to the neutron drip line, based on a quasi-particle random-phase approximation (QRPA) in a deformed folded-Yukawa single-particle model. Q values are determined from the finite-range droplet mass model (FRDM). We have investigated the effect of model uncertainty on astrophysical β⁻-decay rates calculated by the FRDM. The sources of uncertainty considered are Q values and deformation. The rates and their uncertainties are generated for a variety of temperature and density ranges, corresponding to key stellar processes. We demonstrate the effects of these rate uncertainties on isotopic abundances using the NuGrid network calculations.
Materials Outgassing Rate Decay in Vacuum at Isothermal Conditions
NASA Technical Reports Server (NTRS)
Huang, Alvin Y.; Kastanas, George N.; Kramer, Leonard; Soares, Carlos E.; Mikatarian, Ronald R.
2016-01-01
As a laboratory for scientific research, the International Space Station has been in Low Earth Orbit for nearly 20 years and is expected to be on-orbit for another 10 years. The ISS has been maintaining a relatively pristine contamination environment for science payloads. Contamination induced by materials outgassing is currently the dominant source for sensitive surfaces on ISS, and modeling the outgassing rate decay over a 20- to 30-year period is challenging. Materials outgassing is described herein as a diffusion-reaction process using ASTM E 1559 rate data. The observation of -1/2 (diffusion) or non-integer (reaction-limited) rate decay exponents for common ISS materials indicates that classical reaction kinetics is unsatisfactory for modeling materials outgassing. Non-randomness of reactant concentrations at the interface is the source of this deviation from classical reaction kinetics. A diffusion-limited decay was adopted as the result of the correlation of the contaminant layer thicknesses on returned ISS hardware, the existence of high-outgassing silicone exhibiting near diffusion-limited decay, and the confirmation of non-depleted material after ten years in Low Earth Orbit.
Keywords: Materials Outgassing, ASTM E 1559, Reaction Kinetics, Diffusion, Space Environments Effects, Contamination
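The rate-decay exponent discussed above can be estimated by linear regression in log-log space. The sketch below uses a synthetic diffusion-limited series in place of real ASTM E 1559 rate data; all values are illustrative.

```python
import numpy as np

# Estimate the outgassing rate-decay exponent n from rate(t) ~ t^n by
# linear regression in log-log space. Real ASTM E 1559 QCM mass-flux data
# would replace this synthetic diffusion-limited series (n = -1/2).
t = np.linspace(1.0, 1000.0, 200)       # hours under vacuum
rate = 5.0 * t ** -0.5                  # synthetic outgassing rate data

slope, intercept = np.polyfit(np.log(t), np.log(rate), 1)
# A slope near -0.5 suggests diffusion-limited outgassing; other
# non-integer slopes would point to reaction-limited kinetics
print(f"decay exponent = {slope:.2f}")
```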
Hurricane Irene's Impacts on the Aftershock Sequence of the 2011 Mw5.8 Virginia Earthquake
NASA Astrophysics Data System (ADS)
Meng, X.; Peng, Z.; Yang, H.; Allman, S.
2013-12-01
Recent studies have shown that typhoons can trigger shallow slow-slip events in Taiwan. However, it is unclear whether such extreme weather events could affect the occurrence of regular earthquakes as well. A good opportunity to test this hypothesis occurred in 2011 when an Mw 5.8 earthquake struck Louisa County, Virginia. This event ruptured a shallow, reverse fault. Roughly 5 days later, hurricane Irene struck the coast of Norfolk, Virginia, which is near the epicentral region of the Virginia mainshock. Because aftershocks listed in the ANSS catalog were incomplete immediately after the main shock, it is very difficult to find the genuine correlation between the seismicity rate changes and hurricane Irene. Hence, we use a recently developed waveform matched filter technique to scan through the continuous seismic data to detect small aftershocks that were previously unidentified. A mixture of 7 temporary stations from the IRIS RAMP deployment and 8 temporary stations deployed by Virginia Tech is used. The temporary stations were set up between 24 to 72 hours following the main shock around its immediate vicinity, which provides us with a unique dataset recording the majority of the aftershock sequence of an intraplate earthquake. We use 80 aftershocks identified by Chapman [2013] as template events and scan through the continuous data from 23 August 2011 through 10 September 2011. So far, we have detected 704 events using a threshold of 12 times the median absolute deviation (MAD), which is ~25 times more than the number listed in the ANSS catalog. The aftershock rate generally decayed with time as predicted by Omori's law. A statistically significant increase of seismicity rate is found when hurricane Irene passed by the epicentral region. A possible explanation is that the atmospheric pressure drop unloaded the surface, which brought the reverse faults closer to failure. However, we also identified similar fluctuations of seismicity rate changes at other times. Hence, it is still
Fine and ultrafine particle decay rates in multiple homes.
Wallace, Lance; Kindzierski, Warren; Kearney, Jill; MacNeill, Morgan; Héroux, Marie-Ève; Wheeler, Amanda J
2013-11-19
Human exposure to particles depends on particle loss mechanisms such as deposition and filtration. Fine and ultrafine particles (FP and UFP) were measured continuously over seven consecutive days during summer and winter inside 74 homes in Edmonton, Canada. Daily average air exchange rates were also measured. FP were also measured outside each home and both FP and UFP were measured at a central monitoring station. A censoring algorithm was developed to identify indoor-generated concentrations, with the remainder representing particles infiltrating from outdoors. The resulting infiltration factors were employed to determine the continuously changing background of outdoor particles infiltrating the homes. Background-corrected indoor concentrations were then used to determine rates of removal of FP and UFP following peaks due to indoor sources. About 300 FP peaks and 400 UFP peaks had high-quality (median R² value >98%) exponential decay rates lasting from 30 min to 10 h. Median (interquartile range (IQR)) decay rates for UFP were 1.26 (0.82-1.83) h⁻¹; for FP, 1.08 (0.62-1.75) h⁻¹. These total decay rates included, on average, about a 25% contribution from air exchange, suggesting that deposition and filtration accounted for the major portion of particle loss mechanisms in these homes. Models presented here identify and quantify effects of several factors on total decay rates, such as window opening behavior, home age, use of central furnace fans and kitchen and bathroom exhaust fans, use of air cleaners, use of air conditioners, and indoor-outdoor temperature differences. These findings will help identify ways to reduce exposure and risk. PMID:24143863
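The decay-rate estimation described above amounts to fitting an exponential to the background-corrected concentration after a peak: the slope of ln C(t) gives the total loss rate, and subtracting the measured air exchange rate leaves deposition plus filtration. A sketch on synthetic values (not the Edmonton data):

```python
import numpy as np

# Estimate a total particle loss rate from the decay following an indoor
# source peak: ln C(t) is linear in t with slope -k_total. Subtracting
# the measured air exchange rate leaves deposition + filtration.
t_hours = np.linspace(0.0, 3.0, 60)
c0, k_total = 8000.0, 1.2                    # particles/cm^3, 1/h
conc = c0 * np.exp(-k_total * t_hours)       # synthetic background-corrected data

slope, _ = np.polyfit(t_hours, np.log(conc), 1)
air_exchange = 0.3                           # 1/h, measured separately
deposition_filtration = -slope - air_exchange
print(f"k_total = {-slope:.2f} 1/h, "
      f"deposition+filtration = {deposition_filtration:.2f} 1/h")
```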
NASA Astrophysics Data System (ADS)
Ganas, Athanassios; Karastathis, Vassilios; Moshou, Alexandra; Valkaniotis, Sotirios; Mouzakiotis, Evangelos; Papathanassiou, George
2014-03-01
On August 7, 2013 a moderate earthquake (NOA ML = 5.1, NOA Mw = 5.4) occurred in central Kallidromon Mountain, in the Phthiotis region of central Greece. 2270 aftershocks were relocated using a modified 1-D velocity model for this area. The b-value of the aftershock sequence was b = 0.85 for a completeness magnitude of Mc = 1.7. The rate of aftershock decay was determined at p = 0.63. The spatial distribution of the aftershock sequence points towards the reactivation of a N70° ± 10°E striking normal fault at crustal depths between 8 and 13 km. A NNW-SSE cross-section imaged the activation of a steep, south dipping normal fault. A stress inversion analysis of 12 focal mechanisms showed that the minimum horizontal stress is extensional at N173°E. No primary surface ruptures were observed in the field; however, the earthquake caused severe damage in the villages of the Kallidromon area. The imaged fault strike and the orientation of the long-axis of the aftershock sequence distribution are both at a high-angle to the strike of known active faults in this area of central Greece. We interpret the Kallidromon seismic sequence as release of extensional seismic strain on secondary, steep faults inside the Fokida-Viotia crustal block.
Reduced Aftershock Productivity in Regions with Known Slow Slip Events
NASA Astrophysics Data System (ADS)
Collins, G.; Mina, A.; Richardson, E.; McGuire, J. J.
2013-12-01
Reduced aftershock activity has been observed in areas with high rates of aseismic slip, such as transform fault zones and some subduction zones. Fault conditions that could explain both of these observations include a low effective normal stress regime and/or a high temperature, semi-brittle/plastic rheology. To further investigate the possible connection between areas of aseismic slip and reduced aftershock productivity, we compared the mainshock-aftershock sequences in subduction zones where aseismic slip transients have been observed to those of adjacent (along-strike) regions where no slow slip events have been detected. Using the Advanced National Seismic System (ANSS) catalog, we counted aftershocks that occurred within 100 km and 14 days of 112 M>=5.0 slab earthquake mainshocks from January 1980 - July 2013, including 90 since January 2000, inside observed regions of detected slow slip: south central Alaska, Cascadia, the Nicoya Peninsula (Costa Rica), Guerrero (Mexico), and the North Island of New Zealand. We also compiled aftershock counts from 97 mainshocks from areas adjacent to each of these regions using the same criteria and over the same time interval. Preliminary analysis of these two datasets shows an aftershock triggering exponent (alpha in the ETAS model) of approximately 0.8, consistent with previous studies of aftershocks in a variety of tectonic settings. Aftershock productivity for both datasets is less than that of continental earthquakes. Contrasting the two datasets, aftershock productivity inside slow slip regions is lower than in adjacent areas along the same subduction zone and is comparable to that of mid-ocean ridge transform faults.
NASA Astrophysics Data System (ADS)
Peng, Z.; Meng, X.; Hong, B.; Yu, X.
2012-12-01
Large shallow earthquakes are generally followed by increased seismic activities around the mainshock rupture zone, known as "aftershocks". Whether static or dynamic triggering is responsible for triggering aftershocks is still in debate. However, aftershocks listed in standard earthquake catalogs are generally incomplete immediately after the mainshock, which may result in inaccurate estimation of seismic rate changes. Recent studies have used waveforms of existing earthquakes as templates to scan through continuous waveforms to detect potential missing aftershocks, which is termed 'matched filter technique'. However, this kind of data mining is computationally intensive, which raises new challenges when applying to large data sets with tens of thousands of templates, hundreds of seismic stations and years of continuous waveforms. The waveform matched filter technique exhibits parallelism at multiple levels, which allows us to use GPU-based computation to achieve significant acceleration. By dividing the procedure into several routines and processing them in parallel, we have achieved ~40 times speedup for one Nvidia GPU card compared to sequential CPU code, and further scaled the code to multiple GPUs. We apply this paralleled code to detect potential missing aftershocks around the 2003 Mw 6.5 San Simeon and 2004 Mw6.0 Parkfield earthquakes in Central California, and around the 2010 Mw 7.2 El Mayor-Cucapah earthquake in southern California. In all these cases, we can detect several tens of times more earthquakes immediately after the mainshocks as compared with those listed in the catalogs. These newly identified earthquakes are revealing new information about the physical mechanisms responsible for triggering aftershocks in the near field. We plan to improve our code so that it can be executed in large-scale GPU clusters. Our work has the long-term goal of developing scalable methods for seismic data analysis in the context of "Big Data" challenges.
NASA Astrophysics Data System (ADS)
Papadimitriou, Eleftheria; Gospodinov, Dragomir; Karakostas, Vassilis; Astiopoulos, Anastasios
2013-04-01
A multiplet of moderate-magnitude earthquakes (5.1 ≤ M ≤ 5.6) took place in Zakynthos Island and offshore area (central Ionian Islands, Greece) in April 2006. The activity in the first month occupied an area of almost 35 km long, striking roughly NNW-SSE, whereas aftershocks continued for several months, decaying with time but persisting at the same place. The properties of the activated structure were investigated with accurate relocated data and the available fault plane solutions of some of the stronger events. Both the distribution of seismicity and fault plane solutions show that thrusting with strike-slip motions are both present in high-angle fault segments. The segmentation of the activated structure could be attributed to the faulting complexity inherited from the regional compressive tectonics. Investigation of the spatial and temporal behavior of seismicity revealed possible triggering of adjacent fault segments that may fail individually, thus preventing coalescence in a large main rupture. In an attempt to forecast occurrence probabilities of six of the strong events (Mw ≥ 5.0), estimations were performed following the restricted epidemic-type aftershock sequence model, applied to data samples before each one of these strong events. Stochastic modeling was also used to identify "quiescence" periods before the examined aftershocks. In two out of the six cases, real aftershock rate did decrease before the next strong shock compared to the modeled one. The latter results reveal that rate decrease is not a clear precursor of strong shocks in the swarm and no quantitative information, suitable to supply probability gain, could be extracted from the data.
NASA Astrophysics Data System (ADS)
Page, M. T.; Hardebeck, J.; Felzer, K. R.; Michael, A. J.; van der Elst, N.
2015-12-01
Following a large earthquake, seismic hazard can be orders of magnitude higher than the long-term average as a result of aftershock triggering. Due to this heightened hazard, there is a demand from emergency managers and the public for rapid, authoritative, and reliable aftershock forecasts. In the past, USGS aftershock forecasts following large, global earthquakes have been released on an ad-hoc basis with inconsistent methods, and in some cases, aftershock parameters adapted from California. To remedy this, we are currently developing an automated aftershock product that will generate more accurate forecasts based on the Reasenberg and Jones (Science, 1989) method. To better capture spatial variations in aftershock productivity and decay, we estimate regional aftershock parameters for sequences within the Garcia et al. (BSSA, 2012) tectonic regions. We find that regional variations for mean aftershock productivity exceed a factor of 10. The Reasenberg and Jones method combines modified-Omori aftershock decay, Utsu productivity scaling, and the Gutenberg-Richter magnitude distribution. We additionally account for a time-dependent magnitude of completeness following large events in the catalog. We generalize the Helmstetter et al. (2005) equation for short-term aftershock incompleteness and solve for incompleteness levels in the global NEIC catalog following large mainshocks. In addition to estimating average sequence parameters within regions, we quantify the inter-sequence parameter variability. This allows for a more complete quantification of the forecast uncertainties and Bayesian updating of the forecast as sequence-specific information becomes available.
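A minimal sketch of the Reasenberg and Jones (1989) rate model is given below. The default parameter values are the commonly quoted generic California values and serve only as illustration; as the abstract notes, the USGS product estimates them per tectonic region.

```python
import numpy as np

def rj_expected_count(m, main_mag, t1, t2, a=-1.67, b=0.91, c=0.05, p=1.08):
    # Reasenberg & Jones (1989) rate of aftershocks with magnitude >= m:
    #   lambda(t, m) = 10^(a + b*(M_main - m)) * (t + c)^(-p)
    # integrated over the forecast window [t1, t2] (in days after the
    # mainshock). Defaults are illustrative generic-California values.
    productivity = 10.0 ** (a + b * (main_mag - m))
    if np.isclose(p, 1.0):
        return productivity * (np.log(t2 + c) - np.log(t1 + c))
    return productivity * ((t2 + c) ** (1 - p) - (t1 + c) ** (1 - p)) / (1 - p)

# Expected number of M>=5 aftershocks in the week after an M7 mainshock,
# and the Poisson probability of at least one such event
n = rj_expected_count(m=5.0, main_mag=7.0, t1=0.0, t2=7.0)
print(f"expected = {n:.2f}, P(>=1) = {1 - np.exp(-n):.2f}")
```

Starting the window at t1 > 0 (the elapsed time when a forecast is issued) sharply reduces the expected count, since the (t + c)^(-p) decay concentrates aftershocks in the first hours.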
Isolating the Decay Rate of Cosmological Gravitational Potential
NASA Astrophysics Data System (ADS)
Zhang, Pengjie
2006-08-01
The decay rate of the cosmological gravitational potential measures the deviation from the Einstein-de Sitter universe and can put strong constraints on the nature of dark energy and gravity. The usual method to measure this decay rate is through the integrated Sachs-Wolfe (ISW) effect-large-scale structure (LSS) cross-correlation. However, the interpretation of the measured correlation signal is complicated by galaxy bias and the matter power spectrum, which could bias and/or degrade its constraints on the nature of dark energy and gravity. By combining it with lensing-LSS cross-correlation measurements, however, the decay rate of the gravitational potential can be isolated. For any given narrow redshift bin of LSS, the ratio of the two cross-correlations directly measures (dlnDφ/dlna)H(z)/W(χ, χs), where Dφ is the linear growth factor of the gravitational potential, H(z) is the Hubble parameter at redshift z, W(χ, χs) is the lensing kernel, and χ and χs are the comoving angular diameter distances to lens and source, respectively. This method is optimal in the sense that (1) the measured quantity is essentially free of systematic errors and is only limited by cosmic variance, and (2) the measured quantity depends only on several cosmological parameters and can be predicted from first principles unambiguously. Although fundamentally limited by the inevitably large cosmic variance associated with the ISW measurements, it can still put useful independent constraints on the amount of dark energy and its equation of state. It can also provide a powerful test of modified gravity and can distinguish the Dvali-Gabadadze-Porrati model from ΛCDM at the >2.5σ confidence level.
Solvent Polarity Effect on Nonradiative Decay Rate of Thioflavin T.
Stsiapura, Vitali I; Kurhuzenkau, Siarhei A; Kuzmitsky, Valery A; Bouganov, Oleg V; Tikhomirov, Sergey A
2016-07-21
It has been established earlier that the fluorescence quantum yield of thioflavin T (ThT), a probe widely used for amyloid fibril detection, is viscosity-dependent, and that the photophysical properties of ThT are well described by the fluorescent molecular rotor model, which associates the twisted internal charge transfer (TICT) reaction with the main nonradiative decay process in the excited state of the dye. Solutions of ThT in a range of polar solvents were studied using steady-state fluorescence and sub-picosecond transient absorption spectroscopy, and we showed that the solvent effect on the nonradiative transition rate knr cannot be reduced to a dependence on viscosity only: a ∼3-fold change of knr was observed for ThT in aprotic solvents and water, which correlates with solvent polarity. Different behavior was observed in alcohol solutions, particularly in longer n-alcohols, where the TICT rate was mainly determined by rotational diffusion of ThT fragments. Quantum-chemical calculations of the S0 → S1 transition energy were performed to gain insight into the polar solvent contribution to excited-state energy stabilization. The effect of a polar solvent on the electronic energy levels of ThT was simulated by applying a homogeneous electric field according to the Onsager cavity model. The static solvent effect on the excited-state potential energy surface, where the charge transfer reaction takes place, was not sufficient to account for the experimentally observed TICT rate differences in water and aprotic solvents. On the other hand, the nonradiative decay rate of ThT in water, ethylene glycol, and aprotic solvents was found to follow the dynamics of polar solvation, knr ∼ τS^(-1), which can explain the dependence of the TICT rate on both the polarity and the viscosity of the solvents. PMID:27351358
NASA Astrophysics Data System (ADS)
Mandal, Prantik; Narsaiah, R.; Sairam, B.; Satyamurty, C.; Raju, I. P.
2006-08-01
We employed layered-model joint hypocentral determination (JHD) with station corrections to improve the locations of the early and late aftershock sequences of the 26 January 2001 Mw 7.7 Bhuj earthquake. We relocated 999 early aftershocks using data from a closely spaced combined network (National Geophysical Research Institute, India and Center for Earthquake Research Institute, USA) of 8-18 digital seismographs during 12-28 February 2001. Additionally, 350 late aftershocks were relocated using data from 4-10 digital seismographs/accelerographs during August 2002 to December 2004. These precisely relocated aftershocks (error in epicentral location < 30 m, error in focal depth < 50 m) delineate an east-west trending blind thrust (North Wagad Fault, NWF) dipping (~45°) southward, about 25 km north of the Kachchh mainland fault (KMF), as the causative fault of the 2001 Bhuj earthquake. The aftershock zone is confined to a 60-km-long and 40-km-wide region lying between the KMF to the south and the NWF to the north, extending from 2 to 45 km depth. Estimated focal depths suggest that the aftershock zone became deeper with the passage of time. The P- and S-wave station corrections determined from the JHD technique indicate that larger values (both positive and negative) characterize the central aftershock zone, which is surrounded by zones of smaller values. The station corrections vary from -0.9 to +1.1 s for the P waves and from -0.7 to +1.4 s for the S waves. The b-value and p-value of the whole aftershock sequence (2001-2004) for Mw ≥ 3 are estimated to be 0.77 ± 0.02 and 0.99 ± 0.02, respectively. The p-value is smaller than the global median of 1.1, suggesting a relatively slow decay of aftershocks, whereas the relatively low b-value (less than the average b-value of 1.0 for stable continental region earthquakes of India) suggests a relatively higher probability of larger earthquakes in Kachchh in comparison to other
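The b-values quoted in sequence studies like the one above typically come from the standard maximum-likelihood estimator; a sketch of the Aki (1965) estimator with Utsu's correction for magnitude binning (the magnitudes below are illustrative, not the Bhuj data):

```python
import math

def aki_b_value(mags, mc, dm=0.1):
    """Aki (1965) maximum-likelihood b-value with Utsu's correction for
    magnitude binning of width dm.  mags: magnitudes at or above the
    completeness magnitude mc.  Returns (b, Aki standard error)."""
    m = [x for x in mags if x >= mc]
    mean_m = sum(m) / len(m)
    b = math.log10(math.e) / (mean_m - (mc - dm / 2.0))
    return b, b / math.sqrt(len(m))
```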
On the adaptive daily forecasting of seismic aftershock hazard
NASA Astrophysics Data System (ADS)
Ebrahimian, Hossein; Jalayer, Fatemeh; Asprone, Domenico; Lombardi, Anna Maria; Marzocchi, Warner; Prota, Andrea; Manfredi, Gaetano
2013-04-01
Post-earthquake ground motion hazard assessment is a fundamental first step towards time-dependent seismic risk assessment for buildings in a post-mainshock environment. Operative forecasting of seismic aftershock hazard therefore forms a viable basis for decision-making regarding search and rescue, inspection, repair, and re-occupation after a mainshock. Arguably, an adaptive procedure for integrating the aftershock occurrence rate together with suitable ground motion prediction relations is key to Probabilistic Seismic Aftershock Hazard Assessment (PSAHA). In the short term, the seismic hazard may vary significantly (Jordan et al., 2011), particularly after the occurrence of a high-magnitude earthquake. Hence, PSAHA requires a reliable model that is able to track the time evolution of the earthquake occurrence rates, together with suitable ground motion prediction relations. This work focuses on providing adaptive daily forecasts of the mean daily rate of exceeding various spectral acceleration values (the aftershock hazard). Two well-established earthquake occurrence models suitable for daily seismicity forecasts associated with the evolution of an aftershock sequence, namely the modified Omori aftershock model and the Epidemic Type Aftershock Sequence (ETAS) model, are adopted. The parameters of the modified Omori model are updated on a daily basis through Bayesian updating, using the data provided by the ongoing aftershock sequence, following the methodology originally proposed by Jalayer et al. (2011). Bayesian updating is also used to provide sequence-based parameter estimates for a given ground motion prediction model, i.e., the aftershock events in an ongoing sequence are exploited in order to update the parameters of an existing ground motion prediction model in an adaptive manner. As a numerical example, the mean daily rates of exceeding specific spectral acceleration values are estimated adaptively for the L'Aquila 2009
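The daily Bayesian updating of modified Omori parameters described above can be sketched, for a single parameter, as a discrete-grid posterior update of the productivity K under a Poisson likelihood. This is a simplification: the cited methodology updates the full parameter set, and the numbers here are illustrative.

```python
import math

def omori_expected(K, c, p, t1, t2):
    """Expected aftershock count for a modified Omori rate K/(t+c)^p over [t1, t2] days."""
    if abs(p - 1.0) < 1e-9:
        return K * (math.log(t2 + c) - math.log(t1 + c))
    return K * ((t2 + c) ** (1 - p) - (t1 + c) ** (1 - p)) / (1 - p)

def update_K(prior, n_obs, c, p, t1, t2):
    """One Bayesian update of the Omori productivity K on a discrete grid,
    given n_obs events observed in [t1, t2] days.
    prior: dict mapping candidate K -> prior probability."""
    logpost = {}
    for K, pr in prior.items():
        lam = omori_expected(K, c, p, t1, t2)
        # Poisson log-likelihood, kept in log space to avoid overflow
        ll = -lam + n_obs * math.log(lam) - math.lgamma(n_obs + 1)
        logpost[K] = math.log(pr) + ll
    mx = max(logpost.values())
    post = {K: math.exp(v - mx) for K, v in logpost.items()}
    z = sum(post.values())
    return {K: v / z for K, v in post.items()}
```

Each new day of the sequence supplies another (count, window) pair, and the posterior is fed back in as the next day's prior.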
Aftershock process of Chu earthquake
NASA Astrophysics Data System (ADS)
Emanov, Alexey; Leskova, Ekaterina; Emanov, Aleksandr; Kolesnikov, Yury; Fateyev, Aleksandr
2010-05-01
The Chu earthquake of 27.09.2003, Ms = 7.3, occurred in the junction zone of the uplifted Chagan-Uzun block with the North-Chu ridge. The epicentral zone covers a series of contrasting geological structures of the Mountain Altai: two basins, Chu and Kurai, divided by the Chagan-Uzun block, and the mountain ranges flanking them (North-Chu, Kurai, South-Chu, Aigulak). The seismic process took place in a zone of pronounced block structure, and this is embodied in its space-time pattern. High accuracy of hypocenter determination in the epicentral zone of the Chu earthquake is provided by a local network of seismological stations (fifteen stations) and by experiments with temporary station networks in this zone (20-50 stations). The first stage of the aftershock process is connected with the Chagan-Uzun block. The second large aftershock, of 01.10.2003, radically changed the spatial pattern of the aftershock process: instead of a round area, an elongated aftershock area formed along the boundary of the Kurai basin with the North-Chu ridge. Subsequently, the process spread into the north-west corner of the Chu basin. The linear, elongated aftershock area is subdivided into four elements. The north-west element has the form of a horse tail, starting as a line in the area where the Aktru River enters the Kurai basin and branching just short of the settlement of Chibit. The aftershock plane of this element dips from the basin under the North-Chu ridge: the seismic process runs not along the basin-ridge boundary but is displaced toward the basin. The central part of this element shows mainly horizontal strike-slip displacements, while its outlying parts have pronounced vertical components of displacement. The second element stretches from the Aktru River to the Chagan-Uzun block; earthquake epicenters in plan form two curved parallel lines, and uplift displacements dominate at the corner of the Chagan-Uzun block. The third element is the boundary of the Chagan-Uzun block with the North-Chu ridge. The fourth element is formed by aftershocks lying within the Chu basin. Areal dispersal of earthquakes is
Decay Rate for Travelling Waves of a Relaxation Model
NASA Astrophysics Data System (ADS)
Liu, Hailiang; Woo, Ching Wah; Yang, Tong
1997-03-01
A relaxation model was proposed in [Shi Jin and Zhouping Xin, Comm. Pure Appl. Math. 48 (1995), 555-563] to approximate hyperbolic systems numerically under the subcharacteristic condition introduced in [T. P. Liu, Comm. Math. Phys. 108 (1987), 153-175]. The stability of travelling waves with a strong shock profile and integral zero was proved in [H. L. Liu, J. H. Wang, and T. Yang, Stability in a relaxation model with nonconvex flux, preprint, 1996; H. L. Liu and J. Wang, Asymptotic stability of travelling wave solutions of a hyperbolic system with relaxation terms, preprint, 1995] when the original system is scalar. In this paper, we study the asymptotic convergence rate of these travelling wave solutions. The analysis applies to the case of a nonconvex flux and to the case when the shock speed coincides with the characteristic speed of the state at infinity. The decay rate is obtained by the energy method and is shown to be the same as that for the viscous conservation law [A. Matsumura and K. Nishihara, Comm. Math. Phys. 165 (1994), 83-96].
Aftershocks in coherent-noise models
NASA Astrophysics Data System (ADS)
Wilke, C.; Altmeyer, S.; Martinetz, T.
1998-09-01
The decay pattern of aftershocks in the so-called 'coherent-noise' models [M.E.J. Newman, K. Sneppen, Phys. Rev. E 54 (1996) 6226] is studied in detail. Analytical and numerical results show that the probability of finding a large event at time t after an initial major event decreases as t^(-τ) for small t, with the exponent τ ranging from 0 to values well above 1. This is in contrast to Sneppen and Newman, who stated that the exponent is about 1, independent of the microscopic details of the simulation. Numerical simulations of an extended model [C. Wilke, T. Martinetz, Phys. Rev. E 56 (1997) 7128] show that the power law is a generic feature only of the original dynamics and does not necessarily appear in a more general context. Finally, the implications of the results for the modelling of earthquakes are discussed.
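For the regime τ > 1, where a power-law density is normalizable above a lower cutoff t_min, the exponent can be estimated by maximum likelihood (a Hill-type estimator, cf. Clauset et al. 2009). A sketch on synthetic data; the estimator and sample size are illustrative, not taken from the paper:

```python
import math
import random

def power_law_tau(times, t_min):
    """Maximum-likelihood exponent for waiting times with density
    proportional to t^(-tau) on [t_min, inf); valid for tau > 1."""
    xs = [t for t in times if t >= t_min]
    return 1.0 + len(xs) / sum(math.log(t / t_min) for t in xs)

# Synthetic check: inverse-CDF sampling from P(T > t) = (t / t_min)^(1 - tau)
random.seed(42)
tau_true, t_min = 1.5, 1e-3
sample = [t_min * (1.0 - random.random()) ** (-1.0 / (tau_true - 1.0))
          for _ in range(20000)]
tau_hat = power_law_tau(sample, t_min)
```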
Aftershocks and triggered events of the Great 1906 California earthquake
Meltzner, A.J.; Wald, D.J.
2003-01-01
and an M ~5.0 event under or near Santa Monica Bay, 11.3 and 31.3 hr after the San Francisco mainshock, respectively. The western Arizona event is inferred to have been triggered dynamically. In general, the largest aftershocks occurred at the ends of the 1906 rupture or away from the rupture entirely; very few significant aftershocks occurred along the mainshock rupture itself. The total number of large aftershocks was less than predicted by a generic model based on typical California mainshock-aftershock statistics, and the 1906 sequence appears to have decayed more slowly than average California sequences. Similarities can be drawn between the 1906 aftershock sequence and that of the 1857 (Mw 7.9) San Andreas fault earthquake.
Time decay rates for the equations of the compressible heat-conductive flow through porous media
NASA Astrophysics Data System (ADS)
Chen, Qing; Tan, Zhong; Wu, Guochun
2015-11-01
We consider the time decay rates of smooth solutions to the Cauchy problem for the equations of the compressible heat-conductive flow through porous media. We prove the global existence and uniqueness of the solutions by the standard energy method. Moreover, we establish the optimal decay rates of the solution as well as its higher-order spatial derivatives. And the damping effect on the time decay rates of the solution is studied in detail.
NASA Astrophysics Data System (ADS)
van der Elst, Nicholas J.; Shaw, Bruce E.
2015-07-01
Aftershocks may be driven by stress concentrations left by the main shock rupture or by elastic stress transfer to adjacent fault sections or strands. Aftershocks that occur within the initial rupture may be limited in size, because the scale of the stress concentrations should be smaller than the primary rupture itself. On the other hand, aftershocks that occur on adjacent fault segments outside the primary rupture may have no such size limitation. Here we use high-precision double-difference relocated earthquake catalogs to demonstrate that larger aftershocks occur farther away than smaller aftershocks, when measured from the centroid of early aftershock activity—a proxy for the initial rupture. Aftershocks as large as or larger than the initiating event nucleate almost exclusively in the outer regions of the aftershock zone. This observation is interpreted as a signature of elastic rebound in the earthquake catalog and can be used to improve forecasting of large aftershocks.
Effect of room air recirculation delay on the decay rate of tracer gas concentration
Kristoffersen, A.R.; Gadgil, A.J.; Lorenzetti, D.M.
2004-05-01
Tracer gas measurements are commonly used to estimate the fresh air exchange rate in a room or building. Published tracer decay methods account for fresh air supply, infiltration, and leaks in ductwork. However, the time delay associated with a ventilation system recirculating tracer back to the room also affects the decay rate. We present an analytical study of tracer gas decay in a well-mixed, mechanically-ventilated room with recirculation. The analysis shows that failing to account for delays can lead to under- or over-estimates of the fresh air supply, depending on whether the decay rate calculation includes the duct volume.
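The effect described above can be reproduced with a toy delay model: a well-mixed room in which recirculated air returns tracer after a fixed transport delay, so that a naive log-linear fit of the concentration decay no longer recovers the fresh-air exchange rate. A sketch under those assumptions; the flow rates, volume, and delay below are illustrative, not the paper's analytical solution:

```python
import math

def simulate(c0, vol, q_fresh, q_recirc, delay_s, t_end_s, dt=1.0):
    """Euler simulation of tracer concentration in a well-mixed room whose
    recirculation loop (flow q_recirc, m^3/s) returns tracer after delay_s
    seconds, while only q_fresh (m^3/s) dilutes it."""
    n_delay = int(delay_s / dt)
    hist = [c0] * (n_delay + 1)      # buffer holding C(t - delay)
    c, out = c0, [c0]
    for _ in range(int(t_end_s / dt)):
        c_delayed = hist[0]
        dc = (-(q_fresh + q_recirc) * c + q_recirc * c_delayed) / vol
        c += dc * dt
        hist = hist[1:] + [c]
        out.append(c)
    return out

def apparent_rate(series, t_end_s):
    """Naive log-linear decay rate over the whole record."""
    return -math.log(series[-1] / series[0]) / t_end_s
```

With `delay_s = 0` the fit recovers q_fresh/vol; with a nonzero delay the apparent rate falls below it, so the fresh-air supply would be underestimated if the delay were ignored.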
Forecasting Aftershocks from Multiple Earthquakes: Lessons from the Mw=7.3 2015 Nepal Earthquake
NASA Astrophysics Data System (ADS)
Jiménez, Abigail; NicBhloscaidh, Mairéad; McCloskey, John
2016-04-01
The Omori decay of aftershocks is often perturbed by large secondary events, which present particular, but not uncommon, challenges to aftershock forecasting. The Mw = 7.8, 25 April 2015, Gorkha, Nepal earthquake was followed on 12 May by the Mw = 7.3 Kodari earthquake, which superimposed its own aftershocks on the Gorkha sequence, immediately invalidating forecasts made by single-mainshock forecasting methods. The complexity of the Gorkha rupture process, in which the hypocentre and moment centroid were separated by some 75 km, provided an insurmountable challenge for other standard forecasting methods. Here, we report several modifications of existing algorithms, developed in response to the complexity of this sequence, which appear to provide a more general framework for the robust and dependable forecasting of aftershock probabilities. We suggest that these methods may be operationalised to provide a scientific underpinning for an evidence-based management system for post-earthquake crises.
USE OF GEOSTATISTICS TO PREDICT VIRUS DECAY RATES FOR DETERMINATION OF SEPTIC TANK SETBACK DISTANCES
Water samples were collected from 71 public drinking-water supply wells in the Tucson, Arizona, basin. Virus decay rates in the water samples were determined with MS-2 coliphage as a model virus. The correlations between the virus decay rates and the sample locations were shown b...
Performance of aftershock forecasts: problem and formulation
NASA Astrophysics Data System (ADS)
Jiang, C.; Wu, Z.; Li, L.
2010-12-01
The WFSD project deals with problems of earthquake physics; one of its important design aims is forecasting the ongoing aftershock activity of the Wenchuan earthquake, taking advantage of the fast response to great earthquakes. Correlation between fluid measurements and aftershocks provided heuristic clues for aftershock forecasting, prompting discussion of the performance of such 'precursory anomalies', even if only in a retrospective perspective. In statistical seismology, one of the critical issues is how to test the statistical significance of an earthquake forecast scheme against real seismic activity. Because of the special characteristics of aftershock series, and because aftershock forecasts deal with a limited spatial range and a specific temporal duration, testing the performance of aftershock forecasts has to differ from the standard tests for mainshock series. In this presentation we address and discuss possible schemes for testing the performance of aftershock forecasts, a seemingly simple but practically important issue in statistical seismology. As a simple and preliminary approach, we use an alternative form of the Receiver Operating Characteristic (ROC) test, as well as other similar tests, accounting for the properties of aftershock series by using the Omori law, the ETAS model, and/or CFS calculations. We also discuss the lessons and experiences of the Wenchuan aftershock forecasts, exploring how to make full use of present knowledge of the regularity of aftershocks to serve earthquake rescue and relief as well as post-earthquake reconstruction.
NASA Astrophysics Data System (ADS)
Myers, Stephen C.; Wallace, Terry C.; Beck, Susan L.; Silver, Paul G.; Zandt, George; Vandecar, John; Minaya, Estela
On June 9, 1994 the Mw 8.3 Bolivia earthquake (636 km depth) occurred in a region which had not experienced significant deep seismicity for at least 30 years. The mainshock and aftershocks were recorded in Bolivia on the BANJO and SEDA broadband seismic arrays and on the San Calixto Network. We used the joint hypocenter determination method to determine the relative locations of the aftershocks. We have identified no foreshocks and 89 aftershocks (m > 2.2) for the 20-day period following the mainshock. The frequency of aftershock occurrence decreased rapidly, with only one or two aftershocks per day occurring after day two. The temporal decay of aftershock activity is similar to shallow aftershock sequences, but the number of aftershocks is two orders of magnitude less. Additionally, a mb ∼6, apparently triggered earthquake occurred just 10 minutes after the mainshock, about 330 km east-southeast of the mainshock at a depth of 671 km. The aftershock sequence occurred north and east of the mainshock and extends to a depth of 665 km. The aftershocks define a slab striking N68°W and dipping 45°NE. The strike, dip, and location of the aftershock zone are consistent with this seismicity being confined within the downward extension of the subducted Nazca plate. The location and orientation of the aftershock sequence indicate that the subducted Nazca plate bends between the NNW striking zone of deep seismicity in western Brazil and the N-S striking zone of seismicity in central Bolivia. A tear in the deep slab is not necessitated by the data. A subset of the aftershock hypocenters cluster along a subhorizontal plane near the depth of the mainshock, favoring a horizontal fault plane. The horizontal dimensions of the mainshock [Beck et al., this issue; Silver et al., 1995] and of the slab defined by the aftershocks are approximately equal, indicating that the mainshock ruptured through the slab.
Matched-filter Detection of the Missing Foreshocks and Aftershocks of the 2015 Gorkha earthquake
NASA Astrophysics Data System (ADS)
Meng, L.; Huang, H.; Wang, Y.; Plasencia Linares, M. P.
2015-12-01
The 25 April 2015 Mw 7.8 Gorkha earthquake occurred at the bottom edge of the locked portion of the Main Himalayan Thrust (MHT), where the Indian plate underthrusts the Himalayan wedge. The earthquake was followed by a number of large aftershocks but was not preceded by any foreshocks within ~3 weeks according to the NEIC, ISC, and NSC catalogs. However, a large portion of aftershocks could be missed due to either contamination by the mainshock coda or small signal-to-noise ratios. It is also unclear whether there were foreshocks preceding the mainshock, whose underlying physical processes are crucial for imminent seismic hazard assessment. Here, we employ the matched-filter technique to recover the missing events from 22 April to 30 April. We collect 3-component broadband seismic waveforms recorded by one station in Nepal operated by Ev-K2-CNR, OGS Italy and eleven stations in Tibet operated by the China Earthquake Networks Center. We bandpass the seismograms between 1 and 6 Hz to retain high-frequency energy. Template waveforms with high signal-to-noise ratios (> 5) are obtained at several of the closest stations. To detect and locate events that occur around the templates, correlograms are shifted at each station with a differential travel time as a function of source location based on the CRUST1.0 model. We find ~14 times more events than are listed in the ISC catalog. Some of the detected events are confirmed by visual inspection of the waveforms at the closest stations. The preliminary results show a streak of seismicity around 2.5 days before the mainshock, to the southeast of the mainshock hypocenter. The seismicity rate was elevated above the background level during this period and subsequently decayed following the Omori law. The foreshocks appear to migrate towards the hypocenter with logarithmic time ahead of the mainshock, which indicates possible triggering of the mainshock by the propagating afterslip of the foreshocks. Immediately
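The core of the matched-filter technique is normalized cross-correlation of a template waveform against the continuous trace, with detections declared where the correlation exceeds a multiple of its median absolute deviation. A single-station, single-component sketch; the 9-MAD threshold is a common choice in matched-filter studies, not necessarily the authors', and the multi-station shifting and stacking described above are not reproduced:

```python
import numpy as np

def matched_filter(data, template, threshold_mads=9.0):
    """Slide `template` along `data`, compute the normalized cross-
    correlation (CC) at every lag, and flag lags where the CC exceeds
    threshold_mads times the median absolute deviation of the CC trace."""
    n = len(template)
    t = (template - template.mean()) / template.std()
    cc = np.empty(len(data) - n + 1)
    for i in range(len(cc)):
        w = data[i:i + n]
        s = w.std()
        cc[i] = 0.0 if s == 0 else np.dot(t, (w - w.mean()) / s) / n
    mad = np.median(np.abs(cc - np.median(cc)))
    detections = np.flatnonzero(cc > threshold_mads * mad)
    return cc, detections

# Synthetic check: bury a template in noise and recover it
rng = np.random.default_rng(0)
template = rng.standard_normal(100)
data = 0.1 * rng.standard_normal(5000)
data[2000:2100] += template
cc, det = matched_filter(data, template)
```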
Reduced Beta Decay Rates of Iron Isotopes for Supernova Physics
Nabi, Jameel-Un
2009-07-07
During the late phases of stellar evolution, beta decay on iron isotopes in the cores of massive stars plays a crucial role in the dynamics of core collapse. Beta decay contributes to maintaining a 'respectable' lepton-to-baryon ratio (Ψ_e) of the core prior to collapse, which results in a larger shock energy to power the explosion. It is indeed a fine-tuning of the parameter Ψ_e at various stages of supernova physics which can lead to a successful transformation of the collapse into an explosion. The calculations presented here might help in the fine-tuning of Ψ_e for the collapse simulators of massive stars.
Aftershock triggering by complete Coulomb stress changes
Kilb, Debi; Gomberg, J.; Bodin, P.
2002-01-01
We examine the correlation between the seismicity rate change following the 1992, M7.3, Landers, California, earthquake and characteristics of the complete Coulomb failure stress (CFS) changes (ΔCFS(t)) that this earthquake generated. At close distances the time-varying "dynamic" portion of the stress change depends on how the rupture develops temporally and spatially and arises from radiated seismic waves and from permanent coseismic fault displacement. The permanent "static" portion (ΔCFS) depends only on the final coseismic displacement. ΔCFS diminishes much more rapidly with distance than the transient, dynamic stress changes. A common interpretation of the strong correlation between ΔCFS and aftershocks is that load changes can advance or delay failure. Stress changes may also promote failure by physically altering properties of the fault or its environs. Because it is transient, ΔCFS(t) can alter the failure rate only by the latter means. We calculate both ΔCFS and the maximum positive value of ΔCFS(t) (peak ΔCFS(t)) using a reflectivity program. Input parameters are constrained by modeling Landers displacement seismograms. We quantify the correlation between maps of seismicity rate changes and maps of modeled ΔCFS and peak ΔCFS(t) and find agreement for both models. However, rupture directivity, which does not affect ΔCFS, creates larger peak ΔCFS(t) values northwest of the main shock. This asymmetry is also observed in seismicity rate changes but not in ΔCFS. This result implies that dynamic stress changes are as effective as static stress changes in triggering aftershocks and may trigger earthquakes long after the waves have passed.
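The static part of the quantity studied above follows the standard form ΔCFS = Δτ_s + μ'Δσ_n; a minimal sketch (μ' = 0.4 is a conventional illustrative choice, and the sign conventions are stated in the docstring):

```python
def coulomb_stress_change(d_shear, d_normal, mu_eff=0.4):
    """Static Coulomb failure stress change on a receiver fault:
    delta_CFS = d_shear + mu_eff * d_normal, where d_shear is the shear
    stress change resolved in the slip direction, d_normal is positive
    for unclamping, and mu_eff is an effective friction coefficient.
    Positive values bring the fault closer to failure."""
    return d_shear + mu_eff * d_normal
```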
Biomass decay rates and tissue nutrient loss in bloom and non-bloom-forming macroalgal species
NASA Astrophysics Data System (ADS)
Conover, Jessie; Green, Lindsay A.; Thornber, Carol S.
2016-09-01
Macroalgal blooms occur in shallow, low-wave energy environments and are generally dominated by fast-growing ephemeral macroalgae. When macroalgal mats undergo senescence and decompose they can cause oxygen depletion and release nutrients into the surrounding water. There are relatively few studies that examine macroalgal decomposition rates in areas impacted by macroalgal blooms. Understanding the rate of macroalgal bloom decomposition is essential to understanding the impacts of macroalgal blooms following senescence. Here, we examined the biomass, organic content, nitrogen decay rates and δ15N values for five macroalgal species (the bloom-forming Agardhiella subulata, Gracilaria vermiculophylla, Ulva compressa, and Ulva rigida and the non-bloom-forming Fucus vesiculosus) in Narragansett Bay, Rhode Island, U.S.A. using a litterbag design. Bloom-forming macroalgae had similar biomass decay rates (k = 0.34-0.51 d⁻¹) and decayed significantly faster than non-bloom-forming macroalgae (k = 0.09 d⁻¹). Biomass decay rates also varied temporally, with a significant positive correlation between biomass decay rate and water temperature for U. rigida. Tissue organic content decreased over time in all species, although A. subulata and G. vermiculophylla displayed significantly higher rates of organic content decay than U. compressa, U. rigida, and F. vesiculosus. Agardhiella subulata had a significantly higher rate of tissue nitrogen decay (k = 0.35 d⁻¹) than all other species. By contrast, only the δ15N of F. vesiculosus changed significantly over the decay period. Overall, our results indicate that bloom-forming macroalgal species decay more rapidly than non-bloom-forming species.
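Decay constants of the kind quoted above (k, in d⁻¹) follow from fitting a first-order model W(t) = W0·e^(−kt) to litterbag biomass; a sketch of such a fit by linear regression on ln W (illustrative, not the authors' exact procedure):

```python
import math

def decay_constant(days, biomass):
    """Least-squares estimate of the first-order decay constant k (d^-1)
    from a litterbag time series, assuming W(t) = W0 * exp(-k * t).
    Fits ln(W) against t and returns the negated slope."""
    n = len(days)
    y = [math.log(w) for w in biomass]
    tbar = sum(days) / n
    ybar = sum(y) / n
    slope = (sum((t - tbar) * (yi - ybar) for t, yi in zip(days, y))
             / sum((t - tbar) ** 2 for t in days))
    return -slope
```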
Calculations on decay rates of various proton emissions
NASA Astrophysics Data System (ADS)
Qian, Yibin; Ren, Zhongzhou
2016-03-01
Proton radioactivity of neutron-deficient nuclei around the dripline has been systematically studied within the deformed density-dependent model. The crucial proton-nucleus potential is constructed via the single-folding integral of the density distribution of daughter nuclei and the effective M3Y nucleon-nucleon interaction or the proton-proton Coulomb interaction. After the decay width is obtained by the modified two-potential approach, the final decay half-lives can be achieved by involving the spectroscopic factors from the relativistic mean-field (RMF) theory combined with the BCS method. Moreover, a simple formula along with only one adjusted parameter is tentatively proposed to evaluate the half-lives of proton emitters, where the introduction of nuclear deformation is somewhat discussed as well. It is found that the calculated results are in satisfactory agreement with the experimental values and consistent with other theoretical studies, indicating that the present approach can be applied to the case of proton emission. Predictions on half-lives are made for possible proton emitters, which may be useful for future experiments.
Long, Andrew M; Short, Steven M
2016-07-01
To address questions about algal virus persistence (i.e., continued existence) in the environment, rates of decay of infectivity for two viruses that infect Chlorella-like algae, ATCV-1 and CVM-1, and a virus that infects the prymnesiophyte Chrysochromulina parva, CpV-BQ1, were estimated from in situ incubations in a temperate, seasonally frozen pond. A series of experiments were conducted to estimate rates of decay of infectivity in all four seasons with incubations lasting 21 days in spring, summer and autumn, and 126 days in winter. Decay rates observed across this study were relatively low compared with previous estimates obtained for other algal viruses, and ranged from 0.012 to 11% h⁻¹. Overall, the virus CpV-BQ1 decayed most rapidly whereas ATCV-1 decayed most slowly, but for all viruses the highest decay rates were observed during the summer and the lowest were observed during the winter. Furthermore, the winter incubations revealed the ability of each virus to overwinter under ice as ATCV-1, CVM-1 and CpV-BQ1 retained up to 48%, 19% and 9% of their infectivity after 126 days, respectively. The observed resilience of algal viruses in a seasonally frozen freshwater pond provides a mechanism that can support the maintenance of viral seed banks in nature. However, the high rates of decay observed in the summer demonstrate that virus survival and therefore environmental persistence can be subject to seasonal bottlenecks. PMID:26943625
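If the reported decay rates (% h⁻¹) are read as first-order rate constants k = rate/100 per hour (an assumption; the measurement model is not specified in the abstract), they convert directly to fractions of infectivity remaining over a given interval:

```python
import math

def fraction_remaining(rate_pct_per_h, hours):
    """Fraction of infectivity left after `hours`, treating the reported
    decay rate (% h^-1) as a first-order rate constant k = rate / 100."""
    return math.exp(-rate_pct_per_h / 100.0 * hours)
```

Under this reading, a winter-low rate of roughly 0.02% h⁻¹ leaves about half the infectivity after the 126-day (3024 h) winter incubation, while the summer-high 11% h⁻¹ leaves essentially none within a day.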
NASA Astrophysics Data System (ADS)
Ogata, Yosihiko; Tsuruoka, Hiroshi
2016-03-01
Early forecasting of aftershocks has become realistic and practical because of real-time detection of hypocenters. This study illustrates a statistical procedure for monitoring aftershock sequences to detect anomalies that increase the probability gain of a significantly large aftershock or even an earthquake larger than the main shock. In particular, a significant lowering (relative quiescence) in aftershock activity below the level predicted by the Omori-Utsu formula or the epidemic-type aftershock sequence model is sometimes followed by a large earthquake in a neighboring region. As an example, we detected a significant lowering relative to the modeled rate beginning approximately 1.7 days after the main shock in the aftershock sequence of the Mw7.8 Gorkha, Nepal, earthquake of April 25, 2015. The relative quiescence lasted until the May 12, 2015, M7.3 Kodari earthquake that occurred at the eastern end of the primary aftershock zone. Space-time plots including the transformed time can indicate the local places where aftershock activity lowers (the seismicity shadow). Thus, the relative quiescence can be hypothesized to be related to stress shadowing caused by probable slow slips. In addition, the aftershock productivity of the M7.3 Kodari earthquake is approximately twice as large as that of the M7.8 main shock.
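Relative quiescence of the kind described above can be quantified by comparing the observed count in a time window against the Omori-Utsu prediction with a one-sided Poisson test; a sketch with illustrative parameters (the authors' actual procedure uses the transformed-time framework, not this simple test):

```python
import math

def omori_expected(K, c, p, t1, t2):
    """Expected count from the Omori-Utsu rate K/(t+c)^p over [t1, t2] days."""
    if abs(p - 1.0) < 1e-9:
        return K * (math.log(t2 + c) - math.log(t1 + c))
    return K * ((t2 + c) ** (1 - p) - (t1 + c) ** (1 - p)) / (1 - p)

def quiescence_p_value(n_obs, expected):
    """One-sided Poisson probability of observing n_obs or fewer events
    when `expected` are predicted; small values flag relative quiescence."""
    return sum(math.exp(-expected) * expected ** k / math.factorial(k)
               for k in range(n_obs + 1))
```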
Beta decay rates of neutron-rich nuclei
NASA Astrophysics Data System (ADS)
Marketin, Tomislav; Huther, Lutz; Petković, Jelena; Paar, Nils; Martínez-Pinedo, Gabriel
2016-06-01
Heavy element nucleosynthesis models involve various properties of thousands of nuclei in order to simulate the intricate details of the process. By necessity, as most of these nuclei cannot be studied in a controlled environment, these models must rely on the nuclear structure models for input. Of all the properties, the beta-decay half-lives are one of the most important ones due to their direct impact on the resulting abundance distributions. In this study we present the results of a large-scale calculation based on the relativistic nuclear energy density functional, where both the allowed and the first-forbidden transitions are studied in more than 5000 neutron-rich nuclei. Aside from the astrophysical applications, the results of this calculation can also be employed in the modeling of the electron and antineutrino spectra from nuclear reactors.
Decay rates and electromagnetic transitions of heavy quarkonia
NASA Astrophysics Data System (ADS)
Pandya, J. N.; Soni, N. R.; Devlani, N.; Rai, A. K.
2015-12-01
The electromagnetic radiative transition widths for heavy quarkonia, as well as digamma and digluon decay widths, are computed in the framework of the extended harmonic confinement model (ERHM) and Coulomb plus power potential (CPPν) with varying potential index ν. The outcome is compared with the values obtained from other theoretical models and experimental results. While the mass spectra, digamma and digluon widths from ERHM as well as CPPν=1 are in good agreement with experimental data, the electromagnetic transition widths span over a wide range for the potential models considered here making it difficult to prefer a particular model over the others because of the lack of experimental data for most transition widths. Supported by University Grants Commission, India for Major Research Project F. No.42-775/2013(SR) (J N Pandya) and Dept. of Science and Technology, India, under SERC fast track scheme SR/FTP/PS-152/2012 (A K Rai)
Complex Configuration Effects on β-DECAY Rates
NASA Astrophysics Data System (ADS)
Severyukhin, A. P.; Voronov, V. V.; Borzov, I. N.; Arsenyev, N. N.; van Giai, Nguyen
2015-06-01
Starting from a Skyrme interaction, the Gamow-Teller (GT) strength in the Qβ⁻ window has been studied within a microscopic model including the 2p-2h configuration effects. The suggested approach enables one to perform the calculations in very large configuration spaces. As a result, the β⁻-decay half-life is decreased due to the 2p-2h fragmentation of GT states. Using the Skyrme interaction SGII with tensor terms, we study this reduction effect for the neutron-rich N = 82 isotones below the doubly magic nucleus 132Sn. Predictions are given for 126Ru and 128Pd in comparison to 130Cd, which is the r-process waiting-point nucleus.
Beta decay rates of neutron-rich nuclei
Marketin, Tomislav; Huther, Lutz; Martínez-Pinedo, Gabriel
2015-10-15
Heavy element nucleosynthesis models involve various properties of thousands of nuclei in order to simulate the intricate details of the process. By necessity, as most of these nuclei cannot be studied in a controlled environment, these models must rely on the nuclear structure models for input. Of all the properties, the beta-decay half-lives are one of the most important ones due to their direct impact on the resulting abundance distributions. Currently, a single large-scale calculation is available based on a QRPA calculation with a schematic interaction on top of the Finite Range Droplet Model. In this study we present the results of a large-scale calculation based on the relativistic nuclear energy density functional, where both the allowed and the first-forbidden transitions are studied in more than 5000 neutron-rich nuclei.
Continuum-state and bound-state β⁻-decay rates of the neutron
NASA Astrophysics Data System (ADS)
Faber, M.; Ivanov, A. N.; Ivanova, V. A.; Marton, J.; Pitschmann, M.; Serebrov, A. P.; Troitskaya, N. I.; Wellenzohn, M.
2009-09-01
For the β⁻ decay of the neutron we analyze the continuum-state and bound-state decay modes. We calculate the decay rates, the electron energy spectrum for the continuum-state decay mode, and angular distributions of the decay probabilities for the continuum-state and bound-state decay modes. The theoretical results are obtained for the new value of the axial coupling constant gA = 1.2750(9), obtained recently by H. Abele [Prog. Part. Nucl. Phys. 60, 1 (2008)] from the fit of the experimental data on the coefficient of the correlation between the neutron spin and the electron momentum in the electron energy spectrum of the continuum-state decay mode. We take into account the contribution of radiative corrections and the scalar and tensor weak couplings. The calculated angular distributions of the probabilities of the bound-state decay modes of the polarized neutron can be used for experimental measurements of the bound-state β⁻ decays into the hyperfine states with total angular momentum F = 1 and of the scalar and tensor weak coupling constants.
Prolonged decay of molecular rate estimates for metazoan mitochondrial DNA
Ho, Simon Y.W.
2015-01-01
Evolutionary timescales can be estimated from genetic data using the molecular clock, often calibrated by fossil or geological evidence. However, estimates of molecular rates in mitochondrial DNA appear to scale negatively with the age of the clock calibration. Although such a pattern has been observed in a limited range of data sets, it has not been studied on a large scale in metazoans. In addition, there is uncertainty over the temporal extent of the time-dependent pattern in rate estimates. Here we present a meta-analysis of 239 rate estimates from metazoans, representing a range of timescales and taxonomic groups. We found evidence of time-dependent rates in both coding and non-coding mitochondrial markers, in every group of animals that we studied. The negative relationship between the estimated rate and time persisted across a much wider range of calibration times than previously suggested. This indicates that, over long time frames, purifying selection gives way to mutational saturation as the main driver of time-dependent biases in rate estimates. The results of our study stress the importance of accounting for time-dependent biases in estimating mitochondrial rates regardless of the timescale over which they are inferred. PMID:25780773
Seasonal variations of decay rate measurement data and their interpretation.
Schrader, Heinrich
2016-08-01
Measurement data of long-lived radionuclides, for example, 85Kr, 90Sr, 108mAg, 133Ba, 152Eu, 154Eu and 226Ra, and particularly the relative residuals of fitted raw data from current measurements of ionization chambers for half-life determination, show small periodic seasonal variations with amplitudes of about 0.15%. The interpretation of these fluctuations is controversial: it is disputed whether the observed effect is produced by some interaction with the radionuclides themselves or is an artifact of the measuring chain. At the origin of such a discussion is the exponential decay law of radioactive substances used for data fitting, one of the fundamentals of nuclear physics. Some groups of physicists use statistical methods and analyze correlations of the measurement data with various parameters, for example, the Earth-Sun distance, as a basis of interpretation. In this article, data measured at the Physikalisch-Technische Bundesanstalt and published earlier are the subject of a correlation analysis using the corresponding time series of data with varying measurement conditions. An overview of these measurement conditions producing instrument instabilities is given and causality relations are discussed. The resulting correlation coefficients for various series of the same radionuclide using similar measurement conditions are of the order of 0.7, which indicates a high correlation; for series of the same radionuclide using different measurement conditions and changes of the measuring chain they are of the order of -0.2 or even lower, which indicates an anti-correlation. These results provide strong arguments that the observed seasonal variations are caused by the measuring chain and, in particular, by the type of measuring electronics used. PMID:27258217
Hawking-Moss Bounces and Vacuum Decay Rates
Weinberg, Erick J.
2007-06-22
The conventional interpretation of the Hawking-Moss (HM) solution implies a transition rate between vacua that depends only on the values of the potential in the initial vacuum and at the top of a potential barrier, leading to the implausible conclusion that transitions to distant vacua can be as likely as those to a nearby one. I analyze this issue using a nongravitational example with analogous properties. I show that such HM bounces do not give reliable rate calculations, but are instead related to the probability of finding a quasistable configuration at a local potential maximum.
WEST NILE VIRUS ANTIBODY DECAY RATE IN FREE-RANGING BIRDS.
McKee, Eileen M; Walker, Edward D; Anderson, Tavis K; Kitron, Uriel D; Brawn, Jeffrey D; Krebs, Bethany L; Newman, Christina; Ruiz, Marilyn O; Levine, Rebecca S; Carrington, Mary E; McLean, Robert G; Goldberg, Tony L; Hamer, Gabriel L
2015-07-01
Antibody duration, following a humoral immune response to West Nile virus (WNV) infection, is poorly understood in free-ranging avian hosts. Quantifying antibody decay rate is important for interpreting serologic results and for understanding the potential for birds to serorevert and become susceptible again. We sampled free-ranging birds in Chicago, Illinois, US, from 2005 to 2011 and Atlanta, Georgia, US, from 2010 to 2012 to examine the dynamics of antibody decay following natural WNV infection. Using serial dilutions in a blocking enzyme-linked immunosorbent assay, we quantified WNV antibody titer in repeated blood samples from individual birds over time. We quantified antibody decay rates of 0.198 natural log units per month for 23 Northern Cardinals (Cardinalis cardinalis) and 0.178 natural log units per month for 24 individuals of other bird species. Our results suggest that juveniles had a higher rate of antibody decay than adults, which is consistent with nonlinear antibody decay at different times postexposure. Overall, most birds had undetectable titers 2 yr postexposure. Nonuniform WNV antibody decay rates in free-ranging birds underscore the need for cautious interpretation of avian serology results in the context of arbovirus surveillance and epidemiology. PMID:25919465
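With a constant decay rate on the natural-log titer scale, the time for a titer to fall below an assay's detection limit is a linear extrapolation. In the sketch below, only the 0.198 and 0.178 ln-units/month rates come from the study; the starting titer and detection threshold are hypothetical.

```python
def months_to_threshold(initial_ln_titer, threshold_ln_titer, decay_per_month):
    """Months until a log-scale antibody titer falls below a detection
    threshold, assuming a constant decay rate in ln units per month."""
    return (initial_ln_titer - threshold_ln_titer) / decay_per_month

# Hypothetical starting titer (ln scale 6.0) and detection limit (1.5):
print(months_to_threshold(6.0, 1.5, 0.198))  # ≈ 22.7 months, cardinals
print(months_to_threshold(6.0, 1.5, 0.178))  # ≈ 25.3 months, other species
```

Under these assumed titers, both rates put seroreversion near the 2-year mark, consistent with the observation that most birds had undetectable titers 2 yr postexposure.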
Vitamin C: Rate of Decay and Stability Characteristics
ERIC Educational Resources Information Center
Kakis, Frederic J.; Rossi, Carl J.
1974-01-01
Describes an experiment designed to provide the opportunity for studying some of the parameters affecting the stability of Vitamin C in various environments, and to acquaint the student with an experimental procedure for studying simple reaction kinetics and the calculations of specific rate constants. (Author/JR)
Spectra and decay rates of bb¯ meson using Gaussian wave function
NASA Astrophysics Data System (ADS)
Rai, Ajay Kumar; Devlani, Nayneshkumar; Kher, Virendrasinh H.
2015-05-01
Using a Gaussian wave function, the mass spectra and decay rates of the bb¯ meson are investigated in the framework of a phenomenological quark-antiquark potential (Coulomb plus power) model incorporating relativistic corrections to the kinetic energy term. The spin-spin, spin-orbit and tensor interactions are employed to obtain the pseudoscalar and vector meson masses. The decay constants (fP/V) are computed using the wave function at the origin. The di-gamma and di-leptonic decays of the bb¯ meson are investigated using the Van Royen-Weisskopf formula as well as the NRQCD formalism.
NASA Astrophysics Data System (ADS)
Michas, Georgios; Vallianatos, Filippos; Karakostas, Vassilios; Papadimitriou, Eleftheria; Sammonds, Peter
2014-05-01
The Efpalion aftershock sequence occurred in January 2010, when an M=5.5 earthquake was followed four days later by another strong event (M=5.4) and numerous aftershocks (Karakostas et al., 2012). This activity interrupted a 15-year period of low to moderate earthquake occurrence in the Corinth rift, where the last major event was the 1995 Aigion earthquake (M=6.2). Coulomb stress analysis performed in previous studies (Karakostas et al., 2012; Sokos et al., 2012; Ganas et al., 2013) indicated that the second major event and most of the aftershocks were triggered due to stress transfer. The aftershock production rate decays as a power law with time according to the modified Omori law (Utsu et al., 1995), with an exponent larger than one for the first four days, while after the occurrence of the second strong event the exponent turns to unity. We consider the earthquake sequence as a point process in time and space and study its spatiotemporal evolution considering a Continuous Time Random Walk (CTRW) model with a joint probability density function of inter-event times and jumps between successive earthquakes (Metzler and Klafter, 2000). The jump length distribution exhibits finite variance, whereas inter-event times scale as a q-generalized gamma distribution (Michas et al., 2013) with a long power-law tail. These properties are indicative of a subdiffusive process in terms of CTRW. Additionally, the mean square displacement of aftershocks is constant with time after the occurrence of the first event, while it changes to a power law with exponent close to 0.15 after the second major event, illustrating a slow diffusive process. During the first four days aftershocks cluster around the epicentral area of the second major event, while after that, and taking the second event as a reference, the aftershock zone migrates slowly with time to the west, near the epicentral area of the first event. This process is much slower than what would be expected from normal diffusion.
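A rough way to recover the Omori exponent p from a catalogue of aftershock times is to bin the events in log-spaced time windows and fit log rate against log time; the slope is -p. This is a simplified sketch (ordinary least squares rather than the maximum-likelihood fits normally used, and the function name is ours):

```python
import math

def fit_omori_p(times, c=0.0, n_bins=12):
    """Estimate the modified-Omori exponent p from aftershock occurrence
    times (days since the main shock) by least squares in log-log space."""
    t_min = min(t for t in times if t > 0)
    t_max = max(times)
    # log-spaced bin edges between the first and last event
    edges = [t_min * (t_max / t_min) ** (i / n_bins) for i in range(n_bins + 1)]
    xs, ys = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        n = sum(lo <= t < hi for t in times)
        if n == 0:
            continue
        xs.append(math.log(0.5 * (lo + hi) + c))
        ys.append(math.log(n / (hi - lo)))  # empirical rate in the bin
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return -slope  # rate ~ (t + c)^(-p)  =>  slope = -p
```

Feeding it synthetic power-law occurrence times approximately recovers the generating exponent; real catalogues additionally require handling the early-time flattening controlled by c and the magnitude-of-completeness cutoff.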
Decay rates of charmonia within a quark-antiquark confining potential
NASA Astrophysics Data System (ADS)
Smruti, Patel; Vinodkumar, P. C.; Shashank, Bhatnagar
2016-05-01
In this work, we investigate the spectroscopy and decay rates of charmonia within the framework of the non-relativistic Schrödinger equation by employing an approximate quark-antiquark potential. The spin hyperfine, spin-orbit and tensor components of the one-gluon exchange interaction are employed to compute the spectroscopy of the excited S states and a few low-lying P and D waves. The resultant wave functions at zero inter-quark separation as well as at some finite separations are employed to predict the di-gamma, di-leptonic and di-gluon decay rates of charmonia states using the conventional Van Royen-Weisskopf formula. The di-gamma and di-leptonic decay widths are also computed by incorporating the relativistic corrections of order v^4 within the NRQCD formalism. We have observed that the NRQCD predictions with their matrix elements computed at finite radial separation yield results in better agreement with experimental values for both di-gamma and di-leptonic decays. The same scenario is seen when the di-gamma and di-leptonic decay widths are computed with the Van Royen-Weisskopf formula. It is also observed that the di-gluon decay widths with the inclusion of binding-energy effects are in better agreement with the experimental data available for the 1S-2S and 1P states. The di-gluon decay widths of the 3S and 2P waves are also predicted. Thus, the present study of decay rates clearly indicates the importance of binding-energy effects. Supported by Major Research Project NO. F. 40-457/2011(SR), UGC, India
Neglected role of fungal community composition in explaining variation in wood decay rates.
Van der Wal, A; Ottosson, E; De Boer, W
2015-01-01
Decomposition of wood is an important component of global carbon cycling. Most wood decomposition models are based on tree characteristics and environmental conditions; however, they do not include community dynamics of fungi that are the major wood decomposers. We examined the factors explaining variation in sapwood decay in oak tree stumps two and five years after cutting. Wood moisture content was significantly correlated with sapwood decay in younger stumps, whereas ITS-based composition and species richness of the fungal community were the best predictors for mass loss in the older stumps. Co-occurrence analysis showed that, in freshly cut trees and in younger stumps, fungal communities were nonrandomly structured, whereas fungal communities in old stumps could not be separated from a randomly assembled community. These results indicate that the most important factors explaining variation in wood decay rates can change over time and that the strength of competitive interactions between fungi in decaying tree stumps may level off with increased wood decay. Our field analysis further suggests that ascomycetes may have a prominent role in wood decay, but their wood-degrading abilities need to be further tested under controlled conditions. The next challenging step will be to integrate fungal community assembly processes in wood decay models to improve carbon sequestration estimates of forests. PMID:26236897
Aftershock patterns and main shock faulting
Mendoza, C.; Hartzell, S.H.
1988-01-01
We have compared aftershock patterns following several moderate to large earthquakes with the corresponding distributions of coseismic slip obtained from previous analyses of the recorded strong ground motion and teleseismic waveforms. Our results are consistent with a hypothesis of aftershock occurrence that requires a secondary redistribution of stress following primary failure on the earthquake fault. Aftershocks following earthquakes examined in this study occur mostly outside of or near the edges of the source areas indicated by the patterns of main shock slip. The spatial distribution of aftershocks reflects either a continuation of slip in the outer regions of the areas of maximum coseismic displacement or the activation of subsidiary faults within the volume surrounding the boundaries of main shock rupture. -from Authors
Nonlinear Stability Analysis with Decay Rates of Two Classes of Waves for Conservation Laws.
NASA Astrophysics Data System (ADS)
Zingano, Paulo Ricardo
1990-08-01
We study in this work the decay rate of disturbances to certain elementary waves for conservation laws when their initial profile is perturbed. In the first problem, rarefaction waves for the scalar equation u_t + f(u)_x = u_{xx}, f convex, are considered, and we show that disturbances decay in the L^2 norm as O(t^{-1/4+μ}), for μ > 0 arbitrarily small, provided they belong to the space L^1 ∩ H^1 initially. The second problem concerns the stability of weak shock waves of a certain class of hyperbolic systems with relaxation; disturbances in this case are shown to decay in L^2 at certain algebraic rates which depend on how fast they die off as x → ±∞ at initial time, provided they are sufficiently weak. This behavior is due to the compressibility of such waves with respect to the dynamic characteristics governing the propagation of disturbances, a basic feature of shock waves. This result is in vivid contrast to the corresponding one for rarefaction waves, where the decay is ultimately governed by diffusion processes which impose a limit on the overall rate. In both problems treated here, the analysis is based on the derivation of suitable energy inequalities with appropriate decay rates.
Sensitivity studies for the main r process: β-decay rates
Mumpower, M.; Cass, J.; Passucci, G.; Aprahamian, A.; Surman, R.
2014-04-15
The pattern of isotopic abundances produced in rapid neutron capture, or r-process, nucleosynthesis is sensitive to the nuclear physics properties of thousands of unstable neutron-rich nuclear species that participate in the process. It has long been recognized that some of the most influential pieces of nuclear data for r-process simulations are β-decay lifetimes. In light of experimental advances that have pushed measurement capabilities closer to the classic r-process path, we revisit the role of individual β-decay rates in the r process. We perform β-decay rate sensitivity studies for a main (A > 120) r process in a range of potential astrophysical scenarios. We study the influence of individual rates during (n, γ)-(γ, n) equilibrium and during the post-equilibrium phase where material moves back toward stability. We confirm the widely accepted view that the most important lifetimes are those of nuclei along the r-process path for each astrophysical scenario considered. However, we find in addition that individual β-decay rates continue to shape the final abundance pattern through the post-equilibrium phase, for as long as neutron capture competes with β decay. Many of the lifetimes important for this phase of the r process are within current or near-future experimental reach.
Coordinate-dependent diffusion coefficients: Decay rate in open quantum systems
Sargsyan, V. V.; Palchikov, Yu. V.; Antonenko, N. V.; Kanokov, Z.; Adamian, G. G.
2007-06-15
Based on a master equation for the reduced density matrix of an open quantum collective system, the influence of coordinate-dependent microscopical diffusion coefficients on the decay rate from a metastable state is treated. For various frictions and temperatures larger than a crossover temperature, the quasistationary decay rates obtained with the coordinate-dependent microscopical set of diffusion coefficients are compared with those obtained with the coordinate-independent microscopical set of diffusion coefficients and with coordinate-independent and -dependent phenomenological sets of diffusion coefficients. Neglecting the coordinate dependence of diffusion coefficients, one can strongly overestimate or underestimate the decay rate at low temperature. The coordinate-dependent phenomenological diffusion coefficients in momentum are shown to be suitable for applications.
Beyond the bucket: testing the effect of experimental design on rate and sequence of decay
NASA Astrophysics Data System (ADS)
Gabbott, Sarah; Murdock, Duncan; Purnell, Mark
2016-04-01
Experimental decay has revealed the potential for profound biases in our interpretations of exceptionally preserved fossils, with non-random sequences of character loss distorting the position of fossil taxa in phylogenetic trees. By characterising these sequences we can rewind this distortion and make better-informed interpretations of the affinity of enigmatic fossil taxa. Equally, the rate of character loss is crucial for estimating the preservation potential of phylogenetically informative characters, and for revealing the mechanisms of preservation themselves. However, experimental decay has been criticised for poorly modeling 'real' conditions, and dismissed as unsophisticated 'bucket science'. Here we test the effect of differing experimental parameters on the rate and sequence of decay. By doing so, we can test the assumption that the results of decay experiments are applicable to informing interpretations of exceptionally preserved fossils from diverse preservational settings. The results of our experiments demonstrate the validity of using the sequence of character loss as a phylogenetic tool, and shed light on the extent to which environment must be considered before making decay-informed interpretations, or reconstructing taphonomic pathways. With careful consideration of experimental design, driven by testable hypotheses, decay experiments are robust and informative - experimental taphonomy needn't kick the bucket just yet.
Aftershocks in a time-to-failure slider-block model
NASA Astrophysics Data System (ADS)
Gran, J. D.; Rundle, J. B.; Turcotte, D. L.
2011-12-01
Several earthquake models have been used to study the mechanisms that lead to a Gutenberg-Richter distribution of earthquake magnitudes. One such model is the cellular automaton (CA) slider-block model. Events (earthquakes) in this model are initiated by a loader plate increasing stress uniformly on all blocks until a single block reaches a static friction failure threshold, which can trigger a cascade of block failures. This model, although useful, misses a key part of the earthquake process, i.e. aftershocks. Aftershocks occur within a short time period following the main shock and are due to stress redistributions within the earth's crust rather than movement of the interacting tectonic plates. We describe here a modified version of the CA slider-block model, which includes a time-to-failure mode that allows blocks to fail below the static threshold value if enough time passes. This new feature allows multiple independent events to occur during a single plate update. We measure time in Monte Carlo steps and have tested various functions for the time-to-failure to understand the connection between the time-to-failure and Omori's law for the frequency of aftershocks following the main shock. After each loader plate update, we see a main shock followed in time by multiple aftershocks that decay in magnitude. We believe this to be another mechanism for the occurrence of aftershocks, in addition to that found by Dieterich, JGR (1994).
NASA Astrophysics Data System (ADS)
Labak, P.; Ford, S. R.; Sweeney, J. J.; Smith, A. T.; Spivak, A.
2011-12-01
One of the four elements of the CTBT verification regime is on-site inspection (OSI). Since the sole purpose of an OSI shall be to clarify whether a nuclear weapon test explosion or any other nuclear explosion has been carried out, inspection activities can be conducted and techniques used in order to collect facts to support the findings provided in inspection reports. Passive seismological monitoring, realized by the seismic aftershock monitoring system (SAMS), is one of the treaty-allowed techniques during an OSI. Effective planning and deployment of SAMS during the early stages of an OSI is required due to the nature of the possible recorded events and due to the treaty-related constraints on the size of the inspection area, the size of the inspection team, and the length of an inspection. A method which may help in planning the SAMS deployment is presented. An estimate of aftershock activity due to a theoretical underground nuclear explosion is produced using a simple aftershock rate model (Ford and Walter, 2010). The model is developed with data from the Nevada Test Site and the Semipalatinsk Test Site, which we take to represent soft- and hard-rock testing environments, respectively. Estimates of the expected magnitude and number of aftershocks are calculated using the models for different testing and inspection scenarios. These estimates can help to plan the SAMS deployment for an OSI by giving a probabilistic assessment of potential aftershocks in the Inspection Area (IA). The aftershock assessment, combined with an estimate of the background seismicity in the IA and an empirically derived map of threshold magnitude for the SAMS network, could aid the OSI team in reporting. We applied the hard-rock model to a scenario similar to the Integrated Field Exercise 2008 deployment in Kazakhstan and produced an estimate of possible recorded aftershock activity.
Analysis of Mw 7.2 2014 Molucca Sea earthquake and its aftershocks
NASA Astrophysics Data System (ADS)
Shiddiqi, Hasbi Ash; Widiyantoro, Sri; Nugraha, Andri Dian; Ramdhan, Mohamad; Wiyono, Samsul Hadi; Wandono, Wandono
2016-05-01
A Mw 7.2 earthquake struck an area in the Molucca Sea region on November 15, 2014, and was followed by more than 300 aftershocks until the end of December 2014. This earthquake was the second largest event in the Molucca Sea during the last decade and was well recorded by local networks. Although the seismicity rate of the aftershocks was declining at the end of 2014, several significant earthquakes with magnitude (Mw) larger than five still occurred from January to May 2015 within the vicinity of the mainshock location. In this study, we investigated the earthquake process and its relation to the increased seismicity in the Molucca Sea within six months after the earthquake. We utilized a teleseismic double-difference hypocenter relocation method using local, regional, and teleseismic direct body-wave arrival times of 514 earthquakes from the time of mainshock occurrence to May 2015. Furthermore, we analyzed the focal mechanism solutions from the National Research Institute for Earth Science and Disaster Prevention (NIED), Japan. From our results, we observed that aftershocks propagated along the NNE-SSW direction within a 100 km fault segment length of the Mayu Ridge. The highest number of aftershocks was located in the SSW direction of the main event. The aftershocks were terminated at around 60 km depth, which may represent the location of the top of the Molucca Sea Plate (MSP). Between January and May 2015, several significant earthquakes propagated westward and extended to the Molucca Sea slab. From the focal mechanism catalog, we found that the mainshock mechanism was reverse, with strike 192° and dip 55°. While most of the large aftershock mechanisms were consistent with the main event, several aftershocks had reverse, oblique mechanisms. The stress inversion result from focal mechanism data revealed that the maximum stress direction was SE and was not perpendicular to the fault direction. We suggest that the non-perpendicular maximum stress caused several
An Explosion Aftershock Model with Application to On-Site Inspection
NASA Astrophysics Data System (ADS)
Ford, Sean R.; Labak, Peter
2016-01-01
An estimate of aftershock activity due to a theoretical underground nuclear explosion is produced using an aftershock rate model. The model is developed with data from the Nevada National Security Site, formerly known as the Nevada Test Site, and the Semipalatinsk Test Site, which we take to represent soft-rock and hard-rock testing environments, respectively. Estimates of expected magnitude and number of aftershocks are calculated using the models for different testing and inspection scenarios. These estimates can help inform the Seismic Aftershock Monitoring System (SAMS) deployment in a potential Comprehensive Test Ban Treaty On-Site Inspection (OSI), by giving the OSI team a probabilistic assessment of potential aftershocks in the Inspection Area (IA). The aftershock assessment, combined with an estimate of the background seismicity in the IA and an empirically derived map of threshold magnitude for the SAMS network, could aid the OSI team in reporting. We apply the hard-rock model to an M5 event and combine it with the very sensitive detection threshold for OSI sensors to show that tens of events per day are expected up to a month after an explosion measured several kilometers away.
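The leverage of a very sensitive network comes from Gutenberg-Richter scaling: each unit drop in the detection threshold multiplies the expected count by a factor of 10^b. A minimal sketch (the b-value and counts below are placeholders, not the calibrated NNSS/STS model):

```python
def n_above(m, n_ref, m_ref, b=1.0):
    """Gutenberg-Richter scaling: expected number of events with
    magnitude >= m, given n_ref events with magnitude >= m_ref."""
    return n_ref * 10 ** (-b * (m - m_ref))

# With, say, 5 aftershocks/day above M0 a month after an explosion,
# a network detecting down to M-2 would see ~500 events/day (b = 1):
print(n_above(-2.0, n_ref=5.0, m_ref=0.0))  # 500.0
```

This is why even a modest post-explosion sequence can yield tens of detectable events per day once the SAMS threshold magnitude drops well below the regional network's.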
Non-Markovian dynamics of quantum systems. II. Decay rate, capture, and pure states
Palchikov, Yu.V.; Antonenko, N.V.; Kanokov, Z.; Adamian, G.G.; Scheid, W.
2005-01-01
On the basis of a master equation for the reduced density matrix of open quantum systems, we study the influence of time-dependent friction and diffusion coefficients on the decay rate from a potential well and the capture probability into a potential well. Taking into account the mixed diffusion coefficient D_{qp}, the quasistationary decay rates are compared with the analytically derived Kramers-type formulas for different temperatures and frictions. The diffusion coefficients supplying the purity of states are derived for a non-Markovian dynamics.
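For reference, the Kramers-type expression such decay rates are compared against, in its standard spatial-diffusion (medium-to-high friction) form with symbols as usually defined; this is the textbook formula, not the authors' exact parametrization:

```python
import math

def kramers_rate(omega_0, omega_b, gamma, barrier, kT):
    """Kramers escape rate in the medium-to-high friction regime:
    k = (omega_0 / (2 pi omega_b)) * (sqrt(gamma^2/4 + omega_b^2) - gamma/2)
        * exp(-barrier / kT)
    omega_0: well frequency, omega_b: barrier frequency, gamma: friction."""
    prefactor = math.sqrt(gamma ** 2 / 4 + omega_b ** 2) - gamma / 2
    return omega_0 / (2 * math.pi * omega_b) * prefactor * math.exp(-barrier / kT)

# gamma -> 0 recovers the transition-state value omega_0/(2 pi) * exp(-E/kT);
# increasing friction suppresses the rate:
print(kramers_rate(1.0, 1.0, 0.0, 2.0, 1.0))
print(kramers_rate(1.0, 1.0, 5.0, 2.0, 1.0))
```

The comparison in the abstract is precisely between rates of this analytic form and the quasistationary rates extracted numerically from the master equation.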
NASA Astrophysics Data System (ADS)
Schaefer, Andreas; Daniell, James; Khazai, Bijan; Wenzel, Friedemann
2016-04-01
The occurrence of a strong earthquake often marks the start of a long-lasting seismic sequence. Strong earthquakes are generally followed by many aftershocks or even by strong subsequently triggered ruptures. The Nepal 2015 earthquake sequence is one of the most recent examples where aftershocks significantly contributed to human and economic losses. In addition, rumours about upcoming mega-earthquakes, false predictions and on-going cycles of aftershocks induced a psychological burden on the society, which caused panic, additional casualties and prevented people from returning to normal life. This study shows the current phase of development of an operationalised aftershock intensity index, which will contribute to the mitigation of aftershock hazard. Hereby, various methods of earthquake forecasting and seismic risk assessment are utilised and an integration of the inherent aftershock intensity is performed. A spatio-temporal analysis of past earthquake clustering provides first-hand data about the nature of aftershock occurrence. Epidemic methods can additionally provide time-dependent variation indices of the cascading effects of aftershock generation. The aftershock hazard is often combined with the potential for significant losses through the vulnerability of structural systems and population. A historical database of aftershock socioeconomic effects from CATDAT has been used in order to calibrate the index based on observed impacts of historical events and their aftershocks. In addition, analytical analyses of the cyclic behaviour and fragility functions of various building typologies are explored. The integration of many different probabilistic computation methods will provide a combined index parameter which can then be transformed into an easy-to-read spatio-temporal intensity index. The index provides daily updated information about the probability of the inherent seismic risk of aftershocks by providing a scalable scheme for different aftershock
Vázquez, J. L.
2010-01-01
The goal of this paper is to state the optimal decay rate for solutions of the nonlinear fast diffusion equation and, in self-similar variables, the optimal convergence rates to Barenblatt self-similar profiles and their generalizations. It relies on the identification of the optimal constants in some related Hardy–Poincaré inequalities and concludes a long series of papers devoted to generalized entropies, functional inequalities, and rates for nonlinear diffusion equations. PMID:20823259
Disproof of solar influence on the decay rates of 90Sr/90Y
NASA Astrophysics Data System (ADS)
Kossert, Karsten; Nähle, Ole J.
2015-09-01
A custom-built liquid scintillation counter was used for long-term measurements of 90Sr/90Y sources. The detector system is equipped with an automated sample changer and three photomultiplier tubes, which makes the application of the triple-to-double coincidence ratio (TDCR) method possible. After decay correction, the measured decay rates were found to be stable and no annual oscillation could be observed. Thus, the findings of this work are in strong contradiction to those of Parkhomov (2011) who reported on annual oscillations when measuring 90Sr/90Y with a Geiger-Müller counter. Sturrock et al. (2012) carried out a more detailed analysis of the experimental data from Parkhomov and claimed to have found correlations between the decay rates and processes inside the Sun. These findings are questionable, since they are based on inappropriate experimental data as is demonstrated in this work. A frequency analysis of our activity data does not show any significant periodicity.
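The decay correction referred to above can be illustrated with a short sketch. This is not the authors' analysis pipeline; the half-life value and sample numbers below are generic assumptions. After removing the known exponential decay of 90Sr, a stable source should leave a flat residual, so any annual modulation would stand out against it.

```python
import math

HALF_LIFE_DAYS = 28.8 * 365.25          # 90Sr half-life, ~28.8 years (assumed)
LAM = math.log(2) / HALF_LIFE_DAYS      # decay constant per day

def decay_corrected(rate, t_days):
    """Refer a measured count rate back to t = 0 by removing the
    expected exponential decay."""
    return rate * math.exp(LAM * t_days)

# A purely exponential (stable) source sampled over ~3 years is flat
# after correction; residual oscillations would indicate an anomaly.
r0 = 1000.0
samples = [(t, r0 * math.exp(-LAM * t)) for t in range(0, 1100, 30)]
corrected = [decay_corrected(r, t) for t, r in samples]
spread = max(corrected) - min(corrected)
```

The corrected series is constant to within floating-point error, which is the baseline against which a periodicity search would be run.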
NASA Astrophysics Data System (ADS)
Glukhov, I. L.; Nekipelov, E. A.; Ovsiannikov, V. D.
2010-06-01
New features of the blackbody-induced radiation processes on Rydberg atoms were discovered on the basis of numerical data for the blackbody-induced decay P^d_nl(T), excitation P^e_nl(T) and ionization P^ion_nl(T) rates of nS, nP and nD Rydberg states, calculated together with the spontaneous decay rates P^sp_nl in neutral hydrogen, and singlet and triplet helium atoms, for some values of the principal quantum number n from 10 to 500 at temperatures from T = 100 K to 2000 K. The fractional rates R^{d(e,ion)}_nl(T) = P^{d(e,ion)}_nl(T)/P^sp_nl, equal to the ratio of the induced decay (excitation, ionization) rates to the rate of spontaneous decay, were determined as functions of T and n in every series of states with a given angular momentum l = 0, 1, 2. The calculated data reveal an essential difference between the asymptotic dependence of the ionization rate P^ion_nl(T) and the rates of decay and excitation P^{d(e)}_nl(T) ~ T/n^2. The departures appear in each Rydberg series for n > 100 and introduce appreciable corrections to the formula of Cooke and Gallagher. Two different approximation formulae are proposed on the basis of the numerical data, one for R^{d(e)}_nl(T) and another one for R^{ion}_nl(T), which reproduce the calculated values over wide ranges of principal quantum number from n = 10 to 1000 and temperatures between T = 100 K and T = 2000 K with an accuracy of 2% or better. A modified Fues' model potential approach was used for calculating matrix elements of bound-bound and bound-free radiation transitions in helium.
Evidence for correlations between fluctuations in 54Mn decay rates and solar storms
NASA Astrophysics Data System (ADS)
Mohsinally, T.; Fancher, S.; Czerny, M.; Fischbach, E.; Gruenwald, J. T.; Heim, J.; Jenkins, J. H.; Nistor, J.; O'Keefe, D.
2016-02-01
Following recent indications that several radioactive isotopes show fluctuating decay rates which may be influenced by solar activity, we present findings from a 2 year period of data collection on 54Mn. Measurements were recorded hourly from a 1 μCi sample of 54Mn monitored from January 2010-December 2011. A series of signal-detection algorithms determine regions of statistically significant fluctuations in decay behaviour from the expected exponential form. The 239 decay flags identified during this interval were compared to daily distributions of multiple solar indices, generated by NOAA, which are associated with heightened solar activity. The indices were filtered to provide a list of the 413 strongest events during a coincident period. We find that 49% of the strongest solar events are preceded by at least 1 decay flag within a 48 h interval, and 37% of decay flags are followed by a reported solar event within 48 h. These results are significant at the 0.9σ and 2.8σ levels respectively, based on a comparison to results obtained from a shuffle test, in which the decay measurements were randomly shuffled in time 10,000 times. We also present results from a simulation combining constructed data reflecting 10 sites which compared and filtered decay flags generated from all sites. The results indicate a potential 35% reduction in the false positive rate in going from 1 to 10 sites. By implication, the improved statistics attest to the benefit of analysing data from a larger number of geographically distributed sites in parallel.
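The shuffle-style significance test described above can be sketched as follows. This is a generic illustration, not the authors' code: `coincidence_fraction` counts events preceded by at least one decay flag within a time window, and `shuffle_test` builds a null distribution by placing the same number of flags at random times, a stand-in for shuffling the real measurement series.

```python
import random

def coincidence_fraction(flag_days, event_days, window=2.0):
    """Fraction of events preceded by >= 1 flag within `window` days."""
    hits = sum(
        1 for e in event_days
        if any(0.0 <= e - f <= window for f in flag_days)
    )
    return hits / len(event_days)

def shuffle_test(n_flags, event_days, span_days, n_trials=1000, seed=0):
    """Null distribution of the coincidence fraction when `n_flags` flags
    are placed uniformly at random over `span_days` (a stand-in for
    shuffling the real flag times)."""
    rng = random.Random(seed)
    return [
        coincidence_fraction(
            [rng.uniform(0.0, span_days) for _ in range(n_flags)], event_days
        )
        for _ in range(n_trials)
    ]
```

Comparing the observed fraction with the mean and standard deviation of the null distribution gives a significance in units of sigma, as in the 0.9-sigma and 2.8-sigma figures quoted above.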
Delayed Triggering of Early Aftershocks by Multiple Waves Circling the Earth
NASA Astrophysics Data System (ADS)
Sullivan, B.; Peng, Z.
2011-12-01
It is well known that direct surface waves of large earthquakes are capable of triggering shallow earthquakes and deep tremor at long-range distances. Recent studies have shown that multiple surface waves circling the earth could also remotely trigger microearthquakes [Peng et al., 2011]. However, it is still not clear whether multiple surface waves returning back to the mainshock epicenters could also trigger/modulate aftershock activities. Here we conduct a study to search for evidence of such triggering by systematically examining aftershock activities of 20 magnitude-8-or-higher earthquakes since 1990 that are capable of producing surface waves circling the globe repeatedly. We compute the magnitude of completeness for each sequence, and stack all the sequences together to compute the seismicity and moment rates by sliding data windows. The sequences are also shuffled randomly and these rates are compared to the actual data as well as synthetic aftershock sequences to estimate the statistical significance of the results. We also compare them with varying stacks of magnitude 7-8 earthquakes to better understand the possible biases that could be introduced by our rate calculation method. Our preliminary results suggest that there is some moderate increase of early aftershock activity after a few hours when the surface waves return to the epicentral region. However, we could not completely rule out the possibility that such an increase is purely due to random fluctuations of aftershocks or caused by missing aftershocks in the first few hours after the mainshock. We plan to examine continuous waveform data of selected sequences to obtain a better understanding of the multiple surface waves and aftershock activity.
False vacuum transitions —Analytical solutions and decay rate values
NASA Astrophysics Data System (ADS)
Correa, R. A. C.; Moraes, P. H. R. S.; da Rocha, Roldão
2015-08-01
In this work we show a class of oscillating configurations for the evolution of the domain walls in Euclidean space. The solutions are obtained analytically. Phase transitions are achieved from the associated fluctuation determinant, by the decay rates of the false vacuum.
Universal behavior of the spin-echo decay rate in La2CuO4
NASA Astrophysics Data System (ADS)
Chubukov, Andrey V.; Sachdev, Subir; Sokol, Alexander
1994-04-01
We present a theoretical expression for the spin-echo decay rate 1/T2G in the quantum-critical regime of square-lattice quantum antiferromagnets. Our results are in good agreement with recent experimental data by Imai et al. [Phys. Rev. Lett. 71, 1254 (1993)] for La2CuO4.
Conserved Non-Coding Sequences are Associated with Rates of mRNA Decay in Arabidopsis
Spangler, Jacob B.; Feltus, Frank Alex
2013-01-01
Steady-state mRNA levels are tightly regulated through a combination of transcriptional and post-transcriptional control mechanisms. The discovery of cis-acting DNA elements that encode these control mechanisms is of high importance. We have investigated the influence of conserved non-coding sequences (CNSs), DNA patterns retained after an ancient whole genome duplication event, on the breadth of gene expression and the rates of mRNA decay in Arabidopsis thaliana. The absence of CNSs near α duplicate genes was associated with a decrease in breadth of gene expression and slower mRNA decay rates, while the presence of CNSs near α duplicates was associated with an increase in breadth of gene expression and faster mRNA decay rates. mRNA decay was fastest in genes with CNSs in both non-transcribed and transcribed regions, albeit through an unknown mechanism. This study supports the notion that some Arabidopsis CNSs regulate steady-state mRNA levels through post-transcriptional control mechanisms and that CNSs also play a role in controlling the breadth of gene expression. PMID:23675377
Estimate Of The Decay Rate Constant of Hydrogen Sulfide Generation From Landfilled Drywall
Research was conducted to investigate the impact of particle size on H2S gas emissions and estimate a decay rate constant for H2S gas generation from the anaerobic decomposition of drywall. Three different particle sizes of regular drywall and one particle size of paperless drywa...
The rate of decay of fresh fission products from a nuclear reactor
NASA Astrophysics Data System (ADS)
Dolan, David J.
Determining the rate of decay of fresh fission products from a nuclear reactor is complex because of the number of isotopes involved, the different types of decay, the range of half-lives, and the fact that some isotopes decay into other radioactive isotopes. Traditionally, a simplified rule of 7s and 10s is used to determine the dose rate from nuclear weapons, and it can also be used to estimate the dose rate from the fresh fission products of a nuclear reactor. An experiment was designed to determine the dose rate with respect to time from the fresh fission products of a nuclear reactor. The experiment exposed 0.5 grams of unenriched uranium to a fast and thermal neutron flux from a TRIGA Research Reactor (Lakewood, CO) for ten minutes. The dose rate from the fission products was measured by four Mirion DMC 2000XB electronic personal dosimeters over a period of six days. The resulting dose rate followed a rule of 10s: the dose rate of fresh fission products from a nuclear reactor decreases by a factor of 10 for every 10 units of time.
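The power-law character behind rules of this kind can be made concrete with a small sketch. This is illustrative only: the exponent n = 1.2 is the classic Way-Wigner value for fission-product decay, an assumption here, not a fit to the experiment above. Under D(t) proportional to t^-1.2, a sevenfold increase in elapsed time cuts the dose rate by roughly a factor of 10, which is the basis of the rule of 7s and 10s.

```python
def dose_rate(t, d0=1.0, t0=1.0, n=1.2):
    """Power-law fission-product decay: D(t) = D(t0) * (t / t0)**(-n).
    n = 1.2 is the classic Way-Wigner exponent (assumed, not fitted
    to the experiment described above)."""
    return d0 * (t / t0) ** (-n)

# A sevenfold increase in elapsed time reduces the dose rate ~10x:
ratio = dose_rate(1.0) / dose_rate(7.0)
```

The same one-liner, evaluated at tenfold time increases, shows how a "rule of 10s" variant would correspond to a different effective exponent.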
Invariance of decay rate with respect to boundary conditions in thermoelastic Timoshenko systems
NASA Astrophysics Data System (ADS)
Alves, M. S.; Jorge Silva, M. A.; Ma, T. F.; Muñoz Rivera, J. E.
2016-06-01
This paper is mainly concerned with the polynomial stability of a thermoelastic Timoshenko system recently introduced by Almeida Júnior et al. (Z Angew Math Phys 65(6):1233-1249, 2014), who proved, in the general case when equal wave speeds are not assumed, different polynomial decay rates depending on the boundary conditions, namely, the optimal rate t^{-1/2} for mixed Dirichlet-Neumann boundary conditions and the rate t^{-1/4} for full Dirichlet boundary conditions. Here, our main achievement is to prove the same polynomial decay rate t^{-1/2} (corresponding to the optimal one) independently of the boundary conditions, which improves the existing literature on the subject. As a complementary result, we also prove that the system is exponentially stable under the equal wave speeds assumption. The technique employed here can probably be applied to other kinds of thermoelastic systems.
Configuration splitting and gamma-decay transition rates in the two-group shell model
Isakov, V. I.
2015-09-15
Expressions for reduced gamma-decay transition rates were obtained on the basis of the two-group configuration model for the case of transitions between particles belonging to identical groups of nucleons. In practical applications, the present treatment is most appropriate for describing decays of odd-odd nuclei in the vicinity of magic nuclei or of nuclei where the corresponding subshells stand out in energy. Also, a simple approximation is applicable to describing configuration splitting in those cases. The present calculations were performed for nuclei whose mass numbers are close to A ∼ 90, including N = 51 odd-odd isotones.
Measurement of the decay rate of the SiH feature as a function of temperature
NASA Technical Reports Server (NTRS)
Nuth, Joseph A., III; Kraus, George F.
1994-01-01
We have previously suggested that the SiH fundamental stretch could serve as a diagnostic indicator of the oxidation state of silicate surfaces exposed to the solar wind for prolonged periods. We have now measured the primary decay rate of SiH in vacuo as a function of temperature and find that the primary rate constant for the decay can be characterized by the following equation: k ≈ 0.186 exp(-9/RT) min^-1, where R = 2 × 10^-3 kcal deg^-1 mole^-1. This means that the half-life for the decay of the SiH feature at room temperature is approximately 20 yrs, whereas the half-life at a peak lunar regolith temperature of approximately 500 K would be only approximately 20 days. At the somewhat lower temperature of approximately 400 K the half-life for the decay is on the order of 200 days. The rate of loss of SiH as a function of temperature provides an upper limit to the quantity of H implanted by the solar wind which can be retained by a silicate grain in a planetary regolith. This will be discussed in more detail here.
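The quoted rate law can be checked numerically. The sketch below uses only the constants stated in the abstract (pre-exponential factor 0.186 min^-1, activation energy 9 kcal/mole, R = 2 × 10^-3 kcal deg^-1 mole^-1) and reproduces the stated half-lives: roughly 20 years at room temperature, about 200 days at 400 K, and about 20 days at 500 K.

```python
import math

A = 0.186   # min^-1, pre-exponential factor from the fitted rate law
EA = 9.0    # kcal mole^-1, activation energy (the "9" in the exponent)
R = 2e-3    # kcal deg^-1 mole^-1, as quoted in the abstract

def half_life_days(T):
    """Half-life of the SiH feature at temperature T (kelvin), in days,
    from first-order kinetics with k(T) = A * exp(-EA / (R * T))."""
    k_per_min = A * math.exp(-EA / (R * T))
    return math.log(2) / k_per_min / (60 * 24)
```

Evaluating `half_life_days(300) / 365.25` gives roughly 20 years, in line with the abstract's room-temperature estimate.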
Beta-decay rate and beta-delayed neutron emission probability of improved gross theory
NASA Astrophysics Data System (ADS)
Koura, Hiroyuki
2014-09-01
A theoretical study has been carried out on the beta-decay rate and the beta-delayed neutron emission probability. The gross theory of beta decay is based on the idea of a sum rule for the beta-decay strength function, and has succeeded in describing beta-decay half-lives of nuclei over the whole nuclear mass region. The gross theory includes not only the allowed transitions (Fermi and Gamow-Teller), but also the first-forbidden transitions. In this work, some improvements are introduced, namely a nuclear shell correction on the nuclear level densities and nuclear deformation in the nuclear strength functions; these effects were not included in the original gross theory. The shell energy and the nuclear deformation for unmeasured nuclei are adopted from the KTUY nuclear mass formula, which is based on the spherical-basis method. Considering the properties of the integrated Fermi function, the excitation-energy range of a daughter nucleus can be roughly categorized into three regions: a highly excited region, which fully determines the delayed-neutron probability; a middle region, which is estimated to contribute to the decay heat; and a region neighbouring the ground state, which determines the beta-decay rate. Some results will be given in the presentation.
High frequencies are a critical component of aftershock triggering at <100-150 km (Invited)
NASA Astrophysics Data System (ADS)
Felzer, K. R.
2010-12-01
Triggered earthquakes at large distances from the mainshock have been observed to closely follow the arrival of ~0.03-0.6 Hz surface waves (Hill, 2008). Triggering by body waves at these distances is generally not observed. At distances closer than 50-100 km, however, surface waves are not well developed and have minimal amplitude. Thus triggering at these distances is presumably accomplished by static stress change and/or by body waves via a mechanism that does not work at further distances. Pollitz (2006) demonstrated that slow slip events on the San Andreas fault do not trigger many aftershocks, suggesting that static stresses alone are not effective triggers, while Felzer and Brodsky (2006) demonstrated that dynamic stresses alone do appear to trigger aftershocks at least in the 10-50 km range. Yet Parsons and Velasco (2009) found that underground nuclear tests, which are essentially dynamic-only sources, do not produce aftershocks at regional distances. Here we demonstrate that Southern California quarry blasts also fail to produce aftershocks. Both nuclear tests and quarry blasts are depleted in high frequency energy in comparison to tectonic earthquakes (Su et al. 1991; Allman et al. 2008). Therefore the observation that both slow slip events and blasts fail to trigger many aftershocks suggests that the missing ingredient of high frequency body wave energy plays a critical role in the triggering process. Quarry blast spectra data and scaling considerations allow the critical triggering frequency to be constrained to >20-60 Hz. Energy in this frequency band may be expected to persist at depth at least out to 100 km (Leary, 1995). Huc and Main (2003) found that aftershock triggering by global earthquakes follows a continuous decay curve out to ~150 km, suggesting that triggering by high frequency body waves might extend this far. At much further distances the high frequencies are likely attenuated, explaining why only low frequency surface wave triggering is observed.
NASA Astrophysics Data System (ADS)
Kato, Keiko; Oguri, Katsuya; Sanada, Haruki; Tawara, Takehiko; Sogawa, Tetsuomi; Gotoh, Hideki
2015-09-01
We determine the phonon decay rate by measuring the temperature dependence of coherent phonons in p-type Si under Fano resonance, where there is interference between the continuum and discrete states. As the temperature decreases, the decay rate of coherent phonons decreases, whereas that evaluated from the Raman linewidth increases. The former follows the anharmonic decay model, whereas the latter does not. The different temperature dependences of the phonon decay rate obtained with the two methods originate from the way that the continuum state, which arises from the Fano resonance, modifies the time- and frequency-domain spectra. The observation of coherent phonons is useful for evaluating the phonon decay rate free from the interaction with the continuum state, and it clarifies that anharmonic decay is dominant in p-type Si even under Fano resonance.
Preliminary Double-Difference Relocations of Bhuj Aftershocks
NASA Astrophysics Data System (ADS)
Raphael, A. J.; Bodin, P.; Horton, S.; Gomberg, J.
2001-12-01
The Mw=7.7 Bhuj earthquake of 26 January, 2001 in Gujarat, India, was a scientifically important earthquake that took place in a rather poorly instrumented region. Lack of nearby mainshock recordings and lack of surface rupture preclude the calculation of a high-resolution picture of the mainshock rupture processes like those presented for other recent large, better instrumented earthquakes. This is particularly vexing because, given its history of infrequent moderate-to-large earthquakes and its setting within a continental plate interior, the Bhuj earthquake might provide important insights for other high-consequence-but-low-occurrence-rate regions such as the central US. Fortunately we do have excellent recordings of numerous aftershocks on a temporary network of 8 portable seismographs. In order to constrain rupture complexity, we are computing high-resolution relative relocations of aftershocks using HypoDD, the double-difference algorithm of Waldhauser and Ellsworth (BSSA, 2000) to look for aftershock patterns that may reflect rupture characteristics. We are currently using a subset of all of the aftershocks that have been analyzed (P and S phases recorded on at least 4 stations) which consists of nearly 1000 events. This subset is less than half of all the data, and more events are being added as they are analyzed. Our preliminary results show concentrated patches of relocated aftershocks that dip to the south between 6 and 37 km deep. Strong clusters appear to illuminate the lateral edges of a rupture, with a NE trending cluster at the eastern side and a NW trending cluster at the western side, both plunging southward. The central part of the apparent rupture, which coincides with teleseismic estimates of maximum slip, appears to be relatively quiescent. We have not up to this point used waveform cross-correlation to provide relative arrival timing, but feel this may be appropriate for subsets of the overall data set. We also note the presence of
NASA Astrophysics Data System (ADS)
Perfettini, H.; Avouac, J.-P.; Ruegg, J.-C.
2005-09-01
We analyzed aftershocks and postseismic deformation recorded by the continuous GPS station AREQ following the Mw = 8.4, 23 June 2001 Peru earthquake. This station moved by 50 cm trenchward, in a N235°E direction during the coseismic phase, and continued to move in the same direction for an additional 15 cm over the next 2 years. We compare observations with the prediction of a simple one-dimensional (1-D) system of springs, sliders, and dashpot loaded by a constant force, meant to simulate stress transfer during the seismic cycle. The model incorporates a seismogenic fault zone, obeying rate-weakening friction, a zone of deep afterslip, the brittle creep fault zone (BCFZ) obeying rate-strengthening friction, and a zone of viscous flow at depth, the ductile fault zone (DFZ). This simple model captures the main features of the temporal evolution of seismicity and deformation. Our results imply that crustal strain associated with stress accumulation during the interseismic period is probably not stationary over most of the interseismic period. The BCFZ appears to control the early postseismic response (afterslip and aftershocks), although an immediate increase, by a factor of about 1.77, of the ductile shear rate is required, placing constraints on the effective viscosity of the DFZ. Following a large subduction earthquake, displacement of inland sites is trenchward in the early phase of the seismic cycle and reverses to landward after a time t_i for which an analytical expression is given. This study adds support to the view that the decay rate of aftershocks may be controlled by reloading due to deep afterslip. Given the ratio of preseismic to postseismic viscous creep, we deduce that frictional stresses along the subduction interface account for probably 70% of the force transmitted along the plate interface.
The role of wall confinement on the decay rate of an initially isotropic turbulent field
NASA Astrophysics Data System (ADS)
Dowling, David R.; Movahed, Pooya; Johnsen, Eric
2014-11-01
The problem of freely decaying isotropic turbulence has been the subject of intensive research during the past few decades due to its importance for modeling purposes. While the assumptions of isotropy and periodic boundary conditions simplify the analysis, large-scale anisotropy (e.g., caused by rotation, shear, acceleration or walls) is in practice present in most turbulent flows and affects flow dynamics across different scales, as well as the kinetic energy decay. We investigate the role of wall confinement and viscous dissipation on the decay rate of an initially isotropic field for confining volumes of different aspect ratios. We first generate an isotropic velocity field in a cube with periodic boundary conditions. Next, using this field, we change the boundary conditions to no-slip walls on all sides. These walls restrict the initial field to a confined geometry and also provide an additional viscous dissipation mechanism. The problem is considered for confining volumes of different aspect ratios by adjusting the initial field. The change in confining volume introduces an additional length scale to the problem. Direct numerical simulation of the proposed set-up is used to verify the scaling arguments for the decay rate of kinetic energy. This work used the Extreme Science and Engineering Discovery Environment (XSEDE), which is supported by National Science Foundation Grant Number ACI-1053575.
NASA Astrophysics Data System (ADS)
Grobbelaar-Van Dalsen, Marié
2015-02-01
In this article, we are concerned with the polynomial stabilization of a two-dimensional thermoelastic Mindlin-Timoshenko plate model with no mechanical damping. The model is subject to Dirichlet boundary conditions on the elastic as well as the thermal variables. The work complements our earlier work in Grobbelaar-Van Dalsen (Z Angew Math Phys 64:1305-1325, 2013) on the polynomial stabilization of a Mindlin-Timoshenko model in a radially symmetric domain under Dirichlet boundary conditions on the displacement and thermal variables and free boundary conditions on the shear angle variables. In particular, our aim is to investigate the effect of the Dirichlet boundary conditions on all the variables on the polynomial decay rate of the model. By once more applying a frequency domain method in which we make critical use of an inequality for the trace of Sobolev functions on the boundary of a bounded, open connected set we show that the decay is slower than in the model considered in the cited work. A comparison of our result with our polynomial decay result for a magnetoelastic Mindlin-Timoshenko model subject to Dirichlet boundary conditions on the elastic variables in Grobbelaar-Van Dalsen (Z Angew Math Phys 63:1047-1065, 2012) also indicates a correlation between the robustness of the coupling between parabolic and hyperbolic dynamics and the polynomial decay rate in the two models.
Triggering of Aftershocks by Free Oscillations
NASA Astrophysics Data System (ADS)
Bufe, C. G.; Varnes, D. J.
2001-12-01
Periodicities observed in aftershock sequences may result from earthquake triggering by free oscillations of the Earth produced by the main shock. Using an algorithm we developed to compute spectra of inter-event times, we examine inter-event intervals of teleseismically recorded aftershock sequences from large (M>7.5) main shocks that occurred during 1980-2001. Observed periodicities may result from triggering at intervals that are multiples of normal mode periods. We have focussed our analysis of inter-event times on identification of triggering by free oscillations at periods in the range 6-60 minutes. In this paper we describe our most commonly observed aftershock inter-event times and the free oscillation modes most likely to be the triggers. Because of their separation, the longer period modes are easiest to identify in the aftershock data (0S2 at 53.9 minutes, 0S3 at 35.6 minutes, 0S4 at 25.8 minutes, and 0T2 at 43.9 minutes). Evidence of triggering by 0S2 and 0T2 was also found in the aftershocks of the 1989 Loma Prieta, CA (M 7) earthquake (Kamal and Mansinha, 1996). Because of the plethora of higher modes, shorter inter-event periods are more difficult to identify with a particular mode. Preliminary analysis of the 2001 Bhuj, India (M 7.7) earthquake sequence tentatively identifies a contribution to triggering of the first four large aftershocks by multiples of 0S12 (8.37 minutes).
Analysing the 1811-1812 New Madrid earthquakes with recent instrumentally recorded aftershocks
Mueller, K.; Hough, S.E.; Bilham, R.
2004-01-01
Although dynamic stress changes associated with the passage of seismic waves are thought to trigger earthquakes at great distances, more than 60 per cent of all aftershocks appear to be triggered by static stress changes within two rupture lengths of a mainshock. The observed distribution of aftershocks may thus be used to infer details of mainshock rupture geometry. Aftershocks following large mid-continental earthquakes, where background stressing rates are low, are known to persist for centuries, and models based on rate-and-state friction laws provide theoretical support for this inference. Most past studies of the New Madrid earthquake sequence have indeed assumed ongoing microseismicity to be a continuing aftershock sequence. Here we use instrumentally recorded aftershock locations and models of elastic stress change to develop a kinematically consistent rupture scenario for three of the four largest earthquakes of the 1811-1812 New Madrid sequence. Our results suggest that these three events occurred on two contiguous faults, producing lobes of increased stress near fault intersections and end points, in areas where present-day microearthquakes have been hitherto interpreted as evidence of primary mainshock rupture. We infer that the remaining New Madrid mainshock may have occurred more than 200 km north of this region in the Wabash Valley of southern Indiana and Illinois, an area that contains abundant modern microseismicity, and where substantial liquefaction was documented by historic accounts. Our results suggest that future large midplate earthquake sequences may extend over a much broader region than previously suspected.
Real-time forecast of aftershocks from a single seismic station signal
NASA Astrophysics Data System (ADS)
Lippiello, E.; Cirillo, A.; Godano, G.; Papadimitriou, E.; Karakostas, V.
2016-06-01
The evaluation of seismic hazard in the hours following large earthquakes is strongly affected by biases due to difficulties in determining earthquake locations, which leads to substantial incompleteness of instrumental catalogs. Here we show that if, on the one hand, the overlap of aftershock coda waves hides many small events, on the other hand it leads to a well-determined empirical law controlling the decay of the amplitude of the seismic signal at a given site. The fitting parameters of this law can be related to those controlling the temporal decay of the aftershock number, and it is then possible to obtain short-term postseismic occurrence probabilities from a single recorded seismic signal. We therefore present a novel procedure which, without requiring earthquake location, produces more accurate and almost real-time forecasts, at a site of interest, directly from the signal of a seismic station installed at that site.
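The temporal decay of the aftershock number mentioned above is conventionally described by the modified Omori law, n(t) = K/(c + t)^p. A minimal sketch, with illustrative and deliberately uncalibrated parameters, of turning that rate into an expected count over a forecast window:

```python
def omori_rate(t, K, c, p):
    """Modified Omori law: aftershock rate n(t) = K / (c + t)**p."""
    return K / (c + t) ** p

def expected_count(t1, t2, K, c, p, steps=10_000):
    """Expected number of aftershocks in the window [t1, t2] (days),
    by midpoint-rule integration of the Omori rate."""
    dt = (t2 - t1) / steps
    return sum(omori_rate(t1 + (i + 0.5) * dt, K, c, p) * dt
               for i in range(steps))

# Illustrative values only (K, c, p are NOT a regional calibration):
# compare the first day after the mainshock with day 10-11.
day1 = expected_count(0.0, 1.0, K=100.0, c=0.05, p=1.1)
day10 = expected_count(10.0, 11.0, K=100.0, c=0.05, p=1.1)
```

In the procedure described above, the analogous parameters would be fitted to the decay of the recorded signal amplitude rather than to a located-event catalog.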
Processing Aftershock Sequences Using Waveform Correlation
NASA Astrophysics Data System (ADS)
Resor, M. E.; Procopio, M. J.; Young, C. J.; Carr, D. B.
2008-12-01
For most event monitoring systems, the objective is to keep up with the flow of incoming data, producing a bulletin with some modest, relatively constant, time delay after present time, often a period of a few hours or less. Because the association problem scales exponentially rather than linearly with the number of detections, a dramatic increase in seismicity due to an aftershock sequence can easily cause the bulletin delay time to grow substantially. In some cases, the production of a bulletin may cease altogether until the automatic system can catch up. For a nuclear monitoring system, the implications of such a delay could be dire. Given the expected similarity between a mainshock and its aftershocks, it has been proposed that waveform correlation may provide a powerful means to simultaneously increase the efficiency of processing aftershock sequences while also lowering the detection threshold and improving the quality of the event solutions. However, many questions remain unanswered. What are the key parameters for achieving the best correlations between waveforms (window length, filtering, etc.), and are they sequence-dependent? What is the overall percentage of similar events in an aftershock sequence, i.e., what is the maximum level of efficiency that waveform correlation could be expected to achieve? Finally, how does this percentage of events vary among sequences? Using data from the aftershock sequence of the December 26, 2004 Mw 9.1 Sumatra event, we investigate these issues by building and testing a prototype waveform correlation event detection system that automatically expands its library of known events as new signatures are identified in the aftershock sequence (by traditional signal detection and event processing). Our system tests all incoming data against this dynamic library, thereby identifying similar events before traditional processing takes place. In the region surrounding the Sumatra event, the NEIC EDR contains 4997 events in the 9
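The core of such a correlation detector can be sketched in a few lines: slide a known-event template along the continuous stream and declare a detection wherever the normalized cross-correlation exceeds a threshold. This is a simplified stand-in for the prototype system described above; the brute-force loop and the 0.7 threshold are illustrative assumptions.

```python
import numpy as np

def correlation_detector(stream, template, threshold=0.7):
    """Flag samples where the normalized cross-correlation (Pearson r)
    between a template and a sliding window of continuous data exceeds
    `threshold`. Brute-force sketch; a production system would use
    FFT-based correlation and a managed, growing template library."""
    n = len(template)
    # pre-normalize the template so each window costs one dot product
    t_norm = (template - template.mean()) / (template.std() * n)
    detections = []
    for i in range(len(stream) - n + 1):
        win = stream[i:i + n]
        s = win.std()
        if s == 0.0:          # flat window: correlation undefined, skip
            continue
        cc = float(np.sum(t_norm * (win - win.mean())) / s)
        if cc >= threshold:
            detections.append((i, cc))
    return detections
```

An exact repeat of the template in the stream yields cc = 1; near-repeating aftershocks of a common source typically score high, which is what makes the library approach efficient.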
A Self-Organized Model for Cell-Differentiation Based on Variations of Molecular Decay Rates
Hanel, Rudolf; Pöchacker, Manfred; Schölling, Manuel; Thurner, Stefan
2012-01-01
Systemic properties of living cells are the result of molecular dynamics governed by so-called genetic regulatory networks (GRN). These networks capture all possible features of cells and are responsible for the immense levels of adaptation characteristic to living systems. At any point in time only small subsets of these networks are active. Any active subset of the GRN leads to the expression of particular sets of molecules (expression modes). The subsets of active networks change over time, leading to the observed complex dynamics of expression patterns. Understanding of these dynamics becomes increasingly important in systems biology and medicine. While the importance of transcription rates and catalytic interactions has been widely recognized in modeling genetic regulatory systems, the understanding of the role of degradation of biochemical agents (mRNA, protein) in regulatory dynamics remains limited. Recent experimental data suggest that there exists a functional relation between mRNA and protein decay rates and expression modes. In this paper we propose a model for the dynamics of successions of sequences of active subnetworks of the GRN. The model is able to reproduce key characteristics of molecular dynamics, including homeostasis, multi-stability, periodic dynamics, alternating activity, differentiability, and self-organized critical dynamics. Moreover, the model makes it possible to understand naturally the mechanism behind the relation between decay rates and expression modes. The model explains recent experimental observations that decay rates (or turnovers) vary between differentiated tissue classes at a general systemic level and highlights the role of intracellular decay rate control mechanisms in cell differentiation. PMID:22693554
NASA Astrophysics Data System (ADS)
Hazra, Gopal; Karak, Bidya Binay; Banerjee, Dipankar; Choudhuri, Arnab Rai
2015-06-01
Using different proxies of solar activity, we have studied the following features of the solar cycle: i) the linear correlation between the amplitude of cycle n and its decay rate, ii) the linear correlation between the amplitude of cycle n+1 and the decay rate of cycle n, and iii) the anti-correlation between the amplitude of cycle n+1 and the period of cycle n. Features ii) and iii) are very useful because they provide precursors for future cycles. We have reproduced these features using a flux-transport dynamo model with stochastic fluctuations in the Babcock-Leighton effect and in the meridional circulation. Only when we introduce fluctuations in the meridional circulation are we able to reproduce the different observed features of the solar cycle. We discuss the possible reasons for these correlations.
Optimal decay rates of classical solutions for the full compressible MHD equations
NASA Astrophysics Data System (ADS)
Gao, Jincheng; Tao, Qiang; Yao, Zheng-an
2016-04-01
In this paper, we are concerned with optimal decay rates for higher-order spatial derivatives of classical solutions to the full compressible MHD equations in three-dimensional whole space. If the initial perturbation is small in H^3-norm and bounded in L^q-norm (q ∈ [1, 6/5)), we apply the Fourier splitting method of Schonbek (Arch Ration Mech Anal 88:209-222, 1985) to establish optimal decay rates for the second-order spatial derivatives of the solutions and the third-order spatial derivatives of the magnetic field in L^2-norm. These results improve the work of Pu and Guo (Z Angew Math Phys 64:519-538, 2013).
General decay rate estimates for viscoelastic wave equation with Balakrishnan-Taylor damping
NASA Astrophysics Data System (ADS)
Ha, Tae Gab
2016-04-01
In this paper, we consider the viscoelastic wave equation with Balakrishnan-Taylor damping. This work is devoted to prove uniform decay rates of the energy without imposing any restrictive growth assumption on the damping term and weakening the usual assumptions on the relaxation function. Our estimate depends both on the behavior of the damping term near zero and on behavior of the relaxation function at infinity.
Absorption cross-section and decay rate of rotating linear dilaton black holes
NASA Astrophysics Data System (ADS)
Sakalli, I.; Aslan, O. A.
2016-02-01
We analytically study the scalar perturbation of non-asymptotically flat (NAF) rotating linear dilaton black holes (RLDBHs) in 4-dimensions. We show that both radial and angular wave equations can be solved in terms of the hypergeometric functions. The exact greybody factor (GF), the absorption cross-section (ACS), and the decay rate (DR) for the massless scalar waves are computed for these black holes (BHs). The results obtained for ACS and DR are discussed through graphs.
Kuzyakin, R. A.; Sargsyan, V. V.; Adamian, G. G.; Antonenko, N. V.
2011-06-15
Within the quantum diffusion approach, the probability of passing through a parabolic barrier is examined in the limit of linear coupling in the momentum between the collective subsystem and the environment. The dependencies of the penetrability on time, energy, and the coupling strength between the interacting subsystems are studied. The quasistationary thermal decay rate from a metastable state is considered for linear couplings both in the momentum and in the coordinate.
Friedberg, Richard; Manassah, Jamal T.
2011-08-15
We obtain in both the scalar and vector photon models the analytical expressions for the initial cooperative decay rate and the cooperative Lamb shift for an ensemble of resonant atoms distributed uniformly in an infinite cylindrical geometry for the case that the initial state of the system is prepared in a phased state modulated in the direction of the cylindrical axis. We find that qualitatively the scalar and vector theories give different results.
31Cl beta decay and the 30P(p,γ)31S reaction rate in nova nucleosynthesis
NASA Astrophysics Data System (ADS)
Bennett, Michael; Wrede, C.; Brown, B. A.; Liddick, S. N.; Pérez-Loureiro, D.; NSCL e12028 Collaboration
2016-03-01
The 30P(p,γ)31S reaction rate is critical for modeling the final isotopic abundances of ONe nova nucleosynthesis, identifying the origin of presolar nova grains, and calibrating proposed nova thermometers. Unfortunately, this rate is essentially unconstrained experimentally because the strengths of key 31S proton capture resonances are not known, due to uncertainties in their spins and parities. Using a 31Cl beam produced at the National Superconducting Cyclotron Laboratory, we have populated several 31S states for study via beta decay and devised a new decay scheme which includes updated beta feedings and gamma branchings as well as multiple states previously unobserved in 31Cl beta decay. Results of this study, including the unambiguous identification, made possible by isospin mixing, of a new l = 0, Jπ = 3/2+ 31S resonance directly in the middle of the Gamow window, will be presented, and the significance for the evaluation of the 30P(p,γ)31S reaction rate will be discussed. Work supported by U.S. Natl. Sci. Foundation (Grants No. PHY-1102511, PHY-1404442, PHY-1419765, and PHY-1431052); U.S. Dept. of Energy, Natl. Nucl. Security Administration (Award No. DE-NA0000979); Nat. Sci. and Eng. Research Council of Canada.
Radiative decay rate of excitons in square quantum wells: Microscopic modeling and experiment
NASA Astrophysics Data System (ADS)
Khramtsov, E. S.; Belov, P. A.; Grigoryev, P. S.; Ignatiev, I. V.; Verbin, S. Yu.; Efimov, Yu. P.; Eliseev, S. A.; Lovtcius, V. A.; Petrov, V. V.; Yakovlev, S. L.
2016-05-01
The binding energy and the corresponding wave function of excitons in GaAs-based finite square quantum wells (QWs) are calculated by direct numerical solution of the three-dimensional Schrödinger equation. Precise results for the lowest exciton state are obtained by discretizing the Hamiltonian with a high-order finite-difference scheme. The microscopic calculations are compared with the results obtained by the standard variational approach. The exciton binding energies found by the two methods coincide within 0.1 meV over a wide range of QW widths. The radiative decay rate is calculated for QWs of various widths using the exciton wave functions obtained by the direct and variational methods. The radiative decay rates are compared with experimental data measured for high-quality GaAs/AlGaAs and InGaAs/GaAs QW heterostructures grown by molecular beam epitaxy. The calculated and measured values are in good agreement, though slight differences from earlier calculations of the radiative decay rate are observed.
Direct Measurement of the Unimolecular Decay Rate of Criegee Intermediates to OH Products
NASA Astrophysics Data System (ADS)
Liu, Fang; Fang, Yi; Klippenstein, Stephen; McCoy, Anne; Lester, Marsha
Ozonolysis of alkenes is an important non-photolytic source of OH radicals in the troposphere. The production of OH radicals proceeds through formation and unimolecular decay of Criegee intermediates such as syn-CH3CHOO and (CH3)2COO. These alkyl-substituted Criegee intermediates can undergo a 1,4-H transfer reaction to form an energized vinyl hydroperoxide species, which breaks apart into OH and vinoxy products. Recently, this laboratory used IR excitation in the C-H stretch overtone region to initiate the unimolecular decay of syn-CH3CHOO and (CH3)2COO Criegee intermediates, leading to OH formation. Here, direct time-domain measurements are performed to observe the rate of appearance of OH products under collision-free conditions, utilizing UV laser-induced fluorescence for detection. The experimental rates are in excellent agreement with statistical RRKM calculations using barrier heights predicted from high-level electronic structure calculations. Accurate determination of the rates and barrier heights for unimolecular decay of Criegee intermediates is essential for modeling the kinetics of alkene ozonolysis reactions, a significant OH radical source in atmospheric chemistry, as well as the steady-state concentration of Criegee intermediates in the atmosphere. This research was supported through the National Science Foundation under grant CHE-1362835.
Photonic effects on the radiative decay rate and luminescence quantum yield of doped nanocrystals.
Senden, Tim; Rabouw, Freddy T; Meijerink, Andries
2015-02-24
Nanocrystals (NCs) doped with luminescent ions form an emerging class of materials. In contrast to excitonic transitions in semiconductor NCs, the optical transitions are localized and not affected by quantum confinement. The radiative decay rates of the dopant emission in NCs are nevertheless different from their bulk analogues due to photonic effects, and also the luminescence quantum yield (QY, important for applications) is affected. In the past, different theoretical models have been proposed to describe the photonic effects for dopant emission in NCs, with little experimental validation. In this work we investigate the photonic effects on the radiative decay rate of luminescent doped NCs using 4 nm LaPO4 NCs doped with Ce(3+) or Tb(3+) ions in different refractive index solvents and bulk crystals. We demonstrate that the measured influence of the refractive index on the radiative decay rate of the Ce(3+) emission, having near unity QY, is in excellent agreement with the theoretical nanocrystal-cavity model. Furthermore, we show how the nanocrystal-cavity model can be used to quantify the nonunity QY of Tb(3+)-doped LaPO4 NCs and demonstrate that, as a general rule, the QY is higher in media with higher refractive index. PMID:25584627
CONCERNING THE PHASES OF THE ANNUAL VARIATIONS OF NUCLEAR DECAY RATES
Sturrock, P. A.; Buncher, J. B.; Fischbach, E.; Jenkins, J. H.; Mattes, J. J.; Javorsek, D. II
2011-08-20
Recent analyses of data sets acquired at the Brookhaven National Laboratory and at the Physikalisch-Technische Bundesanstalt both show evidence of pronounced annual variations, suggestive of a solar influence. However, the phases of decay-rate maxima do not correspond precisely to the phase of minimum Sun-Earth distance, as might then be expected. We here examine the hypothesis that decay rates are influenced by an unknown solar radiation, but that the intensity of the radiation is influenced not only by the variation in Sun-Earth distance, but also by a possible north-south asymmetry in the solar emission mechanism. We find that this can lead to phases of decay-rate maxima in the range 0-0.183 or 0.683-1 (September 6 to March 8) but that, according to this hypothesis, phases in the range of 0.183-0.683 (March 8 to September 6) are 'forbidden'. We find that phases of the three data sets analyzed here fall in the allowed range.
β-decay rates of 121-131Cs in the microscopic interacting boson-fermion model
NASA Astrophysics Data System (ADS)
Mardones, E.; Barea, J.; Alonso, C. E.; Arias, J. M.
2016-03-01
β-decay rates of 121-131Cs have been calculated in the framework of the neutron-proton interacting boson-fermion model (IBFM-2). For odd-A nuclei, the decay operator can be written in a relatively simple form in terms of the one-nucleon transfer operator. Previous studies of β decay in IBFM-2 were based on a transfer operator obtained by using the number operator approximation (NOA). In this work a new form of the one-nucleon transfer operator, derived microscopically without the NOA, is used. The results from both approaches are compared and show that the deviation from experimental data is reduced when the NOA is avoided. Indications about the renormalization of the Fermi and Gamow-Teller matrix elements are discussed. This is a further step toward a more complete description of low-lying states in medium and heavy nuclei, which is necessary to compute reliable matrix elements in studies of current active interest such as double-β decay or neutrino absorption experiments.
Generalized Omori-Utsu law for aftershock sequences in southern California
NASA Astrophysics Data System (ADS)
Davidsen, J.; Gu, C.; Baiesi, M.
2015-05-01
We investigate the validity of a proposed generalized Omori-Utsu law for the aftershock sequences for the Landers, Hector Mine, Northridge and Superstition Hills earthquakes, the four largest events in the southern California catalogue we analyse. This law unifies three of the most prominent empirical laws of statistical seismology-the Gutenberg-Richter law, the Omori-Utsu law, and a generalized version of Båth's law-in a formula casting the parameters in the Omori-Utsu law as a function of the lower magnitude cutoff mc for the aftershocks considered. By applying a recently established general procedure for identifying aftershocks, we confirm that the generalized Omori-Utsu law provides a good approximation for the observed rates overall. In particular, we provide convincing evidence that the characteristic time c is not constant but a genuine function of mc, which cannot be attributed to short-term aftershock incompleteness. However, the estimation of the specific parameters is somewhat sensitive to the aftershock selection method used. This includes c(mc), which has important implications for inferring the underlying stress field.
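The Omori-Utsu law discussed above gives the aftershock rate as n(t) = K/(t + c)^p. A compact way to estimate (K, c, p) from a sequence of aftershock occurrence times is point-process maximum likelihood; the sketch below generates a synthetic sequence and fits it with a coarse grid search, purely for illustration. It is not the authors' estimation code, and the parameter values are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# "True" modified-Omori parameters for a synthetic sequence
# (illustrative values): rate n(t) = K / (t + c)^p, t in days.
K_true, c_true, p_true, T = 200.0, 0.1, 1.2, 100.0

def integrated_rate(t, K, c, p):
    """Integral of K/(s + c)^p from 0 to t (valid for p != 1)."""
    return K * ((t + c) ** (1 - p) - c ** (1 - p)) / (1 - p)

# Draw an inhomogeneous-Poisson sequence by inverting the cumulative rate.
total = integrated_rate(T, K_true, c_true, p_true)
n_events = rng.poisson(total)
u = rng.uniform(size=n_events)
times = ((1 - p_true) * u * total / K_true
         + c_true ** (1 - p_true)) ** (1.0 / (1 - p_true)) - c_true

def neg_log_lik(c, p, t, T):
    """Point-process negative log-likelihood, with K profiled out
    (the MLE of K given (c, p) is N divided by the rate integral)."""
    integral = ((T + c) ** (1 - p) - c ** (1 - p)) / (1 - p)
    K = len(t) / integral
    return -(len(t) * np.log(K) - p * np.sum(np.log(t + c)) - K * integral)

# Coarse grid search over (c, p); p = 1 is deliberately off the grid.
cs = np.linspace(0.01, 1.0, 50)
ps = np.linspace(0.81, 1.79, 50)
_, c_hat, p_hat = min((neg_log_lik(c, p, times, T), c, p)
                      for c in cs for p in ps)
```

Repeating such a fit for increasing lower magnitude cutoffs mc is how a dependence c(mc), as reported in the paper, would be traced out.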
Pond, R.B.; Matos, J.E.
1996-12-31
This document has been prepared to assist research reactor operators possessing spent fuel containing enriched uranium of United States origin to prepare part of the documentation necessary to ship this fuel to the United States. Data are included on the nuclear mass inventory, photon dose rate, and thermal decay heat of spent research reactor fuel assemblies. Isotopic masses of U, Np, Pu and Am that are present in spent research reactor fuel are estimated for MTR, TRIGA and DIDO-type fuel assemblies. The isotopic masses of each fuel assembly type are given as functions of U-235 burnup in the spent fuel, and of initial U-235 enrichment and U-235 mass in the fuel assembly. Photon dose rates of spent MTR, TRIGA and DIDO-type fuel assemblies are estimated for fuel assemblies with up to 80% U-235 burnup and specific power densities between 0.089 and 2.857 MW/kg 235U, and for fission product decay times of up to 20 years. Thermal decay heat loads are estimated for spent fuel based upon the fuel assembly irradiation history (average assembly power vs. elapsed time) and the spent fuel cooling time.
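As a toy illustration of how a thermal decay-heat load falls off with cooling time, the power from a small hypothetical inventory of independent emitters can be summed as P(t) = Σᵢ λᵢ Nᵢ(0) e^(−λᵢt) Eᵢ. The nuclide data below are rough illustrative values, not numbers from this report; real estimates track hundreds of fission products, daughter ingrowth, and the irradiation history.

```python
import math

# Hypothetical two-nuclide inventory (NOT the report's data):
# (half-life [s], initial atoms, mean energy deposited per decay [MeV])
inventory = [
    (30.1 * 3.156e7, 1.0e20, 0.66),   # a 137Cs-like nuclide
    (5.27 * 3.156e7, 5.0e19, 2.50),   # a 60Co-like nuclide
]
MEV_TO_J = 1.602e-13                  # conversion factor, J per MeV

def decay_heat(t_seconds):
    """Thermal power in watts after cooling time t: a simple sum of
    independent exponential decays, with no daughter ingrowth."""
    p = 0.0
    for t_half, n0, e_mev in inventory:
        lam = math.log(2) / t_half                      # decay constant
        activity = lam * n0 * math.exp(-lam * t_seconds)  # decays/s
        p += activity * e_mev * MEV_TO_J
    return p
```

The monotone fall-off of decay_heat(t) with cooling time is the behavior the report tabulates per fuel assembly type and burnup.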
Sterbentz, J.W.
1997-03-01
Parametric burnup calculations are performed to estimate radionuclide isotopic mass and activity concentrations for four different Training, Research, and Isotope General Atomics (TRIGA) nuclear reactor fuel element types: (1) Aluminum-clad standard, (2) Stainless Steel-clad standard, (3) High-enrichment Fuel Life Improvement Program (FLIP), and (4) Low-enrichment Fuel Life Improvement Program (FLIP-LEU-1). Parametric activity data are tabulated for 145 important radionuclides that can be used to generate gamma-ray emission source terms or provide mass quantity estimates as a function of decay time. Fuel element decay heats and dose rates are also presented parametrically as a function of burnup and decay time. Dose rates are given at the fuel element midplane for contact, 3.0-feet, and 3.0-meter detector locations in air. The data herein are estimates based on specially derived Beginning-of-Life (BOL) neutron cross sections using geometrically-explicit TRIGA reactor core models. The calculated parametric data should represent good estimates relative to actual values, although no experimental data were available for direct comparison and validation. However, because the cross sections were not updated as a function of burnup, the actinide concentrations may deviate from the actual values at the higher burnups.
Instrument for precision long-term β-decay rate measurements.
Ware, M J; Bergeson, S D; Ellsworth, J E; Groesbeck, M; Hansen, J E; Pace, D; Peatross, J
2015-07-01
We describe an experimental setup for making precision measurements of relative β-decay rates of (22)Na, (36)Cl, (54)Mn, (60)Co, (90)Sr, (133)Ba, (137)Cs, (152)Eu, and (154)Eu. The radioactive samples are mounted in two automated sample changers that sequentially position the samples with high spatial precision in front of sets of detectors. The set of detectors for one sample changer consists of four Geiger-Müller (GM) tubes and the other set of detectors consists of two NaI scintillators. The statistical uncertainty in the count rate is a few times 0.01% per day for the GM detectors and about 0.01% per hour for the NaI detectors. The sample changers, detectors, and associated electronics are housed in a sealed chamber held at constant absolute pressure, humidity, and temperature to isolate the experiment from environmental variations. The apparatus is designed to accumulate statistics over many years in a regulated environment to test recent claims of small annual variations in the decay rates. We demonstrate that absent this environmental regulation, uncontrolled natural atmospheric pressure variations at our location would imprint an annual signal of 0.1% on the Geiger-Müller count rate. However, neither natural pressure variations nor plausible indoor room temperature variations cause a discernible influence on our NaI scintillator detector count rate. PMID:26233381
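The quoted statistical uncertainties follow directly from Poisson counting, where the relative uncertainty on N accumulated counts is 1/√N. A one-line check (illustrative arithmetic only, not from the paper) shows that reaching 0.01% per day requires on the order of 10⁸ counts per day, i.e. a sustained mean rate above 1 kHz.

```python
import math

def counts_for_relative_uncertainty(rel):
    """Poisson statistics: sigma_N / N = sqrt(N) / N = 1 / sqrt(N),
    so a relative uncertainty `rel` requires N = rel**-2 counts."""
    return round(rel ** -2)

n_per_day = counts_for_relative_uncertainty(1e-4)   # 0.01% per day
rate_hz = n_per_day / 86400.0                       # implied mean count rate
```

This is why detecting a claimed ~0.1% annual modulation demands both very high count totals and the environmental regulation the instrument provides.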
Analysis of Growth and Decay Rates of the Axial Dipole in Geodynamo Models
NASA Astrophysics Data System (ADS)
Avery, M. S.; Constable, C.; Davies, C.; Gubbins, D.
2013-12-01
Observations of the Earth's magnetic field made at the surface reveal temporal variations in the field originating in the outer core. PADM2M is a reconstruction of the 0 to 2 Ma paleomagnetic axial dipole moment. Ziegler & Constable (2011) showed that for periods longer than 25 kyr the rate of growth of the geomagnetic dipole is greater than its decay rate. This asymmetry is not limited to times when the field is reversing; it may be indicative of a key physical process of secular variation. To investigate the possible core processes underlying this observation we have analyzed a suite of numerical dynamo simulations, specifically the temporal variation of their axial dipole moments. We use the magnetic diffusion time to scale the simulations' nondimensional time, as this is more appropriate for the periods of interest here. An advantage of analyzing simulations is that they do not suffer from the same limitations in spatial and temporal resolution as the data; however, simulations cannot yet run with Earth-like rotation rates or diffusivities. All of our simulations span multiple diffusion times. We have chosen a broad range of simulations with different reversal regimes (dipole-dominated, non-reversing; dipole-dominated, reversing; multipolar, reversing) and with different heating modes (bottom, internal, or a combination of the two). For each simulation we conduct the same analysis that was applied to PADM2M. Families of smoothed axial dipole models are constructed using penalized smoothing splines as an effective low-pass filter to see at what timescales any asymmetry exists. The first derivatives of each axial dipole record are calculated in order to examine the rates of growth and decay. The results vary with the nature of the simulations. Further analysis is needed to determine what dynamo parameters, and related physical properties, determine the relative rates of growth and decay.
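The growth/decay comparison can be illustrated with a crude stand-in for the penalized-spline analysis: low-pass the axial dipole series, differentiate it, and average the positive and negative derivatives separately. The moving-average filter and all parameters below are illustrative assumptions, not the PADM2M pipeline.

```python
import numpy as np

def growth_decay_rates(t, dipole, window):
    """Low-pass a dipole-moment series with a moving average (a crude
    stand-in for penalized smoothing splines), differentiate the
    smoothed series, and return (mean growth rate, mean decay rate),
    both as positive numbers."""
    kernel = np.ones(window) / window
    smooth = np.convolve(dipole, kernel, mode="valid")
    # align times with the centers of the valid smoothing windows
    start = (window - 1) // 2
    ts = t[start:start + len(smooth)]
    d = np.gradient(smooth, ts)
    growth = d[d > 0].mean() if (d > 0).any() else 0.0
    decay = -d[d < 0].mean() if (d < 0).any() else 0.0
    return growth, decay
```

For a sawtooth-like series that rises faster than it falls, the mean growth rate exceeds the mean decay rate, which is the kind of asymmetry reported for PADM2M.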
Variation in radical decay rates in epoxy as a function of crosslink density
NASA Technical Reports Server (NTRS)
Kent, G. M.; Memory, J. M.; Gilbert, R. D.; Fornes, R. E.
1983-01-01
A study was made of the behavior of radicals generated by Co-60 gamma radiation in the epoxy system tetraglycidyl-4,4'-diaminodiphenyl methane (TGDDM) cured with 4,4'-diaminodiphenyl sulfone (DDS). The molar ratio of TGDDM to DDS was varied in the epoxy samples, and they were prepared under the same curing conditions to obtain various extents of crosslinking. ESR spectrometry data suggest that the rate of decay of radicals is related to inhomogeneities in the resin, with radicals in the highly crosslinked regions having long decay times. The inhomogeneities are thought to be due to statistical variation associated with the complex crosslinking reactions or to difficulties in mixing the reactants.
Measurement of the production rates of η and η′ in hadronic Z decays
NASA Astrophysics Data System (ADS)
Buskulic, D.; Decamp, D.; Goy, C.; Lees, J.-P.; Minard, M.-N.; Mours, B.; Alemany, R.; Ariztizabal, F.; Comas, P.; Crespo, J. M.; Delfino, M.; Fernandez, E.; Gaitan, V.; Garrido, Ll.; Pacheco, A.; Pascual, A.; Creanza, D.; de Palma, M.; Farilla, A.; Iaselli, G.; Maggi, G.; Maggi, M.; Natali, S.; Nuzzo, S.; Quattromini, M.; Ranieri, A.; Raso, G.; Romano, F.; Ruggieri, F.; Selvaggi, G.; Silvestris, L.; Tempesta, P.; Zito, G.; Gao, Y.; Hu, H.; Huang, D.; Huang, X.; Lin, J.; Lou, J.; Qiao, C.; Wang, T.; Xie, Y.; Xu, D.; Xu, R.; Zhang, J.; Zhao, W.; Atwood, W. B.; Bauerdick, L. A. T.; Blucher, E.; Bonvicini, G.; Bossi, F.; Boudreau, J.; Burnett, T. H.; Drevermann, H.; Forty, R. W.; Hagelberg, R.; Harvey, J.; Haywood, S.; Hilgart, J.; Jacobsen, R.; Jost, B.; Knobloch, J.; Lançon, E.; Lehraus, I.; Lohse, T.; Lusiani, A.; Martinez, M.; Mato, P.; Mattison, T.; Meinhard, H.; Menary, S.; Meyer, T.; Minten, A.; Miguel, R.; Moser, H.-G.; Nash, J.; Palazzi, P.; Perlas, J. A.; Ranjard, F.; Redlinger, G.; Rolandi, L.; Roth, A.; Rothberg, J.; Ruan, T.; Saich, M.; Schlatter, D.; Schmelling, M.; Sefkow, F.; Tejessy, W.; Wachsmuth, H.; Wiedenmann, W.; Wildish, T.; Witzeling, W.; Wotschack, J.; Ajaltouni, Z.; Badaud, F.; Bardadin-Otwinowska, M.; Bencheikh, A. M.; El Fellous, R.; Falvard, A.; Gay, P.; Guicheney, C.; Henrard, P.; Jousset, J.; Michel, B.; Montret, J.-C.; Pallin, D.; Perret, P.; Pietrzyk, B.; Proriol, J.; Prulhière, F.; Stimpfl, G.; Fearnley, T.; Hansen, J. D.; Hansen, J. R.; Hansen, P. H.; Møllerud, R.; Nilsson, B. S.; Efthymiopoulos, I.; Kyriakis, A.; Simopoulou, E.; Vayaki, A.; Zachariadou, K.; Badier, J.; Blondel, A.; Bonneaud, G.; Brient, J. C.; Fouque, G.; Gamess, A.; Orteu, S.; Rosowsky, A.; Rougé, A.; Rumpf, M.; Tanaka, R.; Videau, H.; Candlin, D. J.; Parsons, M. 
I.; Veitch, E.; Moneta, L.; Parrini, G.; Corden, M.; Georgiopoulos, C.; Ikeda, M.; Lannutti, J.; Levinthal, D.; Mermikides, M.; Sawyer, L.; Wasserbaech, S.; Antonelli, A.; Baldini, R.; Bencivenni, G.; Bologna, G.; Campana, P.; Capon, G.; Cerutti, F.; Chiarella, V.; D'Ettorre-Piazzoli, B.; Felici, G.; Laurelli, P.; Mannocchi, G.; Murtas, F.; Murtas, G. P.; Passalacqua, L.; Pepe-Altarelli, M.; Picchi, P.; Altoon, B.; Boyle, O.; Colrain, P.; Ten Have, I.; Lynch, J. G.; Maitland, W.; Morton, W. T.; Raine, C.; Scarr, J. M.; Smith, K.; Thompson, A. S.; Turnbull, R. M.; Brandl, B.; Braun, O.; Geiges, R.; Geweniger, C.; Hanke, P.; Hepp, V.; Kluge, E. E.; Maumary, Y.; Putzer, A.; Rensch, B.; Stahl, A.; Tittel, K.; Wunsch, M.; Belk, A. T.; Beuselinck, R.; Binnie, D. M.; Cameron, W.; Cattaneo, M.; Colling, D. J.; Dornan, P. J.; Dugeay, S.; Greene, A. M.; Hassard, J. F.; Lieske, N. M.; Patton, S. J.; Payne, D. G.; Phillips, M. J.; Sedgbeer, J. K.; Tomalin, I. R.; Wright, A. G.; Kneringer, E.; Kuhn, D.; Rudolph, G.; Bowdery, C. K.; Brodbeck, T. J.; Finch, A. J.; Foster, F.; Hughes, G.; Jackson, D.; Keemer, N. R.; Nuttall, M.; Patel, A.; Sloan, T.; Snow, S. W.; Whelan, E. P.; Barczewski, T.; Kleinknecht, K.; Raab, J.; Renk, B.; Roehn, S.; Sander, H.-G.; Schmidt, H.; Steeg, F.; Walther, S. M.; Wolf, B.; Aubert, J.-J.; Benchouk, C.; Bernard, V.; Bonissent, A.; Carr, J.; Coyle, P.; Drinkard, J.; Etienne, F.; Papalexiou, S.; Payre, P.; Qian, Z.; Rousseau, D.; Schwemling, P.; Talby, M.; Adlung, S.; Bauer, C.; Blum, W.; Brown, D.; Cowan, G.; Dehning, B.; Dietl, H.; Dydak, F.; Fernandez-Bosman, M.; Frank, M.; Halley, A. W.; Lauber, J.; Lütjens, G.; Lutz, G.; Männer, W.; Richter, R.; Schröder, J.; Schwarz, A. S.; Settles, R.; Seywerd, H.; Stierlin, U.; Stiegler, U.; St. Denis, R.; Takashima, M.; Thomas, J.; Wolf, G.; Bertin, V.; Boucrot, J.; Callot, O.; Chen, X.; Cordier, A.; Davier, M.; Grivaz, J.-F.; Heusse, Ph.; Janot, P.; Kim, D. 
W.; Le Diberder, F.; Lefrançois, J.; Lutz, A.-M.; Schune, M.-H.; Veillet, J.-J.; Videau, I.; Zhang, Z.; Zomer, F.; Abbaneo, D.; Amendolia, S. R.; Bagliesi, G.; Batignani, G.; Bosisio, L.; Bottigli, U.; Bradaschia, C.; Carpinelli, M.; Ciocci, M. A.; Dell'Orso, R.; Ferrante, I.; Fidecaro, F.; Foà, L.; Focardi, E.; Forti, F.; Giassi, A.; Giorgi, M. A.; Ligabue, F.; Mannelli, E. B.; Marrocchesi, P. S.; Messineo, A.; Palla, F.; Rizzo, G.; Sanguinetti, G.; Steinberger, J.; Tenchini, R.; Tonelli, G.; Triggiani, G.; Vannini, C.; Venturi, A.; Verdini, P. G.; Walsh, J.; Carter, J. M.; Green, M. G.; March, P. V.; Mir, Ll. M.; Medcalf, T.; Quazi, I. S.; Strong, J. A.; West, L. R.; Botterill, D. R.; Clifft, R. W.; Edgecock, T. R.; Edwards, M.; Fisher, S. M.; Jones, T. J.; Norton, P. R.; Salmon, D. P.; Thompson, J. C.; Bloch-Devaux, B.; Colas, P.; Kozanecki, W.; Lemaire, M. C.; Locci, E.; Loucatos, S.; Monnier, E.; Perez, P.; Perrier, F.; Rander, J.; Renardy, J.-F.; Roussarie, A.; Schuller, J.-P.; Schwindling, J.; Si Mohand, D.; Vallage, B.; Johnson, R. P.; Litke, A. M.; Taylor, G.; Wear, J.; Ashman, J. G.; Babbage, W.; Booth, C. N.; Buttar, C.; Carney, R. E.; Cartwright, S.; Combley, F.; Hatfield, F.; Reeves, P.; Thompson, L. F.; Barberio, E.; Brandt, S.; Grupen, C.; Mirabito, L.; Schäfer, U.; Ganis, G.; Giannini, G.; Gobbo, B.; Ragusa, F.; Bellantoni, L.; Cinabro, D.; Conway, J. S.; Cowen, D. F.; Feng, Z.; Ferguson, D. P. S.; Gao, Y. S.; Grahl, J.; Harton, J. L.; Jared, R. C.; Leclaire, B. W.; Lishka, C.; Pan, Y. B.; Pater, J. R.; Pusztaszeri, J.-F.; Saadi, Y.; Sharma, V.; Schmitt, M.; Shi, Z. H.; Walsh, A. M.; Weber, F. V.; Whitney, M. H.; Wu, Sau Lan; Wu, X.; Zobernig, G.; Aleph Collaboration
1992-10-01
The decays η → γγ and η′ → ηπ+π- have been observed in hadronic decays of the Z produced at LEP. The fragmentation functions of both the η and η′ have been measured. The measured multiplicities for x > 0.1 are 0.298±0.023±0.021 and 0.068±0.016 for η and η′ respectively. While the fragmentation function for the η is fairly well described by the JETSET Monte Carlo, it is found that the production rate of the η′ is a factor of four less than the corresponding prediction.
Combined Results on b-Hadron Production Rates and Decay Properties
Su, Dong
2002-09-11
Combined results on b-hadron lifetimes, b-hadron production rates, B_d^0–B̄_d^0 and B_s^0–B̄_s^0 oscillations, the decay width difference between the mass eigenstates of the B_s^0–B̄_s^0 system, the average number of c and c̄ quarks in b-hadron decays, and searches for CP violation in the B_d^0–B̄_d^0 system are presented. They have been obtained from published and preliminary measurements available in Summer 2000 from the ALEPH, CDF, DELPHI, L3, OPAL and SLD Collaborations. These results have been used to determine the parameters of the CKM unitarity triangle.
Optimal Decay Rate of the Compressible Navier-Stokes-Poisson System in ℝ³
NASA Astrophysics Data System (ADS)
Li, Hai-Liang; Matsumura, Akitaka; Zhang, Guojing
2010-05-01
The compressible Navier-Stokes-Poisson (NSP) system is considered in ℝ³ in the present paper, and the influence of the electric field of the internal electrostatic potential force governed by the self-consistent Poisson equation on the qualitative behaviors of solutions is analyzed. It is observed that the rotating effect of the electric field affects the dispersion of fluids and reduces the time decay rate of solutions. Indeed, we show that the density of the NSP system converges to its equilibrium state at the same L²-rate (1+t)^(-3/4) or L∞-rate (1+t)^(-3/2), respectively, as the compressible Navier-Stokes system, but the momentum of the NSP system decays at the L²-rate (1+t)^(-1/4) or L∞-rate (1+t)^(-1), respectively, which is slower than the L²-rate (1+t)^(-3/4) or L∞-rate (1+t)^(-3/2) for the compressible Navier-Stokes system [Duan et al., in Math Models Methods Appl Sci 17:737-758, 2007; Liu and Wang, in Comm Math Phys 196:145-173, 1998; Matsumura and Nishida, in J Math Kyoto Univ 20:67-104, 1980] and the L∞-rate (1+t)^(-p) with p ∈ (1, 3/2) for the irrotational Euler-Poisson system [Guo, in Comm Math Phys 195:249-265, 1998]. These convergence rates are shown to be optimal for the compressible NSP system.
Cline, Lauren C; Zak, Donald R
2015-10-01
Priority effects are an important ecological force shaping biotic communities and ecosystem processes, in which the establishment of early colonists alters the colonization success of later-arriving organisms via competitive exclusion and habitat modification. However, we do not understand which biotic and abiotic conditions lead to strong priority effects and lasting historical contingencies. Using saprotrophic fungi in a model leaf decomposition system, we investigated whether compositional and functional consequences of initial colonization were dependent on initial colonizer traits, resource availability or a combination thereof. To test these ideas, we factorially manipulated leaf litter biochemistry and initial fungal colonist identity, quantifying subsequent community composition, using neutral genetic markers, and community functional characteristics, including enzyme potential and leaf decay rates. During the first 3 months, initial colonist respiration rate and physiological capacity to degrade plant detritus were significant determinants of fungal community composition and leaf decay, indicating that rapid growth and lignolytic potential of early colonists contributed to altered trajectories of community assembly. Further, initial colonization on oak leaves generated increasingly divergent trajectories of fungal community composition and enzyme potential, indicating stronger initial colonizer effects on energy-poor substrates. Together, these observations provide evidence that initial colonization effects, and subsequent consequences on litter decay, are dependent upon substrate biochemistry and physiological traits within a regional species pool. Because microbial decay of plant detritus is important to global C storage, our results demonstrate that understanding the mechanisms by which initial conditions alter priority effects during community assembly may be key to understanding the drivers of ecosystem-level processes. PMID:26331892
Aftershock Characteristics as a Means of Discriminating Explosions from Earthquakes
Ford, S R; Walter, W R
2009-05-20
The behavior of aftershock sequences around the Nevada Test Site in the southern Great Basin is characterized as a potential discriminant between explosions and earthquakes. The aftershock model designed by Reasenberg and Jones (1989, 1994) allows for a probabilistic statement of earthquake-like aftershock behavior at any time after the mainshock. We use this model to define two types of aftershock discriminants. The first defines M_X, the minimum magnitude of an aftershock expected within a given duration after the mainshock with probability X. Of the 67 earthquakes with M > 4 in the study region, 63 produce an aftershock greater than M_99 within the first seven days after the mainshock. This contrasts with only six of the 93 explosions with M > 4 that produce an aftershock greater than M_99 over the same period. If the aftershock magnitude threshold is lowered and the M_90 criterion is used, then no explosions produce an aftershock greater than M_90 for durations that end more than 17 days after the mainshock. The second discriminant defines N_X, the minimum cumulative number of aftershocks expected for a given time after the mainshock with probability X. Similar to the aftershock magnitude discriminant, five earthquakes do not produce more aftershocks than N_99 within 7 days after the mainshock, whereas within the same period all but one explosion produce fewer aftershocks than N_99. One more explosion is added if the duration is shortened to two days after the mainshock. The cumulative-number aftershock discriminant is more reliable, especially at short durations, but requires a low magnitude of completeness for the given earthquake catalog. These results at NTS are quite promising and should be evaluated at other nuclear test sites to understand the effects of differences in geologic setting and nuclear testing practices on its performance.
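As a rough sketch of how such an M_X threshold follows from the Reasenberg-Jones model (aftershock rate λ(t, M) = 10^(a + b(Mm − M)) (t + c)^(−p), with Poisson probability P = 1 − exp(−N) of at least one event): the parameter values below are the commonly quoted generic California values, used purely for illustration, and the function names are mine, not the paper's.

```python
import math

def omori_integral(t1, t2, c, p):
    """Integral of (t + c)^(-p) from t1 to t2 (days); handles p == 1 separately."""
    if abs(p - 1.0) < 1e-12:
        return math.log((t2 + c) / (t1 + c))
    return ((t2 + c) ** (1 - p) - (t1 + c) ** (1 - p)) / (1 - p)

def expected_aftershocks(mag, mainshock_mag, t1, t2, a=-1.67, b=0.91, c=0.05, p=1.08):
    """Expected number of aftershocks with magnitude >= mag in [t1, t2] days."""
    return 10 ** (a + b * (mainshock_mag - mag)) * omori_integral(t1, t2, c, p)

def m_x(prob, mainshock_mag, t1, t2, a=-1.67, b=0.91, c=0.05, p=1.08):
    """Minimum magnitude exceeded with probability `prob` during [t1, t2]:
    solve 1 - exp(-N(M)) = prob for M."""
    n_star = -math.log(1.0 - prob)          # required expected count
    integral = omori_integral(t1, t2, c, p)
    return mainshock_mag + (a - math.log10(n_star / integral)) / b
```

Note that a higher confidence level implies a lower magnitude threshold, so M_99 < M_90 for the same window.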
Optimal decay rate of the non-isentropic compressible Navier-Stokes-Poisson system in ℝ³
NASA Astrophysics Data System (ADS)
Zhang, Guojing; Li, Hai-Liang; Zhu, Changjiang
In this paper, the compressible non-isentropic Navier-Stokes-Poisson (NSP) system is considered in ℝ³ and the influences of the internal electric field on the qualitative behaviors of solutions are analyzed. We observe that the electric field leads to rotating phenomena in charge transport and reduces the speed of fluid motion, but it does not influence the transport of charge density and the heat diffusion. Indeed, we show that both the density and the temperature of the NSP system converge to their equilibrium state at the same rate (1+t)^(-3/4) as the non-isentropic compressible Navier-Stokes system, but the momentum decays at the rate (1+t)^(-1/4), which is slower than the rate (1+t)^(-3/4) for the pure compressible Navier-Stokes system. These convergence rates are also shown to be optimal for the non-isentropic compressible NSP system.
NASA Astrophysics Data System (ADS)
Metaxas, Dimitrios
2007-02-01
I show that an application of renormalization group arguments may lead to significant corrections to the vacuum decay rate for phase transitions in flat and curved space-time. It can also give some information regarding its dependence on the parameters of the theory, including the cosmological constant in the case of decay in curved space-time.
Kroll, K.; Cochran, Elizabeth S.; Richards-Dinger, K.; Sumy, Danielle
2013-01-01
We detect and precisely locate over 9500 aftershocks that occurred in the Yuha Desert region during a 2 month period following the 4 April 2010 Mw 7.2 El Mayor-Cucapah (EMC) earthquake. Events are relocated using a series of absolute and relative relocation procedures that include Hypoinverse, Velest, and hypoDD. Location errors are reduced to ~40 m horizontally and ~120 m vertically.Aftershock locations reveal a complex pattern of faulting with en echelon fault segments trending toward the northwest, approximately parallel to the North American-Pacific plate boundary and en echelon, conjugate features trending to the northeast. The relocated seismicity is highly correlated with published surface mapping of faults that experienced triggered surface slip in response to the EMC main shock. Aftershocks occurred between 2 km and 11 km depths, consistent with previous studies of seismogenic thickness in the region. Three-dimensional analysis reveals individual and intersecting fault planes that are limited in their along-strike length. These fault planes remain distinct structures at depth, indicative of conjugate faulting, and do not appear to coalesce onto a throughgoing fault segment. We observe a complex spatiotemporal migration of aftershocks, with seismicity that jumps between individual fault segments that are active for only a few days to weeks. Aftershock rates are roughly consistent with the expected earthquake production rates of Dieterich (1994). The conjugate pattern of faulting and nonuniform aftershock migration patterns suggest that strain in the Yuha Desert is being accommodated in a complex manner.
Toda, S.; Stein, R.S.; Reasenberg, P.A.; Dieterich, J.H.; Yoshida, A.
1998-01-01
The Kobe earthquake struck at the edge of the densely populated Osaka-Kyoto corridor in southwest Japan. We investigate how the earthquake transferred stress to nearby faults, altering their proximity to failure and thus changing earthquake probabilities. We find that relative to the pre-Kobe seismicity, Kobe aftershocks were concentrated in regions of calculated Coulomb stress increase and less common in regions of stress decrease. We quantify this relationship by forming the spatial correlation between the seismicity rate change and the Coulomb stress change. The correlation is significant for stress changes greater than 0.2-1.0 bars (0.02-0.1 MPa), and the nonlinear dependence of seismicity rate change on stress change is compatible with a state- and rate-dependent formulation for earthquake occurrence. We extend this analysis to future mainshocks by resolving the stress changes on major faults within 100 km of Kobe and calculating the change in probability caused by these stress changes. Transient effects of the stress changes are incorporated by the state-dependent constitutive relation, which amplifies the permanent stress changes during the aftershock period. Earthquake probability framed in this manner is highly time-dependent, much more so than is assumed in current practice. Because the probabilities depend on several poorly known parameters of the major faults, we estimate uncertainties of the probabilities by Monte Carlo simulation. This enables us to include uncertainties on the elapsed time since the last earthquake, the repeat time and its variability, and the period of aftershock decay. We estimate that a calculated 3-bar (0.3-MPa) stress increase on the eastern section of the Arima-Takatsuki Tectonic Line (ATTL) near Kyoto causes a fivefold increase in the 30-year probability of a subsequent large earthquake near Kyoto; a 2-bar (0.2-MPa) stress decrease on the western section of the ATTL results in a reduction in probability by a factor of 140 to
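A minimal sketch of the rate- and state-dependent response to a Coulomb stress step, in the spirit of the Dieterich (1994) constitutive relation referenced above: the rate first jumps by exp(ΔCFS/Aσ) and then relaxes back to the background rate over the aftershock duration. The parameter values (Aσ = 0.4 bar, t_a = 35 yr) are illustrative assumptions, not the paper's calibration.

```python
import math

def rate_after_stress_step(t, dcfs, a_sigma=0.4, t_a=35.0, r_background=1.0):
    """Dieterich (1994) seismicity rate R(t) (relative to background r) at time t
    (years) after a Coulomb stress step dcfs (bars):
    R = r / [1 + (exp(-dcfs/a_sigma) - 1) * exp(-t/t_a)]."""
    gamma = (math.exp(-dcfs / a_sigma) - 1.0) * math.exp(-t / t_a)
    return r_background / (1.0 + gamma)
```

A positive stress step raises the rate immediately after the step, a negative step suppresses it, and both effects decay away over a few aftershock durations.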
NASA Astrophysics Data System (ADS)
Grach, Savely; Bareev, Denis; Gavrilenko, Vladimir; Sergeev, Evgeny
Damping rates of plasma waves with ω ≈ ω_uh (ω is the plasma wave frequency, ω_uh is the upper hybrid frequency) were calculated for frequencies close to and distant from the double resonance where ω_uh ≈ nω_ce (ω_ce is the electron cyclotron frequency; n = 4, 5 are the gyroharmonic numbers). The calculations were performed numerically on the basis of the full plasma wave dispersion relation, restricted by neither the 'long wave limit' nor the 'short wave limit', i.e. fulfillment of the inequalities |Δ| ≫ |k_∥|v_Te and |Δ| ≪ |k_∥|v_Te was not required. Here Δ = ω − nω_ce, v_Te = (T_e/m_e)^(1/2) is the electron thermal velocity, and k_∥ is the projection of the wave vector onto the magnetic field direction. It is shown that the plasma wave damping rates do not differ noticeably from those calculated under the long wave and short wave limits. The results obtained are compared with data on the relaxation of the stimulated electromagnetic emission (SEE) after the pump wave is turned off, which demonstrate an essential decrease of the relaxation time near the 4th electron gyroharmonic, insofar as the SEE relaxation is attributed to the damping of the plasma waves responsible for the SEE generation. The comparison allows one to determine characteristics of the plasma waves contributing most to the SEE generation, such as wave numbers, the angles between the wave vectors and the geomagnetic field, and the altitude region of the SEE source. The dependence of the decay rate on Δ can also be applied to the interpretation of the SEE spectral shape at different pump frequencies near gyroharmonics. The work is supported by RFBR grants 10-02-00642, 09-02-01150 and the Federal Special-Purpose Program "Scientific and pedagogical personnel of innovative Russia".
The impact of sea-level rise on organic matter decay rates in Chesapeake Bay brackish tidal marshes
Kirwan, M.L.; Langley, J.A.; Guntenspergen, Glenn R.; Megonigal, J.P.
2013-01-01
The balance between organic matter production and decay determines how fast coastal wetlands accumulate soil organic matter. Despite the importance of soil organic matter accumulation rates in influencing marsh elevation and resistance to sea-level rise, relatively little is known about how decomposition rates will respond to sea-level rise. Here, we estimate the sensitivity of decomposition to flooding by measuring rates of decay in 87 bags filled with milled sedge peat, including soil organic matter, roots and rhizomes. Experiments were located in field-based mesocosms along 3 mesohaline tributaries of the Chesapeake Bay. Mesocosm elevations were manipulated to influence the duration of tidal inundation. Although we found no significant influence of inundation on decay rate when bags from all study sites were analyzed together, decay rates at two of the sites increased with greater flooding. These findings suggest that flooding may enhance organic matter decay rates even in water-logged soils, but that the overall influence of flooding is minor. Our experiments suggest that sea-level rise will not accelerate rates of peat accumulation by slowing the rate of soil organic matter decay. Consequently, marshes will require enhanced organic matter productivity or mineral sediment deposition to survive accelerating sea-level rise.
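Decay rates in litterbag experiments like this are conventionally summarized by fitting a single-pool exponential, m(t) = m0·e^(−kt). A sketch of that fit via log-linear least squares; the function names and synthetic data are my own, not the study's.

```python
import math

def fit_decay_constant(times, masses):
    """Least-squares fit of ln(m) = ln(m0) - k*t to litterbag mass-loss data.
    Returns (k, m0), assuming single-pool exponential decay."""
    logs = [math.log(m) for m in masses]
    n = len(times)
    t_mean = sum(times) / n
    y_mean = sum(logs) / n
    sxx = sum((t - t_mean) ** 2 for t in times)
    sxy = sum((t - t_mean) * (y - y_mean) for t, y in zip(times, logs))
    slope = sxy / sxx            # slope = -k
    return -slope, math.exp(y_mean - slope * t_mean)
```

Comparing the fitted k across inundation treatments is one way to express the flooding sensitivity the abstract describes.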
The impact of sea-level rise on organic matter decay rates in Chesapeake Bay brackish tidal marshes
NASA Astrophysics Data System (ADS)
Kirwan, M. L.; Langley, J. A.; Guntenspergen, G. R.; Megonigal, J. P.
2013-03-01
The balance between organic matter production and decay determines how fast coastal wetlands accumulate soil organic matter. Despite the importance of soil organic matter accumulation rates in influencing marsh elevation and resistance to sea-level rise, relatively little is known about how decomposition rates will respond to sea-level rise. Here, we estimate the sensitivity of decomposition to flooding by measuring rates of decay in 87 bags filled with milled sedge peat, including soil organic matter, roots and rhizomes. Experiments were located in field-based mesocosms along 3 mesohaline tributaries of the Chesapeake Bay. Mesocosm elevations were manipulated to influence the duration of tidal inundation. Although we found no significant influence of inundation on decay rate when bags from all study sites were analyzed together, decay rates at two of the sites increased with greater flooding. These findings suggest that flooding may enhance organic matter decay rates even in water-logged soils, but that the overall influence of flooding is minor. Our experiments suggest that sea-level rise will not accelerate rates of peat accumulation by slowing the rate of soil organic matter decay. Consequently, marshes will require enhanced organic matter productivity or mineral sediment deposition to survive accelerating sea-level rise.
Schaap, L.; Leaderer, B.P.; Renes, S.; Verstraelen, H.; Tosun, T.; Dietz, R.N.
1985-01-01
The passive perfluorocarbon tracer (PFT) technique for determining air infiltration rates into homes and buildings was evaluated in an environmental chamber. The impacts of sampler orientation at a constant ventilation rate and a constant temperature, of variable ventilation rate at a constant temperature, and of variable temperature at a constant ventilation rate were evaluated. The average relative standard deviation of 16 paired samplers deployed in experiment 1 was ±1.9% ± 1.0%, indicating good reproducibility of the passive sampling rate and sample analysis. No impact of sampler orientation with respect to the low air velocities (<0.2 m/s) present in houses is expected. The passive samplers accurately measured the average tracer concentration as compared with calculations based on the known source strength (CO₂ decays) and the measured ventilation rate under conditions of a 3-fold variation in ventilation rates (experiment 2). Temperature cycling differences of 8 °C (experiment 3) did not produce a bias in the PFT-determined ventilation rate. The PFT technique is applicable to the expected range of conditions in homes and buildings. 3 refs., 1 fig., 1 tab.
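A sketch of the two pieces of arithmetic behind such measurements, under idealized assumptions (well-mixed air, steady-state tracer concentration for the PFT case, pure exponential decay for the CO₂ case); real PFT work involves careful source and sampler calibration not shown here.

```python
import math

def air_changes_from_decay(c0, c1, hours):
    """Air-change rate (1/h) from tracer-gas decay: C(t) = C0 * exp(-lambda*t),
    so lambda = ln(C0/C1) / dt. Concentrations above background."""
    return math.log(c0 / c1) / hours

def ventilation_from_pft(source_rate, concentration):
    """Steady-state ventilation flow Q = S / C for a constant-emission PFT source,
    e.g. source_rate in nL/h and concentration in nL/m^3 gives Q in m^3/h."""
    return source_rate / concentration
```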
NASA Technical Reports Server (NTRS)
Cole, T. W.; Frisbee, R. H.; Yavrouian, A. H.
1987-01-01
The risks posed to NASA's Galileo spacecraft by oxidizer flow decay during its extended mission to Jupiter are discussed. The Galileo spacecraft will use a nitrogen tetroxide (NTO)/monomethyl hydrazine bipropellant system with one large engine thrust-rated at a nominal 400 N, and 12 smaller engines each thrust-rated at a nominal 10 N. These smaller thrusters, because of their small valve inlet filters and small injector ports, are especially vulnerable to clogging by iron nitrate precipitates formed by NTO-wetted stainless steel components. To quantify the corrosion rates and solubility levels which will be seen during the Galileo mission, corrosion and solubility testing experiments were performed with simulated Galileo materials, propellants, and environments. The results show the potential benefits of propellant sieving in terms of iron and water impurity reduction.
Derivative expansion and gauge independence of the false vacuum decay rate in various gauges
NASA Astrophysics Data System (ADS)
Metaxas, D.
2001-04-01
In theories with radiative symmetry breaking, the calculation of the false vacuum decay rate requires the inclusion of higher-order terms in the derivative expansion of the effective action. I show here that, in the case of covariant gauges, the presence of infrared singularities forbids the consistent calculation by keeping the lowest-order terms. The situation is remedied, however, in the case of Rξ gauges. Using the Nielsen identities I show that the final result is gauge independent for generic values of the gauge parameter v that are not anomalously small.
De Conti, C.; Barbero, C.; Galeão, A. P.; Krmpotić, F.
2014-11-11
In this work we compute the one-nucleon-induced nonmesonic hypernuclear decay rates of ^5_ΛHe, ^12_ΛC and ^13_ΛC using a formalism based on the independent particle shell model in terms of laboratory coordinates. To ascertain the correctness and precision of the method, these results are compared with those obtained using a formalism in terms of center-of-mass coordinates, which has been previously reported in the literature. The formalism in terms of laboratory coordinates will be useful in the shell-model approach to two-nucleon-induced transitions.
An Examination of Sunspot Number Rates of Growth and Decay in Relation to the Sunspot Cycle
NASA Technical Reports Server (NTRS)
Wilson, Robert M.; Hathaway, David H.
2006-01-01
On the basis of annual sunspot number averages, sunspot number rates of growth and decay are examined relative to both minimum and maximum amplitudes and the time of their occurrences using cycles 12 through present, the most reliably determined sunspot cycles. Indeed, strong correlations are found for predicting the minimum and maximum amplitudes and the time of their occurrences years in advance. As applied to predicting sunspot minimum for cycle 24, the next cycle, its minimum appears likely to occur in 2006, especially if it is a robust cycle similar in nature to cycles 17-23.
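The analysis described is correlative: year-over-year rates of change of annual sunspot-number averages are compared against subsequent cycle amplitudes and timings. A minimal sketch of the two ingredients; the helper names and toy series are illustrative only, not the authors' procedure.

```python
def growth_rates(annual_ssn):
    """Year-over-year differences of annual sunspot-number averages:
    positive values are growth rates, negative values decay rates (SSN/yr)."""
    return [b - a for a, b in zip(annual_ssn, annual_ssn[1:])]

def pearson_r(xs, ys):
    """Pearson correlation, e.g. between early-cycle growth rates and the
    following maximum amplitudes."""
    n = len(xs)
    xm, ym = sum(xs) / n, sum(ys) / n
    sxx = sum((x - xm) ** 2 for x in xs) ** 0.5
    syy = sum((y - ym) ** 2 for y in ys) ** 0.5
    sxy = sum((x - xm) * (y - ym) for x, y in zip(xs, ys))
    return sxy / (sxx * syy)
```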
NASA Astrophysics Data System (ADS)
Seeber, L.; Armbruster, J. G.
1987-03-01
A systematic search of contemporary newspapers in South Carolina, North Carolina, Georgia and eastern Tennessee during the 1886-1889 (inclusive) aftershock sequence of the August 31, 1886 earthquake near Charleston, South Carolina has provided more than 3000 intensity reports for 522 earthquakes as compared to 144 previously known earthquakes for the same period. Of these 144 events, 138 were felt in Charleston/Summerville and had been assigned epicenters in that area. In contrast the new data provide 112 well-constrained macroseismic epicenters. The 1886-1889 seismicity is characterized by a linear relation between log frequency and magnitude with a slope b ≈ 1, a temporal decay of earthquake frequency proportional to t⁻¹, and a low level of seismicity prior to the main shock. These are frequently observed characteristics of aftershock sequences. By 1889, the level of seismicity had decreased more than 2 orders of magnitude, reaching approximately the current level in the same area. The 1886-1889 epicenters delineate a large aftershock zone that extends northwest about 250 km across Appalachian strike from the coast into the Piedmont and at least 100 km along strike near the Fall Line of South Carolina and Georgia. An abrupt change in stress and/or effective strength is required over this zone. If this change can only occur in the near field of a single fault dislocation, this fault must be larger horizontally than the thickness of the seismogenic zone by an order of magnitude and must be shallow dipping. The correlation between the area of intensity VIII in the main shock with the area of large aftershocks is consistent with this hypothesis. The lack of a major fault affecting the post-Upper Jurassic onlap sediments also favors a shallow dipping active fault, possibly a Paleozoic-Mesozoic southeasterly dipping fault or detachment that may outcrop northwest of the aftershock zone. The 1886-1889 aftershocks occupy the same area as the South Carolina
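The two quoted sequence characteristics (Gutenberg-Richter slope b ≈ 1 and temporal decay ∝ t⁻¹) can be estimated with standard tools: Aki's maximum-likelihood b-value (with Utsu's binning correction) and the modified Omori law. A sketch with illustrative parameter values:

```python
import math

def b_value_mle(mags, m_c, dm=0.1):
    """Aki (1965) maximum-likelihood b-value for magnitudes >= m_c,
    with Utsu's binning correction of dm/2."""
    m = [x for x in mags if x >= m_c]
    mean = sum(m) / len(m)
    return math.log10(math.e) / (mean - (m_c - dm / 2.0))

def omori_rate(t, k=100.0, c=0.1, p=1.0):
    """Modified Omori law n(t) = K / (t + c)^p (aftershocks per day);
    p = 1 corresponds to the t^-1 decay reported for the 1886 sequence."""
    return k / (t + c) ** p
```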
Analysis of decay dose rates and dose management in the National Ignition Facility.
Khater, Hesham; Brereton, Sandra; Dauffy, Lucile; Hall, Jim; Hansen, Luisa; Kim, Soon; Kohut, Tom; Pohl, Bertram; Sitaraman, Shiva; Verbeke, Jerome; Young, Mitchell
2013-06-01
A detailed model of the Target Bay (TB) at the National Ignition Facility (NIF) has been developed to estimate the post-shot radiation environment inside the facility. The model includes the large number of structures and diagnostic instruments present inside the TB. These structures and instruments are activated by neutrons generated during a shot, and the resultant gamma dose rates are estimated at various decay times following the shot. A set of computational tools was developed to help in estimating potential radiation exposure to TB workers. The results presented in this paper describe the expected radiation environment inside the TB following a low-yield DT shot of 10¹⁶ neutrons. General environment dose rates drop below 30 μSv h⁻¹ within 3 h following a shot, with higher dose rates observed in the vicinity (~30 cm) of a few components. The dose rates drop by more than a factor of two at 1 d following the shot. Dose rate maps of the different TB levels were generated to aid in estimating worker stay-out times following a shot before entry is permitted into the TB. Primary components, including the Target Chamber and diagnostic and beam line components, are constructed of aluminum. Near-term TB accessibility is driven by the decay of the aluminum activation product, ²⁴Na. Worker dose is managed using electronic dosimeters (EDs) self-issued at kiosks using commercial dose management software. The software programs the ED dose and dose rate alarms based on the Radiological Work Permit (RWP) and tracks dose by individual, task, and work group. PMID:23629063
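Since near-term access is governed by a single nuclide, the stay-out arithmetic reduces to single-exponential decay of ²⁴Na (half-life ≈ 15.0 h). A sketch with illustrative dose-rate values; actual entry decisions would use the measured dose-rate maps, not this idealization.

```python
import math

T_HALF_NA24 = 14.997  # hours, half-life of the activation product 24Na

def dose_rate(d0, hours):
    """Dose rate after `hours`, assuming decay dominated by a single nuclide."""
    return d0 * 0.5 ** (hours / T_HALF_NA24)

def stay_out_time(d0, limit):
    """Hours until the dose rate falls below `limit` (same units as d0)."""
    return T_HALF_NA24 * math.log2(d0 / limit)
```

Consistent with the abstract, one day is about 1.6 half-lives of ²⁴Na, so a ²⁴Na-dominated dose rate falls by more than a factor of two at 1 d.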
Fritts, Karen R.; Kilb, Debi
2009-01-01
It has been traditionally held that aftershocks occur within one to two fault lengths of the mainshock. Here we demonstrate that this perception has been shaped by the sensitivity of seismic networks. The 31 October 2001 Mw 5.0 and 12 June 2005 Mw 5.2 Anza mainshocks in southern California occurred in the middle of the densely instrumented ANZA seismic network and thus were unusually well recorded. For the June 2005 event, aftershocks as small as M 0.0 could be observed stretching for at least 50 km along the San Jacinto fault even though the mainshock fault was only ∼4.5 km long. It was hypothesized that an observed aseismic slipping patch produced a spatially extended aftershock-triggering source, presumably slowing the decay of aftershock density with distance and leading to a broader aftershock zone. We find, however, the decay of aftershock density with distance for both Anza sequences to be similar to that observed elsewhere in California. This indicates there is no need for an additional triggering mechanism and suggests that given widespread dense instrumentation, aftershock sequences would routinely have footprints much larger than currently expected. Despite the large 2005 aftershock zone, we find that the probability that the 2005 Anza mainshock triggered the M 4.9 Yucaipa mainshock, which occurred 4.2 days later and 72 km away, to be only 14%±1%. This probability is a strong function of the time delay; had the earthquakes been separated by only an hour, the probability of triggering would have been 89%.
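The decay of aftershock density with distance discussed here is commonly summarized by a power-law exponent, ρ(r) ∝ r^(−γ). A sketch of estimating γ by log-log least squares (an illustrative method, not necessarily the authors' exact procedure):

```python
import math

def density_decay_exponent(distances, counts):
    """Log-log least-squares slope of aftershock density vs distance,
    rho(r) ~ r^(-gamma); returns gamma. Inputs must be positive."""
    xs = [math.log10(r) for r in distances]
    ys = [math.log10(n) for n in counts]
    n = len(xs)
    xm = sum(xs) / n
    ym = sum(ys) / n
    sxx = sum((x - xm) ** 2 for x in xs)
    sxy = sum((x - xm) * (y - ym) for x, y in zip(xs, ys))
    return -sxy / sxx
```

Comparing the exponent fitted over the well-recorded Anza zone with values from less densely instrumented regions is one way to phrase the paper's "no additional triggering mechanism" conclusion quantitatively.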
The aftershock signature of supershear earthquakes.
Bouchon, Michel; Karabulut, Hayrullah
2008-06-01
Recent studies show that earthquake faults may rupture at speeds exceeding the shear wave velocity of rocks. This supershear rupture produces in the ground a seismic shock wave similar to the sonic boom produced by a supersonic airplane. This shock wave may increase the destruction caused by the earthquake. We report that supershear earthquakes are characterized by a specific pattern of aftershocks: The fault plane itself is remarkably quiet whereas aftershocks cluster off the fault, on secondary structures that are activated by the supershear rupture. The post-earthquake quiescence of the fault shows that friction is relatively uniform over supershear segments, whereas the activation of off-fault structures is explained by the shock wave radiation, which produces high stresses over a wide zone surrounding the fault. PMID:18535239
Aftershocks in a frictional earthquake model.
Braun, O M; Tosatti, Erio
2014-09-01
Inspired by spring-block models, we elaborate a "minimal" physical model of earthquakes which reproduces two main empirical seismological laws, the Gutenberg-Richter law and the Omori aftershock law. Our point is to demonstrate that the simultaneous incorporation of aging of contacts in the sliding interface and of elasticity of the sliding plates constitutes the minimal ingredients to account for both laws within the same frictional model. PMID:25314453
Triggering of earthquake aftershocks by dynamic stresses.
Kilb, D; Gomberg, J; Bodin, P
2000-11-30
It is thought that small 'static' stress changes due to permanent fault displacement can alter the likelihood of, or trigger, earthquakes on nearby faults. Many studies of triggering in the near-field, particularly of aftershocks, rely on these static changes as the triggering agent and consider them only in terms of equivalent changes in the applied load on the fault. Here we report a comparison of the aftershock pattern of the moment magnitude Mw = 7.3 Landers earthquake, not only with static stress changes but also with transient, oscillatory stress changes transmitted as seismic waves (that is, 'dynamic' stresses). Dynamic stresses do not permanently change the applied load and thus can trigger earthquakes only by altering the mechanical state or properties of the fault zone. These dynamically weakened faults may fail after the seismic waves have passed by, and might even cause earthquakes that would not otherwise have occurred. We find similar asymmetries in the aftershock and dynamic stress patterns, the latter being due to rupture propagation, whereas the static stress changes lack this asymmetry. Previous studies have shown that dynamic stresses can promote failure at remote distances, but here we show that they can also do so nearby. PMID:11117741
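A common back-of-the-envelope estimate of the transient stress carried by a seismic wave uses the plane-wave relation σ ≈ G·v/Vs, where v is peak ground velocity. The modulus and shear-wave speed below are generic crustal values, assumed for illustration only.

```python
def dynamic_stress(pgv, shear_modulus=3.3e10, vs=3500.0):
    """Peak dynamic stress (Pa) from peak ground velocity pgv (m/s),
    via the plane-wave approximation sigma ~ G * v / Vs.
    Defaults: G = 33 GPa, Vs = 3.5 km/s (generic crustal values)."""
    return shear_modulus * pgv / vs
```

For example, a PGV of 0.1 m/s corresponds to roughly 1 MPa (10 bars) of transient stress, comparable to the static stress changes invoked in near-field triggering studies.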
Effects of Aftershock Declustering in Risk Modeling: Case Study of a Subduction Sequence in Mexico
NASA Astrophysics Data System (ADS)
Kane, D. L.; Nyst, M.
2014-12-01
Earthquake hazard and risk models often assume that earthquake rates can be represented by a stationary Poisson process, and that aftershocks observed in historical seismicity catalogs represent a deviation from stationarity that must be corrected before earthquake rates are estimated. Algorithms for classifying individual earthquakes as independent mainshocks or as aftershocks vary widely, and analysis of a single catalog can produce considerably different earthquake rates depending on the declustering method implemented. As these rates are propagated through hazard and risk models, the modeled results will vary due to the assumptions implied by these choices. In particular, the removal of large aftershocks following a mainshock may lead to an underestimation of the rate of damaging earthquakes, and potential damage due to a large aftershock may be excluded from the model. We present a case study based on the 1907-1911 sequence of nine 6.9 ≤ Mw ≤ 7.9 earthquakes along the Cocos-North American plate subduction boundary in Mexico in order to illustrate the variability in risk under various declustering approaches. Previous studies have suggested that subduction zone earthquakes in Mexico tend to occur in clusters, and this particular sequence includes events that would be labeled as aftershocks in some declustering approaches yet are large enough to produce significant damage. We model the ground motion for each event, determine damage ratios using modern exposure data, and then compare the variability in the modeled damage from using the full catalog or one of several declustered catalogs containing only "independent" events. We also consider the effects of progressive damage caused by each subsequent event and how this might increase or decrease the total losses expected from this sequence.
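As a sketch of the windowing approach at issue, here is a minimal Gardner-Knopoff-style declustering pass using the commonly quoted (1974) space-time window fits; real implementations differ in their window tables, tie-breaking, and handling of secondary clustering, so treat this as an illustration of the mechanism, not a production algorithm.

```python
def gk_windows(mag):
    """Commonly quoted Gardner & Knopoff (1974) window fits:
    distance (km) and time (days) as functions of mainshock magnitude."""
    dist = 10 ** (0.1238 * mag + 0.983)
    time = 10 ** (0.032 * mag + 2.7389) if mag >= 6.5 else 10 ** (0.5409 * mag - 0.547)
    return dist, time

def decluster(events):
    """events: list of (t_days, x_km, y_km, mag). Processes events largest-first;
    smaller events inside a larger event's space-time window are flagged as
    aftershocks. Returns the retained 'independent' events."""
    events = sorted(events, key=lambda e: -e[3])      # largest magnitude first
    main, removed = [], [False] * len(events)
    for i, (t, x, y, m) in enumerate(events):
        if removed[i]:
            continue
        main.append((t, x, y, m))
        d_w, t_w = gk_windows(m)
        for j in range(i + 1, len(events)):
            tj, xj, yj, mj = events[j]
            if abs(tj - t) <= t_w and ((xj - x) ** 2 + (yj - y) ** 2) ** 0.5 <= d_w:
                removed[j] = True
    return main
```

Running such a pass over a sequence of large, closely spaced subduction events is exactly where the paper's concern arises: damaging Mw ~7 events can fall inside a larger neighbor's window and vanish from the "independent" catalog.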
Relativistic two-photon decay rates with the Lagrange-mesh method
NASA Astrophysics Data System (ADS)
Filippin, Livio; Godefroid, Michel; Baye, Daniel
2016-01-01
Relativistic two-photon decay rates of the 2s1/2 and 2p1/2 states towards the 1s1/2 ground state of hydrogenic atoms are calculated by using numerically exact energies and wave functions obtained from the Dirac equation with the Lagrange-mesh method. This approach is an approximate variational method taking the form of equations on a grid because of the use of a Gauss quadrature approximation. Highly accurate values are obtained by a simple calculation involving different meshes for the initial, final, and intermediate wave functions and for the calculation of matrix elements. The accuracy of the results with a Coulomb potential is improved by several orders of magnitude in comparison with benchmark values from the literature. The general requirement of gauge invariance is also successfully tested, down to rounding errors. The method provides high accuracies for two-photon decay rates of a particle in other potentials and is applied to a hydrogen atom embedded in a Debye plasma simulated by a Yukawa potential.
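The "equations on a grid" structure comes from approximating integrals with a Gauss quadrature. As a toy illustration of why such quadratures can be numerically exact (this is a plain three-point Gauss-Legendre rule, not the Lagrange mesh actually used in the paper):

```python
import math

# Three-point Gauss-Legendre rule on [-1, 1]: exact for all polynomials of
# degree <= 2*3 - 1 = 5, using only three function evaluations.
NODES = (-math.sqrt(3.0 / 5.0), 0.0, math.sqrt(3.0 / 5.0))
WEIGHTS = (5.0 / 9.0, 8.0 / 9.0, 5.0 / 9.0)

def gauss3(f):
    """Approximate the integral of f over [-1, 1]."""
    return sum(w * f(x) for w, x in zip(WEIGHTS, NODES))

# the integral of x^4 over [-1, 1] is 2/5; the rule reproduces it to rounding error
```

A Lagrange-mesh calculation builds its basis functions on quadrature nodes of this kind, which is why matrix elements reduce to simple sums over grid points.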
Sensitivity of β-decay rates to the radial dependence of the nucleon effective mass
NASA Astrophysics Data System (ADS)
Severyukhin, A. P.; Margueron, J.; Borzov, I. N.; Van Giai, N.
2015-03-01
We analyze the sensitivity of β-decay rates in 78Ni and 100,132Sn to a correction term in Skyrme energy-density functionals (EDFs) which modifies the radial shape of the nucleon effective mass. This correction is added on top of several Skyrme parametrizations which are selected from their effective mass properties and predictions about the stability properties of 132Sn. The impact of the correction on high-energy collective modes is shown to be moderate. From the comparison of the effects induced by the surface-peaked effective mass in the three doubly magic nuclei, it is found that 132Sn is largely impacted by the correction, while 78Ni and 100Sn are only moderately affected. We conclude that β-decay rates in these nuclei can be used as a test of different parts of the nuclear EDF: 78Ni and 100Sn are mostly sensitive to the particle-hole interaction through the B(GT) values, while 132Sn is sensitive to the radial shape of the effective mass. Possible improvements of these different parts could therefore be better constrained in the future.
Time Modulation of the β+-Decay Rate of H-Like 140Pr58+ Ions
Ivanov, A. N.; Kryshen, E. L.; Pitschmann, M.; Kienle, P.
2008-10-31
Recent experimental data at GSI on the rates of the number of daughter ions, produced by the nuclear K-shell electron capture (EC) decays of the H-like ions 140Pr58+ and 142Pm60+, suggest that they are modulated in time with periods T_EC ≈ 7 s and amplitudes a_EC ≈ 0.20. Since it is known that these ions are also unstable under nuclear positron (β+) decay, we study a possible time dependence of the nuclear β+-decay rate of the H-like 140Pr58+ ion. We show that a time dependence of the β+-decay rate cannot be observed for the H-like 140Pr58+ ion, nor for any other H-like heavy ion.
A possible mechanism for aftershocks: time-dependent stress relaxation in a slider-block model
NASA Astrophysics Data System (ADS)
Gran, Joseph D.; Rundle, John B.; Turcotte, Donald L.
2012-08-01
We propose a time-dependent slider-block model which incorporates a stress-dependent time-to-failure function for each block. We associate this new time-to-failure mechanism with the property of stress fatigue. We test two failure-time functions, a power law and an exponential. Failure times are assigned to 'damaged' blocks with stress above a damage threshold σW and below a static failure threshold σF. If the stress of a block is below the damage threshold, the failure time is infinite. During the aftershock sequence the loader-plate remains fixed and all aftershocks are triggered by stress transfer from previous events. This differs from standard slider-block models, which initiate each event by moving the loader-plate. We show that the resulting behaviour of the model produces both the Gutenberg-Richter scaling law for event sizes and Omori's scaling law for the rate of aftershocks when we use the power-law failure-time function. The exponential function has limited success in producing Omori's law for the rate of aftershocks. We conclude that the shape of the failure-time function is key to producing Omori's law.
A New Hybrid STEP/Coulomb model for Aftershock Forecasting
NASA Astrophysics Data System (ADS)
Steacy, S.; Jimenez, A.; Gerstenberger, M.
2014-12-01
Aftershock forecasting models tend to fall into two classes - purely statistical approaches based on clustering, b-value, and the Omori-Utsu law; and Coulomb rate-state models which relate the forecast increase in rate to the magnitude of the Coulomb stress change. Recently, hybrid models combining physical and statistical forecasts have begun to be developed, for example by Bach and Hainzl (2012) and Steacy et al. (2013). The latter approach combined Coulomb stress patterns with the STEP (short-term earthquake probability) model by redistributing expected rate from areas with decreased stress to regions where the stress had increased. The chosen 'Coulomb Redistribution Parameter' (CRP) was 0.93, based on California earthquakes, which meant that 93% of the total rate was expected to occur where the stress had increased. The model was tested against the Canterbury sequence and the main result was that the new model performed at least as well as, and often better than, STEP when tested against retrospective data but that STEP was generally better in pseudo-prospective tests that involved data actually available within the first 10 days of each event of interest. The authors suggested that the major reason for this discrepancy was uncertainty in the slip models and, particularly, in the geometries of the faults involved in each complex major event. Here we develop a variant of the STEP/Coulomb model in which the CRP varies based on the percentage of aftershocks that occur in the positively stressed areas during the forecast learning period. We find that this variant significantly outperforms both STEP and the previous hybrid model in almost all cases, even when the input Coulomb model is quite poor. Our results suggest that this approach might be more useful than Coulomb rate-state when the underlying slip model is not well constrained due to the dependence of that method on the magnitude of the Coulomb stress change.
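A sketch of the rate-redistribution idea as described above (hypothetical function and data layout, not the published STEP/Coulomb code): expected rate is moved from negatively stressed cells to positively stressed ones so that a fraction CRP of the total lands in the positive-stress region, preserving the relative spatial pattern within each region.

```python
def redistribute_rate(rates, stress_positive, crp=0.93):
    """Redistribute forecast rates between stress regions.

    rates: per-cell expected aftershock rates (e.g. from a STEP-like model).
    stress_positive: per-cell booleans, True where Coulomb stress increased.
    crp: fraction of the total rate assigned to the positive-stress region.
    """
    total = sum(rates)
    pos = sum(r for r, s in zip(rates, stress_positive) if s)
    neg = total - pos
    out = []
    for r, s in zip(rates, stress_positive):
        if s:
            out.append(r / pos * crp * total)       # scale up positive cells
        else:
            out.append(r / neg * (1.0 - crp) * total)  # scale down negative cells
    return out
```

The adaptive variant described above would replace the fixed crp=0.93 with the observed fraction of learning-period aftershocks falling in positively stressed cells.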
Recent Experiences in Aftershock Hazard Modelling in New Zealand
NASA Astrophysics Data System (ADS)
Gerstenberger, M.; Rhoades, D. A.; McVerry, G.; Christophersen, A.; Bannister, S. C.; Fry, B.; Potter, S.
2014-12-01
The occurrence of several sequences of earthquakes in New Zealand in the last few years has meant that GNS Science has gained significant recent experience in aftershock hazard and forecasting. First was the Canterbury sequence of events, which began in 2010 and included the destructive Christchurch earthquake of February 2011. This sequence is occurring in what was a moderate-to-low hazard region of the National Seismic Hazard Model (NSHM): the model on which the building design standards are based. With the expectation that the sequence would produce a 50-year hazard estimate in exceedance of the existing building standard, we developed a time-dependent model that combined short-term (STEP & ETAS) and longer-term (EEPAS) clustering with time-independent models. This forecast was combined with the NSHM to produce a forecast of the hazard for the next 50 years. This has been used to revise building design standards for the region and has contributed to planning of the rebuilding of Christchurch in multiple aspects. An important contribution to this model comes from the inclusion of EEPAS, which allows for clustering on the scale of decades. EEPAS is based on three empirical regressions that relate the magnitudes, times of occurrence, and locations of major earthquakes to regional precursory scale increases in the magnitude and rate of occurrence of minor earthquakes. A second important contribution comes from the long-term rate to which seismicity is expected to return in 50 years. With little seismicity in the region in historical times, a controlling factor in the rate is whether or not it is based on a declustered catalog. This epistemic uncertainty in the model was allowed for by using forecasts from both declustered and non-declustered catalogs. With two additional moderate sequences in the capital region of New Zealand in the last year, we have continued to refine our forecasting techniques, including the use of potential scenarios based on the aftershock
Vacuum stability and Higgs diphoton decay rate in the Zee-Babu model
NASA Astrophysics Data System (ADS)
Chao, Wei; Zhang, Jian-Hui; Zhang, Yongchao
2013-06-01
Although recent Higgs data from ATLAS and CMS are compatible with a Standard Model (SM) signal at the 2σ level, both experiments see indications of an excess in the diphoton decay channel, which points to new physics beyond the SM. Given such a low Higgs mass, m_H ~ 125 GeV, another sign indicating the existence of new physics beyond the SM is the vacuum stability problem, i.e., the SM Higgs quartic coupling may run to negative values at a scale below the Planck scale. In this paper, we study the vacuum stability and the enhanced Higgs diphoton decay rate in the Zee-Babu model, which generates tiny Majorana neutrino masses at the two-loop level. We find that it is rather difficult to find overlapping regions allowed by the vacuum stability and diphoton enhancement constraints. As a consequence, it is almost inevitable to introduce new ingredients into the model in order to resolve these two issues simultaneously.
Gerrard, Andrew; Lanzerotti, Louis; Gkioulidou, Matina; Mitchell, Donald; Manweiler, Jerry; Bortnik, Jacob; Keika, Kunihiro
2014-01-01
H-ion (∼45 keV to ∼600 keV), He-ion (∼65 keV to ∼520 keV), and O-ion (∼140 keV to ∼1130 keV) integral flux measurements, from the Radiation Belt Storm Probe Ion Composition Experiment (RBSPICE) instrument aboard the Van Allen Probes spacecraft B, are reported. These abundance data form a cohesive picture of ring current ions during the first 9 months of measurements. Furthermore, the data presented herein are used to show injection characteristics via the He-ion/H-ion abundance ratio and the O-ion/H-ion abundance ratio. Of unique interest to ring current dynamics are the spatial-temporal decay characteristics of the two injected populations. We observe that He-ions decay more quickly at lower L shells, on the order of ∼0.8 day at L shells of 3–4, and decay more slowly at higher L shells, on the order of ∼1.7 days at L shells of 5–6. Conversely, O-ions decay very rapidly (∼1.5 h) across all L shells. The He-ion decay times are consistent with previously measured and calculated lifetimes associated with charge exchange. The O-ion decay time is much faster than predicted and is attributed to the inclusion of higher-energy (>500 keV) O-ions in our decay rate estimation. We note that these measurements demonstrate a compelling need for calculation of high-energy O-ion loss rates, which have not been adequately studied in the literature to date. Key Points: We report initial observations of ring current ions. We show that He-ion decay rates are consistent with theory. We show that O-ions with energies greater than 500 keV decay very rapidly. PMID:26167435
Tracing nitrogen accumulation in decaying wood and examining its impact on wood decomposition rate
NASA Astrophysics Data System (ADS)
Rinne, Katja T.; Rajala, Tiina; Peltoniemi, Krista; Chen, Janet; Smolander, Aino; Mäkipää, Raisa
2016-04-01
Decomposition of dead wood, which is controlled primarily by fungi, is important for the ecosystem carbon cycle and potentially has a significant role in nitrogen fixation via diazotrophs. Nitrogen content has been found to increase with advancing wood decay in several studies; however, the importance of this increase to decay rate and the sources of external nitrogen remain unclear. Improved knowledge of the temporal dynamics of wood decomposition rate and nitrogen accumulation in wood, as well as the drivers of the two processes, would be important for carbon and nitrogen models dealing with ecosystem responses to climate change. To tackle these questions we applied several analytical methods on Norway spruce logs from Lapinjärvi, Finland. We incubated wood samples (density classes from I to V, n=49) at different temperatures (from 8.5 °C to 41 °C, n=7). After a common seven-day pre-incubation period at 14.5 °C, the bottles were incubated six days at their designated temperature prior to CO2 flux measurements with GC to determine the decomposition rate. N2 fixation was measured with the acetylene reduction assay after a further 48-hour incubation. In addition, fungal DNA (MiSeq Illumina), δ15N and N% composition of wood were determined for samples incubated at 14.5 °C. The radiocarbon method was applied to obtain an age distribution for the density classes. The asymbiotic N2 fixation rate was clearly dependent on the stage of wood decay and increased from stage I to stage IV but was substantially reduced in stage V. CO2 production was highest in the intermediate decay stages (classes II-IV). Both N2 fixation and CO2 production were highly temperature sensitive, having optima at 25 °C and 31 °C, respectively. We calculated the variation of annual levels of respiration and N2 fixation per hectare for the study site, and used the latter data together with the 14C results to determine the amount of N2 accumulated in wood over time. The proportion of total nitrogen in wood
Indoor acrolein emission and decay rates resulting from domestic cooking events
NASA Astrophysics Data System (ADS)
Seaman, Vincent Y.; Bennett, Deborah H.; Cahill, Thomas M.
2009-12-01
Acrolein (2-propenal) is a common constituent of both indoor and outdoor air, can exacerbate asthma in children, and may contribute to other chronic lung diseases. Recent studies have found high indoor levels of acrolein and other carbonyls compared to outdoor ambient concentrations. Heated cooking oils produce considerable amounts of acrolein, thus cooking is likely an important source of indoor acrolein. A series of cooking experiments was conducted to determine the emission rates of acrolein and other volatile carbonyls for different types of cooking oils (canola, soybean, corn and olive oils) and for deep-frying different food items. Similar concentrations and emission rates of carbonyls were found when different vegetable oils were used to deep-fry the same food product. The food item being deep-fried was generally not a significant source of carbonyls compared to the cooking oil. The oil cooking events resulted in high concentrations of acrolein, in the range of 26.4–64.5 μg m−3. These concentrations exceed all the chronic regulatory exposure limits and many of the acute exposure limits. The air exchange rate and the decay rate of the carbonyls were monitored to estimate the half-life of the carbonyls. The half-life for acrolein was 14.4 ± 2.6 h, which indicates that indoor acrolein concentrations can persist for a considerable time after cooking in poorly ventilated homes.
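A half-life like the 14.4 h quoted above comes from a first-order decay fit. A minimal sketch of that bookkeeping on synthetic data (the rate constant below is illustrative, chosen to reproduce a ~14.4 h half-life; a real indoor analysis would also fold in the measured air-exchange rate):

```python
import math

def first_order_k(times_h, concs):
    """Least-squares slope of ln(C) versus t for C(t) = C0 * exp(-k t).
    Returns the total decay constant k in 1/hour."""
    n = len(times_h)
    logs = [math.log(c) for c in concs]
    tbar = sum(times_h) / n
    ybar = sum(logs) / n
    num = sum((t - tbar) * (y - ybar) for t, y in zip(times_h, logs))
    den = sum((t - tbar) ** 2 for t in times_h)
    return -num / den

# synthetic measurements: k = 0.048 / h, i.e. a half-life of ln(2)/0.048 ~ 14.4 h
times = [0.0, 2.0, 4.0, 6.0, 8.0, 10.0]
concs = [50.0 * math.exp(-0.048 * t) for t in times]
half_life = math.log(2.0) / first_order_k(times, concs)
```

The fitted k here lumps together chemical loss, deposition, and ventilation; separating those terms requires the independently measured air-exchange rate.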
How ubiquitous are aftershock sequences driven by high pressure fluids at depth?
NASA Astrophysics Data System (ADS)
Miller, S. A.
2008-12-01
Strong evidence suggests that two earthquake-aftershock episodes, the 2004 Niigata (Japan) sequence and the 1997 Umbria-Marche (Italy) sequence, were driven by high pressure fluids at depth. Since Niigata was in a compressional environment and Umbria-Marche in extension, a question arises about whether such a mechanism is more general than just these two cases. Although it is not clear by what mechanism fluids of sufficient volume can be trapped in the lower crust, if such pockets of high pressure fluids exist, then they must necessarily be expelled when a large earthquake provides the hydraulic connection to the hydrostatically pressured free surface. In this talk, aftershock data is analyzed for a number of different earthquakes in a variety of tectonic settings, including 1992 Landers, 1994 Northridge, and the 2001 Bhuj earthquakes. Comparisons are made between model results of the evolved fluid pressure state from a high pressure source at depth, and the spatio-temporal distributions of aftershocks. The data is further analyzed and compared with model results for differences in the rate of aftershocks (p-value in Omori's Law) and their dependence on the orientation of the mainshock relative to the prevailing regional stress field.
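Since p-values figure directly in the comparison above, here is a minimal sketch of simulating a modified Omori sequence with rate λ(t) = K/(t + c)^p and recovering p by profile maximum likelihood (illustrative parameter values; c is held fixed at its true value, which real analyses cannot do):

```python
import math
import random

def simulate_omori_times(n, c, p, t_max, rng):
    """Draw n event times on (0, t_max] from lambda(t) = K/(t + c)^p
    by inverse-CDF sampling (p != 1)."""
    a = c ** (1.0 - p)
    b = (t_max + c) ** (1.0 - p)
    return sorted((a - rng.random() * (a - b)) ** (1.0 / (1.0 - p)) - c
                  for _ in range(n))

def fit_p(times, c, t_max, p_grid):
    """Profile MLE of the Omori p-value with c fixed; the productivity K is
    maximized out analytically for each candidate p."""
    n = len(times)
    s = sum(math.log(t + c) for t in times)
    best_p, best_ll = None, -math.inf
    for p in p_grid:
        if abs(p - 1.0) < 1e-9:
            integral = math.log((t_max + c) / c)
        else:
            integral = ((t_max + c) ** (1.0 - p) - c ** (1.0 - p)) / (1.0 - p)
        # log-likelihood of an inhomogeneous Poisson process, with K = n/integral
        ll = n * math.log(n / integral) - p * s - n
        if ll > best_ll:
            best_p, best_ll = p, ll
    return best_p
```

Stacking the residuals of such fits against fluid-diffusion model predictions is one way to make the comparison described in the abstract quantitative.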
Cooperative Lamb shift and the cooperative decay rate for an initially detuned phased state
Friedberg, Richard; Manassah, Jamal T.
2010-04-15
The cooperative Lamb shift (CLS) is hard to measure because in samples much larger than a resonant wavelength it is much smaller, for an initially prepared resonantly phased state, than the cooperative decay rate (CDR). We show, however, that if the phasing of the initial state is detuned so that the spatial wave vector is k_1 ≅ k_0 ± O(1/R) (where k_0 = ω_0/c, with ω_0 the resonant frequency), the CLS grows to 'giant' magnitudes, making it comparable to the CDR. Moreover, for certain controlled values of detuning, the initial CDR becomes small, so that the dynamical Lamb shift (DLS) can be measured over a considerable period of time.
Spontaneous decay rate and Casimir-Polder potential of an atom near a lithographed surface
NASA Astrophysics Data System (ADS)
Bennett, Robert
2015-08-01
Radiative corrections to an atom are calculated near a half-space that has arbitrarily shaped small depositions upon its surface. The method is based on calculation of the classical Green's function of the macroscopic Maxwell equations near an arbitrarily perturbed half-space using a Born-series expansion about the bare half-space Green's function. The formalism of macroscopic quantum electrodynamics is used to carry this over into the quantum picture. The broad utility of the calculated Green's function is demonstrated by using it to calculate two quantities: the spontaneous decay rate of an atom near a sharp surface feature and the Casimir-Polder potential of a finite grating deposited on a substrate. Qualitatively different behavior is found for the latter case where it is observed that the periodicity of the Casimir-Polder potential persists even outside the immediate vicinity of the grating.
Combined results on b-hadron production rates, lifetimes, oscillations and semileptonic decays
Willocq, Stephane
2000-08-02
Combined results on b-hadron lifetimes, b-hadron production rates, B_d^0–anti-B_d^0 and B_s^0–anti-B_s^0 oscillations, the decay width difference between the mass eigenstates of the B_s^0–anti-B_s^0 system, and the values of the CKM matrix elements |V_cb| and |V_ub| are obtained from published and preliminary measurements available in Summer 1999 from the ALEPH, CDF, DELPHI, L3, OPAL and SLD Collaborations.
Rate of Temperature Decay in Human Muscle Following 3 MHz Ultrasound: The Stretching Window Revealed
Draper, David O.; Ricard, Mark D.
1995-01-01
Researchers have determined that when therapeutic ultrasound vigorously heats connective tissue, it can be effective in increasing extensibility of collagen affected by scar tissue. These findings give credence to the use of continuous thermal ultrasound to heat tissue before stretching, exercise, or friction massage in an effort to decrease joint contractures and increase range of motion. Before our investigation, it was not known how long following an ultrasound treatment the tissue will remain at a vigorous heating level (>3°C). We conducted this study to determine the rate of temperature decay following 3 MHz ultrasound, in order to determine the time period of optimal stretching. Twenty subjects had a 23-gauge hypodermic needle microprobe inserted 1.2 cm deep into the medial aspect of their anesthetized triceps surae muscle. Subjects then received a 3 MHz ultrasound treatment at 1.5 W/cm2 until the tissue temperature was increased at least 5°C. The mean baseline temperature before each treatment was 33.8 ± 1.3°C, and it peaked at 39.1 ± 1.2°C from the ultrasound. Immediately following the treatment, we recorded the rate at which the temperature dropped at 30-second intervals. We ran a stepwise nonlinear regression analysis to predict temperature decay as a function of time following ultrasound treatment. We found a significant nonlinear relationship between time and temperature decay. The average time it took for the temperature to drop each degree as expressed in minutes and seconds was: 1°C = 1:20; 2°C = 3:22; 3°C = 5:50; 4°C = 9:13; 5°C = 14:55; 5.3°C = 18:00 (baseline). We conclude that under similar circumstances where the tissue temperature is raised 5°C, stretching will be effective, on average, for 3.3 minutes following an ultrasound treatment. To increase this stretching window, we suggest that stretching be applied during and immediately after ultrasound application. PMID:16558352
The effects of supramolecular assembly on exciton decay rates in organic semiconductors
NASA Astrophysics Data System (ADS)
Daniel, Clément; Makereel, François; Herz, Laura M.; Hoeben, Freek J. M.; Jonkheijm, Pascal; Schenning, Albertus P. H. J.; Meijer, E. W.; Friend, Richard H.; Silva, Carlos
2005-08-01
We present time-resolved photoluminescence measurements on two series of oligo-p-phenylenevinylene (OPV) materials that are functionalized with quadruple hydrogen-bonding groups. These form supramolecular assemblies with thermotropic reversibility. The morphology of the assemblies depends on the way that the oligomers are functionalized; monofunctionalized OPVs (MOPVs) form chiral, helical stacks while bifunctionalized OPVs (BOPVs) form less organized structures. These are therefore model systems to investigate the effects of supramolecular assembly and morphology, and the dependence of the radiative and nonradiative rates of π-conjugated materials on oligomer length. The purpose of this work is to use MOPV and BOPV derivatives as model systems to study the effect of intermolecular interactions on the molecular photophysics by comparing optical properties in the dissolved phase and the supramolecular assemblies. A simple photophysical analysis allows us to extract the intrinsic radiative and nonradiative decay rates and to unravel the consequences of interchromophore coupling with unprecedented detail. We find that interchromophore coupling strongly reduces both radiative and intrinsic nonradiative rates and that the effect is more pronounced in short oligomers.
Pressure Decay Testing Methodology for Quantifying Leak Rates of Full-Scale Docking System Seals
NASA Technical Reports Server (NTRS)
Dunlap, Patrick H., Jr.; Daniels, Christopher C.; Wasowski, Janice L.; Garafolo, Nicholas G.; Penney, Nicholas; Steinetz, Bruce M.
2010-01-01
NASA is developing a new docking system to support future space exploration missions to low-Earth orbit and the Moon. This system, called the Low Impact Docking System, is a mechanism designed to connect the Orion Crew Exploration Vehicle to the International Space Station, the lunar lander (Altair), and other future Constellation Project vehicles. NASA Glenn Research Center is playing a key role in developing the main interface seal for this docking system. This seal will be relatively large, with an outside diameter in the range of 54 to 58 in. (137 to 147 cm). As part of this effort, a new test apparatus has been designed, fabricated, and installed to measure leak rates of candidate full-scale seals under simulated thermal, vacuum, and engagement conditions. Using this test apparatus, a pressure decay testing and data processing methodology has been developed to quantify full-scale seal leak rates. Tests performed on untreated 54 in. diameter seals at room temperature in a fully compressed state resulted in leak rates lower than the requirement of less than 0.0025 lbm of air per day (0.0011 kg/day).
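A pressure decay test infers a mass leak rate from the pressure drop in a known reference volume via the ideal gas law. A minimal sketch of that conversion with made-up numbers (the volume, temperature, and pressures below are illustrative, not the actual test-rig values):

```python
R_AIR = 287.05  # specific gas constant of dry air, J/(kg*K)

def leak_rate_kg_per_day(p_start_pa, p_end_pa, duration_s, volume_m3, temp_k):
    """Mass leak rate from a pressure decay test, assuming ideal-gas air at
    constant volume and temperature."""
    mass_lost = (p_start_pa - p_end_pa) * volume_m3 / (R_AIR * temp_k)
    return mass_lost / duration_s * 86400.0

# illustrative: a 100 Pa drop over one day in a 0.05 m^3 cavity at 293 K
rate = leak_rate_kg_per_day(101425.0, 101325.0, 86400.0, 0.05, 293.0)
```

In practice the measured temperature history is folded in point by point, so that thermal drift in the cavity is not mistaken for leakage.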
Lee, Choon Weng; Ng, Angie Yee Fang; Bong, Chui Wei; Narayanan, Kumaran; Sim, Edmund Ui Hang; Ng, Ching Ching
2011-02-01
Using the size fractionation method, we measured the decay rates of Escherichia coli, Salmonella Typhi and Vibrio parahaemolyticus in the coastal waters of Peninsular Malaysia. The size fractions were total or unfiltered, <250 μm, <20 μm, <2 μm, <0.7 μm, <0.2 μm and <0.02 μm. We also carried out abiotic (inorganic nutrients) and biotic (bacterial abundance, production and protistan bacterivory) measurements at Port Dickson, Klang and Kuantan. Klang had the highest nutrient concentrations, whereas both bacterial production and protistan bacterivory rates were highest at Kuantan. We observed signs of protist-bacteria coupling via the following correlations: protistan bacterivory-bacterial production: r = 0.773, df = 11, p < 0.01; protist-bacteria: r = 0.586, df = 12, p < 0.05. However, none of the bacterial decay rates were correlated with the biotic variables measured. E. coli and Salmonella decay rates were generally higher in the larger fraction (>0.7 μm) than in the smaller fraction (<0.7 μm), suggesting the more important role played by protists. E. coli and Salmonella also decreased in the <0.02 μm fraction, which suggested that these non-halophilic bacteria did not survive well in seawater. In contrast, Vibrio grew well in seawater; there was usually an increase in Vibrio after one day of incubation. Our results confirmed that the decay or loss rates of E. coli did not match those of Vibrio, and also did not correlate with Salmonella decay rates. However, E. coli showed persistence, with decay rates generally lower than those of Salmonella. PMID:21146847
NASA Astrophysics Data System (ADS)
Trofymow, J. A.; Smyth, C.; Moore, T.; Prescott, C.; Titus, B.; Siltanen, M.; Visser, S.; Preston, C. M.; Nault, J.
2009-12-01
Litter decay in the early and mid phases of decomposition has been shown to be highly influenced by climate and substrate quality; however, factors affecting decay during the late, semi-stable phase are less well understood. The Canadian Intersite Decomposition Experiment (CIDET) was established in 1992 with the objective of providing data on the long-term rates of litter decomposition and nutrient mineralization for a range of forested ecoclimatic regions in Canada. Such data were needed to help verify models used for national C accounting, as well as to aid in the development of other soil C models. CIDET examined the annual decay, over a 12-year period, of 10 standard foliar litters and 2 wood substrates at 18 forested upland and 3 wetland sites ranging from the cool temperate to subarctic regions, a nearly 20 °C span in temperature. On a subset of sites and litter types, changes in litter C chemistry over time were also determined. Over the first 6 years, the C/N ratio and iron increased, and NMR showed an overall decline in O-alkyl C (carbohydrates) and an increase in alkyl, aromatic, phenolic, and carboxyl C. Proximate analysis showed that the acid unhydrolyzable residue (AUR) increases, but true lignin did not accumulate, in contrast to the conceptual ligno-cellulose model of decomposition. Litter decay during the first phase was related to initial litter quality (AUR and water-soluble extract) and winter precipitation, but not temperature, suggesting the importance of leaching during this phase. The decay rate “k” during the mid phase was related to temperature, initial litter quality (AUR and AUR/N), and summer precipitation, but not soil N. In most cases decay had approached an asymptote before the end of the experiment. Although annual temperature was the best single predictor for 12-year asymptotes, summer precipitation and forest floor pH and C/N ratio were the best set of combined predictors. The changes in the decay factors during different phases may explain some of the discrepancies in the
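The asymptote result can be made concrete with the standard limit-value decomposition model, in which litter mass decays toward a semi-stable fraction rather than to zero. The parameter values below are illustrative, not CIDET estimates:

```python
import math

def mass_remaining(t_years, k, limit):
    """Fraction of initial litter mass remaining under an asymptotic decay
    model: the decomposable part (1 - limit) decays at rate k per year,
    while a semi-stable fraction `limit` persists."""
    return limit + (1.0 - limit) * math.exp(-k * t_years)

# e.g. k = 0.3 /yr with a 20% stable fraction: mass approaches 0.2, not 0
```

Fitting `k` and `limit` separately is what lets climate predict the rate while litter chemistry and site properties predict the asymptote, as in the abstract's regression results.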
Comparison of Early Aftershocks for the 2004 Mid-Niigata and 2007 Noto Hanto Earthquakes in Japan
NASA Astrophysics Data System (ADS)
Mori, J.; Kano, Y.; Enescu, B.
2007-12-01
We compared the aftershock sequences of the similar 2004 Mid-Niigata (Mw 6.6) and 2007 Noto Hanto (Mw 6.7) earthquakes in central Japan. Although the two mainshocks had similar size, depth, and focal mechanisms, the numbers of aftershocks were quite different, with the Niigata mainshock producing a much stronger sequence. We examined the continuously recorded data from nearby Hi-net stations operated by the National Institute for Earth Science and Disaster Prevention (NIED) to identify the early aftershocks following both mainshocks. A 5 Hz high-pass filter was chosen to facilitate identification of the high-frequency arrivals from individual aftershocks. We used 6 stations distributed at distances within about 30 km. Aftershocks were identified by looking at large printouts of the continuous records for the six stations, and peak amplitudes were measured to calculate the magnitude. The magnitude determination using these high-pass filtered records was calibrated by using a set of 30 earthquakes that were also listed in the catalog of the Japan Meteorological Agency (JMA). We estimate that the completeness level of small aftershocks is about Mj 3.5. The event counts show that the aftershock sequences of the two earthquakes were quite similar for about the first 7 minutes. Following that time, the Niigata aftershocks clearly continue at a much higher rate, about 3 times the rate of the Noto earthquake. The time where the rates diverge corresponds to the occurrence of a Mj 6.3 earthquake in the Niigata sequence. This pattern can be seen in the plots for both the Mj ≥ 3.5 and Mj ≥ 4.0 events. Since there are more earthquakes for the Mj ≥ 3.5 data set, the time resolution is better. These results show an enhanced triggering of aftershocks for the Niigata sequence several minutes after the mainshock. The Niigata region is an area of hydrocarbon production with regions of high pressure fluids, and Sibson (2007) proposes that the swarm-like behavior is due to
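Given a completeness level like the Mj 3.5 estimated above, the frequency-magnitude behaviour of such sequences is usually summarized with the Aki (1965) maximum-likelihood b-value, including the standard binning correction. The sample magnitudes in the test are made up:

```python
import math

def b_value_mle(mags, m_c, dm=0.1):
    """Aki (1965) maximum-likelihood b-value for events at or above the
    completeness magnitude m_c; dm is the magnitude binning width."""
    above = [m for m in mags if m >= m_c]
    mean_m = sum(above) / len(above)
    return math.log10(math.e) / (mean_m - (m_c - dm / 2.0))
```

Because the estimate depends only on the mean magnitude above m_c, it degrades quickly if m_c is set below the true completeness level, which is why the calibration step described in the abstract matters.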
NASA Astrophysics Data System (ADS)
Moosavi Nejad, S. M.
2016-04-01
Basically, the energy distribution of bottom-flavored hadrons produced through polarized top quark decays t(↑) → W+ + b(→ Xb) is governed by the unpolarized rate and the polar and azimuthal correlation functions, which are related to the density matrix elements of the decay t(↑) → bW+. Here we present, for the first time, the analytical expressions for the O(αs) radiative corrections to the differential azimuthal decay rates of the partonic process t(↑) → b + W+ in two helicity systems, which are needed to study the azimuthal distribution of the energy spectrum of the hadrons produced in polarized top decays. These spin-momentum correlations between the top quark spin and its decay product momenta will allow detailed studies of the top decay mechanism. Our predictions of the hadron energy distributions also enable us to deepen our knowledge of the hadronization process and to test the universality and scaling violations of the bottom-flavored meson fragmentation functions.
NASA Astrophysics Data System (ADS)
Yang, Wenzheng; Ben-Zion, Yehuda
2009-05-01
Aftershock sequences are commonly observed but their properties vary from region to region. Ben-Zion and Lyakhovsky developed a solution for aftershock decay in a damage rheology model. The solution indicates that the productivity of aftershocks decreases with increasing value of a non-dimensional material parameter R, given by the ratio of the timescale for brittle deformation to the timescale for viscous relaxation. The parameter R is inversely proportional to the degree of seismic coupling and is expected to increase primarily with increasing temperature and also with the existence of sedimentary rocks at seismogenic depth. To test these predictions, we use aftershock sequences from several southern California regions. We first analyse properties of individual aftershock sequences generated by the 1992 Landers and 1987 Superstition Hills earthquakes. The results show that the ratio of aftershock productivities in these sequences, spanning four orders of event magnitudes, is similar to the ratio of the average heat flow in the two regions. To perform stronger statistical tests, we systematically analyse the average properties of stacked aftershock sequences in five regions. In each region, we consider events with magnitudes between 4.0 and 6.0 to be main shocks. For each main shock, we consider events to be aftershocks if they occur in the subsequent 50 d, within a circular region that scales with the magnitude of the main shock, and in the magnitude range between that of the main shock and 2 units lower. This procedure produces 28-196 aftershock sequences in each of the five regions. We stack the aftershock sequences in each region and analyse the properties of the stacked data. The results indicate that the productivities of the stacked sequences are inversely correlated with the heat flow and existence of deep sedimentary covers, in agreement with the damage model predictions. Using the observed ratios of aftershock productivities, along with simple expressions based on the
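The stacking selection rule described above (events within 50 days of the main shock, inside a magnitude-scaled circle, and within 2 magnitude units below the main shock) is straightforward to express; the catalog layout here is a hypothetical minimal one:

```python
import math

def select_aftershocks(catalog, main, radius_km, days=50.0):
    """Aftershocks of `main` under the stacking rule: within `days` after the
    main shock, inside a circle of `radius_km` (which should scale with the
    main-shock magnitude), and with magnitude between 2 units below the
    main shock and the main-shock magnitude."""
    picked = []
    for e in catalog:
        dt = e['t'] - main['t']
        dist = math.hypot(e['x'] - main['x'], e['y'] - main['y'])
        if (0.0 < dt <= days and dist <= radius_km
                and main['mag'] - 2.0 <= e['mag'] < main['mag']):
            picked.append(e)
    return picked
```

Stacking the per-main-shock selections in each region, aligned on the main-shock time, is what yields the averaged sequences whose productivities are compared against heat flow.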
Local near instantaneously dynamically triggered aftershocks of large earthquakes.
Fan, Wenyuan; Shearer, Peter M
2016-09-01
Aftershocks are often triggered by static- and/or dynamic-stress changes caused by mainshocks. The relative importance of the two triggering mechanisms is controversial at near-to-intermediate distances. We detected and located 48 previously unidentified large early aftershocks triggered by mainshocks with magnitudes between 7 and 8, within a few fault lengths (approximately 300 kilometers), during the times when high-amplitude surface waves arrive from the mainshock (less than 200 seconds). The observations indicate that near-to-intermediate-field dynamic triggering commonly exists and fundamentally promotes aftershock occurrence. The mainshocks and their nearby early aftershocks are located at major subduction zones and continental boundaries, and mainshocks with all types of faulting mechanisms (normal, reverse, and strike-slip) can trigger early aftershocks. PMID:27609887
Litvinov, Yu A; Bosch, F; Geissel, H; Kurcewicz, J; Patyk, Z; Winckler, N; Batist, L; Beckert, K; Boutin, D; Brandau, C; Chen, L; Dimopoulou, C; Fabian, B; Faestermann, T; Fragner, A; Grigorenko, L; Haettner, E; Hess, S; Kienle, P; Knöbel, R; Kozhuharov, C; Litvinov, S A; Maier, L; Mazzocco, M; Montes, F; Münzenberg, G; Musumarra, A; Nociforo, C; Nolden, F; Pfützner, M; Plass, W R; Prochazka, A; Reda, R; Reuschl, R; Scheidenberger, C; Steck, M; Stöhlker, T; Torilov, S; Trassinelli, M; Sun, B; Weick, H; Winkler, M
2007-12-31
We report on the first measurement of the beta+ and orbital electron-capture decay rates of 140Pr nuclei with the simplest electron configurations: bare nuclei, hydrogenlike, and heliumlike ions. The measured electron-capture decay constant of hydrogenlike 140Pr58+ ions is about 50% larger than that of heliumlike 140Pr57+ ions. Moreover, 140Pr ions with one bound electron decay faster than neutral 140Pr0+ atoms with 59 electrons. To explain this peculiar observation one has to take into account the conservation of the total angular momentum, since only particular spin orientations of the nucleus and of the captured electron can contribute to the allowed decay. PMID:18233571
NASA Astrophysics Data System (ADS)
Rogers, Blake A.
This thesis investigates the design of interplanetary missions for the continual habitation of Mars via Earth-Mars cyclers and for the detection of variations in nuclear decay rates due to solar influences. Several cycler concepts have been proposed to provide safe and comfortable quarters for astronauts traveling between the Earth and Mars. However, no literature has appeared to show how these massive vehicles might be placed into their cycler trajectories. Trajectories are designed that use either Vinfinity leveraging or low thrust to establish cycler vehicles in their desired orbits. In the cycler trajectory cases considered, the use of Vinfinity leveraging or low thrust substantially reduces the total propellant needed to achieve the cycler orbit compared to direct orbit insertion. In the case of the classic Aldrin cycler, the propellant savings due to Vinfinity leveraging can be as large as a 24 metric ton reduction for a cycler vehicle with a dry mass of 75 metric tons, and an additional 111 metric ton reduction by instead using low thrust. The two-synodic period cyclers considered benefit less from Vinfinity leveraging, but have a smaller total propellant mass due to their lower approach velocities at Earth and Mars. It turns out that, for low-thrust establishment, the propellant required is approximately the same for each of the cycler trajectories. The Aldrin cycler has been proposed as a transportation system for human missions between Earth and Mars. However, the hyperbolic excess velocity values at the planetary encounters for these orbits are infeasibly large, especially at Mars. In a new version of the Aldrin cycler, low thrust is used in the interplanetary trajectories to reduce the encounter velocities. Reducing the encounter velocities at both planets reduces the propellant needed by the taxis (astronauts use these taxis to transfer between the planetary surfaces and the cycler vehicle) to perform hyperbolic rendezvous. While the propellant
NASA Astrophysics Data System (ADS)
Zhong, Q.; Shi, B.
2011-12-01
The Ms 7.8 earthquake that struck Tangshan, China, on 28 July 1976 caused at least 240,000 deaths. The mainshock was followed by its two largest aftershocks: an Ms 7.1 event about 15 hours after the mainshock and an Ms 6.9 event on 15 November. The aftershock sequence continues to the present day, keeping the regional seismicity rate around the Tangshan main fault much higher than before the main event. If these aftershocks are included in the local main-event catalog for PSHA calculations, the resulting seismic hazard will be overestimated in this region and underestimated elsewhere. However, it is always difficult for seismologists to accurately determine the duration of an aftershock sequence and to separate aftershocks from the main-event catalog. In this study, using theoretical inference and the empirical relation given by Dieterich, we derive a plausible duration for the aftershock sequence of the Ms 7.8 Tangshan earthquake. A log-log regression based on the empirical Omori relation gives an aftershock duration of about 120 years. In Dieterich's approach, the aftershock duration is a function of the remote shear stressing rate, the normal stress acting on the fault plane, and the fault frictional constitutive parameters. In general, the shear stressing rate can be estimated in three ways: 1. It can be written as a function of the static stress drop and a mean earthquake recurrence time; in this case the aftershock duration is about 70-100 years, although the recurrence time carries a great deal of uncertainty. 2. Ziv and Rubin derived a general relation between shear stressing rate, fault slip speed, and fault width under the assumption that aftershock duration does not scale with mainshock magnitude; from this consideration, the aftershock duration is about 80 years. 3. The shear stressing rate can also be
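The duration estimates above rest on the modified Omori law, n(t) = K/(t + c)^p, with the sequence taken to end when the modeled rate falls to the regional background rate. A minimal sketch; the parameter values (K, c, p, background rate) are illustrative assumptions, not the study's fitted values:

```python
def omori_rate(t, K, c, p):
    """Modified Omori law: aftershock rate (events/day) at t days after the mainshock."""
    return K / (t + c) ** p

def aftershock_duration(K, c, p, background_rate):
    """Days until the Omori rate falls to the background rate:
    solve K/(t+c)^p = r_bg  =>  t = (K/r_bg)^(1/p) - c."""
    return (K / background_rate) ** (1.0 / p) - c

# Illustrative values only: K = 200 events/day, c = 0.5 d, p = 1.1,
# and an assumed background rate of 0.01 events/day in the source region.
duration_days = aftershock_duration(200.0, 0.5, 1.1, 0.01)
print(f"{duration_days / 365.25:.0f} years")
```

With a lower assumed background rate the implied duration grows rapidly, which is why century-scale sequences such as Tangshan's are plausible under this model.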
Minimally allowed neutrinoless double beta decay rates within an anarchical framework
NASA Astrophysics Data System (ADS)
Jenkins, James
2009-06-01
Neutrinoless double beta decay (ββ0ν) is the only realistic probe of the Majorana nature of the neutrino. In the standard picture, its rate is proportional to mee, the e-e element of the Majorana neutrino mass matrix in the flavor basis. I explore minimally allowed mee values within the framework of mass matrix anarchy where neutrino parameters are defined statistically at low energies. Distributions of mixing angles are well defined by the Haar integration measure, but masses are dependent on arbitrary weighting functions and boundary conditions. I survey the integration measure parameter space and find that for sufficiently convergent weightings, mee is constrained between (0.01-0.4) eV at 90% confidence. Constraints from neutrino mixing data lower these bounds. Singular integration measures allow for arbitrarily small mee values with the remaining elements ill-defined, but this condition constrains the flavor structure of the model's ultraviolet completion. ββ0ν bounds below mee ~ 5×10⁻³ eV should indicate symmetry in the lepton sector, new light degrees of freedom, or the Dirac nature of the neutrino.
Precision measurement of the decay rate of the negative positronium ion Ps⁻
Ceeh, Hubert; Hugenschmidt, Christoph; Schreckenbach, Klaus; Gaertner, Stefan A.; Thirolf, Peter G.; Fleischer, Frank; Schwalm, Dirk
2011-12-15
The negative positronium ion Ps⁻ is a bound system consisting of two electrons and a positron. Its three constituents are pointlike leptonic particles of equal mass, which are subject only to the electroweak and gravitational force. Hence, Ps⁻ is an ideal object in which to study the quantum mechanics of a three-body system. The ground state of Ps⁻ is stable against dissociation but unstable against annihilation into photons. We report here on a precise measurement of the Ps⁻ ground-state decay rate Γ, which was carried out at the high-intensity NEutron induced POsitron source MUniCh (NEPOMUC) at the research reactor FRM II in Garching. A value of Γ = 2.0875(50) ns⁻¹ was obtained, which is three times more precise than previous experiments and in agreement with the most recent theoretical predictions. The achieved experimental precision is at the level of the leading corrections in the theoretical predictions.
Contributions of the W-boson propagator to the μ and τ leptonic decay rates
NASA Astrophysics Data System (ADS)
Ferroglia, Andrea; Greub, Christoph; Sirlin, Alberto; Zhang, Zhibai
2013-08-01
We derive closed expressions and useful expansions for the contributions of the tree-level W-boson propagator to the muon and τ leptonic decay rates. Calling M and m the masses of the initial and final charged leptons, our results in the limit m = 0 are valid to all orders in M²/M_W². In the terms of O(m_j²/M_W²) (m_j = M, m), our leading corrections, of O(M²/M_W²), agree with the canonical value (3/5)M²/M_W², while the coefficient of our subleading contributions, of O(m²/M_W²), differs from that reported in the recent literature. A possible explanation of the discrepancy is presented. The numerical effect of the O(m_j²/M_W²) corrections is briefly discussed. A general expression, valid for arbitrary values of M_W, M, and m in the range M_W > M > m, is given in the Appendix. The paper also contains a review of the traditional definition and evaluation of the Fermi constant.
Using the Inflection Points and Rates of Growth and Decay to Predict Levels of Solar Activity
NASA Technical Reports Server (NTRS)
Wilson, Robert M.; Hathaway, David H.
2008-01-01
The ascending and descending inflection points and rates of growth and decay at specific times during the sunspot cycle are examined as predictors of future activity. On average, the ascending inflection point occurs about 1-2 yr after sunspot minimum amplitude (Rm) and the descending inflection point about 6-7 yr after Rm. The ascending inflection point and the inferred slope there (including the 12-mo moving average (12-mma) of ΔR, the month-to-month change in the smoothed monthly mean sunspot number R) provide strong indications of the expected size of the ongoing cycle's sunspot maximum amplitude (RM), while the descending inflection point appears to indicate the expected length of the ongoing cycle. The value of the 12-mma of ΔR at elapsed time T = 27 mo past the epoch of RM (E(RM)) seems to provide a strong indication of the expected size of Rm for the following cycle. The expected Rm for cycle 24 is 7.6 +/- 4.4 (the 90-percent prediction interval), occurring before September 2008. Evidence is also presented for secular rises in selected cycle-related parameters and for preferential grouping of sunspot cycles by amplitude and/or period.
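The 12-mma of ΔR used above is a straightforward transformation of a monthly sunspot series; a minimal sketch (trailing-window averaging is an implementation choice, not necessarily the authors'):

```python
def delta_r(R):
    """Month-to-month change in the smoothed monthly mean sunspot number R."""
    return [R[i + 1] - R[i] for i in range(len(R) - 1)]

def moving_average(x, window=12):
    """Simple trailing moving average over `window` months."""
    return [sum(x[i:i + window]) / window for i in range(len(x) - window + 1)]

# The ascending/descending inflection points correspond to the extrema of
# the smoothed Delta-R series: the maximum on the rise, the minimum on the decline.
def inflection_indices(smoothed_dR):
    return smoothed_dR.index(max(smoothed_dR)), smoothed_dR.index(min(smoothed_dR))
```

Applied to a real smoothed sunspot record, `inflection_indices` would return the months of steepest growth and steepest decay of the cycle.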
Probing Anderson localization of light via decay rate statistics in aperiodic Vogel spirals
NASA Astrophysics Data System (ADS)
Christofi, Aristi; Pinheiro, Felipe A.; Dal Negro, Luca
We systematically investigate the spectral properties of different types of two-dimensional aperiodic Vogel spiral arrays of pointlike scatterers and three-dimensional metamaterials with Vogel spiral chirality using a rigorous Green's-function spectral method. We consider an efficient T-matrix approach to analyze multiple-scattering effects, including all scattering orders, and to understand localization properties through the statistics of the Green's matrix eigenvalues. Knowledge of the spectrum of the Green's matrix of multi-particle scattering systems provides important information on the character of light propagation and localization in chiral media with deterministic aperiodic geometry. In particular, we analyze for the first time the statistics of the eigenvalues and eigenvectors of the Green's matrix and extract the decay rates of the eigenmodes, their inverse participation ratio (IPR), the Wigner delay times, and their quality factors. We emphasize the unique properties of aperiodic Vogel spirals relative to the random scattering media investigated so far. This work was supported by the Army Research Laboratory under Cooperative Agreement Number W911NF-12-2-0023.
NASA Technical Reports Server (NTRS)
Kautz, Harold E.
1993-01-01
Lowest symmetric and lowest antisymmetric plate wave modes were excited and identified in broad-band acousto-ultrasonic (AU) signals collected from various high temperature composite materials. Group velocities have been determined for these nearly nondispersive modes. An algorithm has been developed and applied to determine phase velocities, and hence dispersion curves, for the frequency ranges of the broad-band pulses. It is demonstrated that these data are sensitive to changes in the various stiffness moduli of the materials, in agreement, by analogy, with the theoretical and experimental results of Tang and Henneke on fiber reinforced polymers. Diffuse field decay rates have been determined in the same specimen geometries and AU configuration as for the plate wave measurements. These decay rates are of value in assessing degradation such as matrix cracking in ceramic matrix composites. In addition, we verify that diffuse field decay rates respond to fiber/matrix interfacial shear strength and density in ceramic matrix composites. This work shows that velocity/stiffness and decay rate measurements can be obtained in the same set of AU experiments for characterizing materials and in specimens with geometries useful for mechanical measurements.
NASA Astrophysics Data System (ADS)
Chung, H. Y.; Leung, P. T.; Tsai, D. P.
2012-10-01
A comprehensive study is presented on the decay rates of excited molecules in the vicinity of a magnetodielectric material of spherical geometry via electrodynamic modeling. Both the models based on a driven-damped harmonic oscillator and on energy transfers will be applied so that the total decay rates can be rigorously decomposed into the radiative and the nonradiative rates. Clarifications of the equivalence of these two models for arbitrary geometry will be provided. Different possible orientations and locations of the molecule are studied with the molecule being placed near a spherical particle or a cavity. Among other results, TE modes are observed which can be manifested via nonradiative transfer from a tangential dipole within a small range of dissipation parameters set for the spherical particle. In addition, spectral analysis shows that decay rates at such a particle with small absorption are largely dominated by radiative transfer except at multipolar resonances when nonradiative transfer becomes prominent, and relatively unmodified decay is possible when negative refraction takes place.
Analysis of the 2012 Oct 27 Haida Gwaii Aftershock Sequence
NASA Astrophysics Data System (ADS)
Mulder, T.; Brillon, C.; Bentkowski, W.; White, M.; Rosenberger, A.; Rogers, G. C.; Vernon, F.; Kao, H.
2013-12-01
The magnitude 7.7 thrust earthquake that occurred on 2012 Oct 28 offshore of Haida Gwaii (formerly the Queen Charlotte Islands), in British Columbia, Canada, produced a rich and ongoing aftershock sequence. Ten months of aftershock events are determined from analyst-reviewed solutions and automatic detectors and locators. For automated solutions, rotating the waveforms and running P- and S-wave filters (Rosenberger, 2010) over them produced phase arrivals for an improved catalogue of aftershocks compared to using a traditional signal-to-noise-ratio detector on standard vertical and horizontal component seismograms. The automated aftershock locations from the rotated waveforms are compared to the automated locations from the standard vertical and horizontal waveforms and to analyst locations (which are generally M > 2.5). The best of the automated solutions are comparable in quality to analyst solutions and much more numerous, making this a viable method of processing extensive aftershock sequences. The aftershocks outline a region approximately 50 km wide and 100 km long, lying in two parallel bands. Most of the aftershocks are not on the rupture surface but in the overlying or underlying plates. It is thought that this earthquake represents the Pacific plate thrusting underneath the North America plate, with the rupture surface lying beneath the sedimentary Queen Charlotte terrace and terminating to the east in the vicinity of the Queen Charlotte fault. Due to the one-sided station distribution on land, depth trades off with distance offshore, resulting in poor depth determinations. However, using ocean bottom seismometers deployed early in the aftershock sequence, depth resolution was significantly improved. First-motion focal mechanisms for a portion of the aftershock sequence are compared
Thomas, Rozenn; Berdjeb, Lyria; Sime-Ngando, Télesphore; Jacquet, Stéphan
2011-03-01
We have investigated the ecology of viruses in Lake Bourget (France) from January to August 2008. Data were analysed for viral and bacterial abundance and production, viral decay, frequency of lysogenic cells, the contribution of bacteriophages to prokaryotic mortality and their potential influence on nutrient dynamics. Analyses and experiments were conducted on samples from the epilimnion (2 m) and the hypolimnion (50 m), taken at the reference site of the lake. The abundance of virus-like particles (VLP) varied from 3.4 × 10⁷ to 8.2 × 10⁷ VLP ml⁻¹, with the highest numbers and virus-to-bacterium ratio (VBR = 69) recorded in winter. Viral production varied from 3.2 × 10⁴ VLP ml⁻¹ h⁻¹ (July) to 2 × 10⁶ VLP ml⁻¹ h⁻¹ (February and April), and production was lower in the hypolimnion. Viral decay rate reached 0.12-0.15 day⁻¹, and this parameter varied greatly with sampling date and methodology (i.e. KCN versus filtration). Using transmission electron microscopy (TEM) analysis, viral lysis was responsible for 0% (January) to 71% (February) of bacterial mortality, while viral lysis varied between 0% (April) and 53% (January) per day when using a modified dilution approach. Calculated from viral production and burst size, the virus-induced bacterial mortality varied between 0% (January) and 68% (August). A weak relationship was found between the first two methods (TEM versus dilution approach). Interestingly, flow cytometry analysis performed on the dilution experiment samples revealed that the viral impact was mostly on high DNA content bacterial cells whereas grazing, varying between 8.3% (June) and 75.4% (April), was reflected in both HDNA and LDNA cells equally. The lysogenic fraction varied between 0% (spring/summer) and 62% (winter) of total bacterial abundance, and increased slightly with increasing amounts of mitomycin C added. High percentages of lysogenic cells were recorded when bacterial abundance and activity were the lowest
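The burst-size estimate of virus-induced bacterial mortality mentioned above is a simple ratio: viral production divided by burst size gives the lysed-cell rate, which is then scaled by the standing bacterial stock. A sketch with made-up numbers in the abstract's units (not the study's measurements):

```python
def virus_induced_mortality(viral_production, burst_size, bacterial_abundance):
    """Fraction of the bacterial standing stock lysed per day.
    viral_production: VLP ml^-1 h^-1; burst_size: VLP released per lysed cell;
    bacterial_abundance: cells ml^-1."""
    cells_lysed_per_day = viral_production * 24.0 / burst_size
    return cells_lysed_per_day / bacterial_abundance

# Illustrative inputs only: production 2e6 VLP ml^-1 h^-1 (the abstract's
# April/February peak), an assumed burst size of 25, and 4e6 cells ml^-1.
print(f"{virus_induced_mortality(2e6, 25, 4e6):.0%} per day")
```

The sensitivity of the result to the assumed burst size is one reason the three mortality methods (TEM, dilution, production/burst-size) disagree.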
Layden, B.; Cairns, Iver H.; Robinson, P. A.
2013-08-15
Electrostatic decay of Langmuir waves into Langmuir and ion sound waves (L→L′+S) and scattering of Langmuir waves off thermal ions (L+i→L′+i′, also called "nonlinear Landau damping") are important nonlinear weak-turbulence processes. The rates for these processes depend on the quadratic longitudinal response function α⁽²⁾ (or, equivalently, the quadratic longitudinal susceptibility χ⁽²⁾), which describes the second-order response of a plasma to electrostatic wave fields. Previous calculations of these rates for an unmagnetized Maxwellian plasma have relied upon an approximate form for α⁽²⁾ that is valid where two of the wave fields are fast (i.e., v_φ = ω/k ≫ V_e, where ω is the angular frequency, k is the wavenumber, and V_e is the electron thermal speed) and one is slow (v_φ ≪ V_e). Recently, an exact expression was derived for α⁽²⁾ that is valid for any phase speeds of the three waves in an unmagnetized Maxwellian plasma. Here, this exact α⁽²⁾ is applied to the calculation of the three-dimensional rates for electrostatic decay and scattering off thermal ions, and the resulting exact rates are compared with the approximate rates. The calculations are performed using previously derived three-dimensional rates for electrostatic decay given in terms of a general α⁽²⁾, and newly derived three-dimensional rates for scattering off thermal ions; the scattering rate is derived assuming a Maxwellian ion distribution, and both rates are derived assuming arc distributions for the wave spectra. For most space plasma conditions, the approximate rate is found to be accurate to better than 20%; however, for sufficiently low Langmuir phase speeds (v_φ/V_e ≈ 3) appropriate to some spatial domains of the foreshock regions of planetary bow shocks and type II solar radio bursts, the use of the exact rate may be necessary for accurate calculations. The relative rates of electrostatic decay
The Mw 8.1 2014 Iquique, Chile, seismic sequence: a tale of foreshocks and aftershocks
NASA Astrophysics Data System (ADS)
Cesca, S.; Grigoli, F.; Heimann, S.; Dahm, T.; Kriegerowski, M.; Sobiesiak, M.; Tassara, C.; Olcay, M.
2016-03-01
The 2014 April 1, Mw 8.1 Iquique (Chile) earthquake struck in the Northern Chile seismic gap. With a rupture length of less than 200 km, it left unbroken large segments of the former gap. Early studies were able to model the main rupture features, but the results are ambiguous with respect to the role of aseismic slip and leave open questions on the remaining hazard at the Northern Chile gap. A striking observation of the 2014 earthquake has been its extensive preparation phase, with more than 1300 events with magnitude above ML 3 occurring during the 15 months preceding the main shock. Increasing seismicity rates and observed peak magnitudes accompanied the last three weeks before the main shock. Thanks to the large data sets of regional recordings, we assess the precursor activity, compare foreshocks and aftershocks, and model rupture preparation and rupture effects. To tackle inversion challenges for moderate events with an asymmetric network geometry, we use full waveform techniques to locate events, map the seismicity rate and derive source parameters, obtaining moment tensors for more than 300 events (magnitudes Mw 4.0-8.1) in the period 2013 January 1-2014 April 30. This unique data set of fore- and aftershocks is investigated to distinguish rupture process models and models of strain and stress rotation during an earthquake. Results indicate that the spatial distribution of foreshocks delineated the shallower part of the rupture areas of the main shock and its largest aftershock, well matching the spatial extension of the aftershock cloud. Most moment tensors correspond to almost pure double couple thrust mechanisms, consistent with the slab orientation. Whereas no significant differences are observed among thrust mechanisms in different areas, or between thrust foreshocks and aftershocks, the early aftershock sequence is characterized by the presence of normal fault mechanisms, striking parallel to the trench but dipping westward. These events likely occurred
Rates of decay to non homogeneous Timoshenko model with tip body
NASA Astrophysics Data System (ADS)
Muñoz Rivera, Jaime E.; Ávila, Andrés I.
2015-05-01
We consider the uniform stabilization of a hybrid elastic model consisting of a Timoshenko beam and a tip load at the free end of the beam. Our main result proves that the semigroup e^{At} associated with this model is not exponentially stable. Moreover, we prove that the semigroup decays polynomially to zero as t^{-1/2}. When the damping mechanism is effective only on the boundary of the rotational angle, the solution also decays polynomially as t^{-1/2} provided the wave speeds are equal. Otherwise it decays as t^{-1/4} for any initial data taken in D(A).
Decay rates of Gaussian-type I-balls and Bose-enhancement effects in 3+1 dimensions
Kawasaki, Masahiro; Yamada, Masaki
2014-02-01
I-balls/oscillons are long-lived spatially localized lumps of a scalar field which may be formed after inflation. In a scalar field theory with a monomial potential that is nearly, but shallower than, quadratic, which is motivated by chaotic inflationary models and supersymmetric theories, the scalar field configuration of I-balls is approximately Gaussian. If the I-ball interacts with another scalar field, the I-ball eventually decays into radiation. Recently, it was pointed out that the decay rate of I-balls increases exponentially through the effects of Bose enhancement under some conditions, and a non-perturbative method to compute the exponential growth rate has been derived. In this paper, we apply the method to the Gaussian-type I-ball in 3+1 dimensions assuming spherical symmetry, and calculate the partial decay rates into partial waves, labelled by the angular momentum of the daughter particles. We reveal the conditions under which the I-ball decays exponentially, which are found to depend on the mass and angular momentum of the daughter particles and also to be affected by the quantum uncertainty in their momentum.
The effect of restoration of broken SU(4) symmetry on 2νβ⁻β⁻ decay rates
NASA Astrophysics Data System (ADS)
Ünlü, Serdar; Çakmak, Neçla
2015-07-01
The effect of the restoration of SU(4) symmetry violations stemming from the mean-field approximation on the 2νβ⁻β⁻ decay amplitudes and half-lives for the ⁷⁶Ge → ⁷⁶Se, ⁸²Se → ⁸²Kr, ⁹⁶Zr → ⁹⁶Mo and ¹⁰⁰Mo → ¹⁰⁰Ru decay systems is investigated within the framework of the proton-neutron quasi-particle random phase approximation (pnQRPA) method. In this respect, the broken SU(4) symmetry of the central quasi-particle mean-field term is restored using Pyatov's restoration method. To see the influence of restoration on the stability of the nuclear matrix element, its variation with the particle-particle strength parameter is computed with and without restoration. The calculated decay rates with restoration are compared with schematic and shell-model estimates.
DNA decay rate in papyri and human remains from Egyptian archaeological sites.
Marota, Isolina; Basile, Corrado; Ubaldi, Massimo; Rollo, Franco
2002-04-01
The writing sheets made with strips from the stem (caulis) of papyri (Cyperus papyrus) are one of the most ingenious products of ancient technology. We extracted DNA from samples of modern papyri varying in age from 0-100 years BP and from ancient specimens from Egypt, with an age-span from 1,300-3,200 years BP. The copy number of the plant chloroplast DNA in the sheets was determined using a competitive PCR system designed on the basis of a short (90 bp) tract of the chloroplast's ribulose bisphosphate carboxylase large subunit (rbcL) gene sequence. The results allowed us to establish that the DNA half-life in papyri is about 19-24 years. This means that the last DNA fragments will vanish within no more than 532-672 years of the sheets being manufactured. In a parallel investigation, we checked the archaeological specimens for the presence of residual DNA and determined the extent of racemization of aspartic acid (Asp) in both modern and ancient specimens, as a previous report (Poinar et al. [1996], Science 272:864-866) showed that racemization of aspartic acid and DNA decay are linked. The results confirmed the complete loss of authentic DNA, even in the less ancient (8th century AD) papyri. On the other hand, when the regression for Asp racemization rates in papyri was compared with that for human and animal remains from Egyptian archaeological sites, it proved, quite surprisingly, that the regressions are virtually identical. Our study provides an indirect argument against the reliability of claims about the recovery of authentic DNA from Egyptian mummies and bone remains. PMID:11920366
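The half-life figures above translate into a survival bound via first-order decay: starting from N₀ template copies, fewer than one copy remains after roughly T_half × log₂(N₀) years. A sketch; the initial copy number is an assumption chosen to be consistent with the abstract's 532-672 year range, not a measured value:

```python
from math import log2

def survival_time(n0_copies, half_life_years):
    """Years until fewer than one template copy remains, assuming
    first-order (exponential) DNA decay with the given half-life."""
    return half_life_years * log2(n0_copies)

# With ~2**28 (about 2.7e8) initial chloroplast rbcL copies per sheet
# (an assumed figure consistent with the abstract's range):
lower = survival_time(2 ** 28, 19)  # 19-year half-life
upper = survival_time(2 ** 28, 24)  # 24-year half-life
```

Note how weakly the bound depends on the starting amount: a hundredfold change in N₀ shifts the survival time by only log₂(100) ≈ 6.6 half-lives.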